Science news from MIT (Massachusetts Institute of Technology)

Here you will find the latest daily science news from MIT (Massachusetts Institute of Technology).

MIT News - School of Science
Researchers present bold ideas for AI at MIT Generative AI Impact Consortium kickoff event

Presentations targeted high-impact intersections of AI and other areas, such as health care, business, and education.


Launched in February of this year, the MIT Generative AI Impact Consortium (MGAIC), a presidential initiative led by MIT’s Office of Innovation and Strategy and administered by the MIT Stephen A. Schwarzman College of Computing, issued a call for proposals, inviting researchers from across MIT to submit ideas for innovative projects studying high-impact uses of generative AI models.

The call received 180 submissions from nearly 250 faculty members, spanning all of MIT’s five schools and the college. The overwhelming response across the Institute exemplifies the growing interest in AI and follows in the wake of MIT’s Generative AI Week and call for impact papers. Fifty-five proposals were selected for MGAIC’s inaugural seed grants, with several more selected to be funded by the consortium’s founding company members.

Over 30 funding recipients presented their proposals to the greater MIT community at a kickoff event on May 13. Anantha P. Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and head of the consortium, welcomed the attendees and thanked the consortium’s founding industry members.

“The amazing response to our call for proposals is an incredible testament to the energy and creativity that MGAIC has sparked at MIT. We are especially grateful to our founding members, whose support and vision helped bring this endeavor to life,” said Chandrakasan. “One of the things that has been most remarkable about MGAIC is that this is a truly cross-Institute initiative. Deans from all five schools and the college collaborated in shaping and implementing it.”

Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management and co-faculty director of the consortium with Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), emceed the afternoon of five-minute lightning presentations.

Presentation highlights include:

“AI-Driven Tutors and Open Datasets for Early Literacy Education,” presented by Ola Ozernov-Palchik, a research scientist at the McGovern Institute for Brain Research, proposed refining AI tutors for pK-7 students to help reduce literacy disparities.

“Developing jam_bots: Real-Time Collaborative Agents for Live Human-AI Musical Improvisation,” presented by Anna Huang, assistant professor of music and assistant professor of electrical engineering and computer science, and Joe Paradiso, the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, aims to enhance human-AI musical collaboration in real-time for live concert improvisation.

“GENIUS: GENerative Intelligence for Urban Sustainability,” presented by Norhan Bayomi, a postdoc at the MIT Environmental Solutions Initiative and a research assistant in the Urban Metabolism Group, aims to address the lack of a standardized approach for evaluating and benchmarking cities’ climate policies.

Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research, and statistics, who serves as co-chair of the GenAI Dean’s oversight group with Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, ended the event with closing remarks that emphasized “the readiness and eagerness of our community to lead in this space.”

“This is only the beginning,” she continued. “We are at the front edge of a historic moment — one where MIT has the opportunity, and the responsibility, to shape the future of generative AI with purpose, with excellence, and with care.”


Island rivers carve passageways through coral reefs

Research shows these channels allow seawater and nutrients to flow in and out, helping to maintain reef health over millions of years.


Volcanic islands, such as the islands of Hawaii and the Caribbean, are surrounded by coral reefs that encircle an island in a labyrinthine, living ring. A coral reef is punctured at points by reef passes — wide channels that cut through the coral and serve as conduits for ocean water and nutrients to filter in and out. These watery passageways provide circulation throughout a reef, helping to maintain the health of corals by flushing out freshwater and transporting key nutrients.

Now, MIT scientists have found that reef passes are shaped by island rivers. In a study appearing today in the journal Geophysical Research Letters, the team shows that the locations of reef passes along coral reefs line up with where rivers funnel out from an island’s coast.

Their findings provide the first quantitative evidence of rivers forming reef passes. Scientists and explorers had long speculated that this might be the case: Where a river on a volcanic island meets the coast, the freshwater and sediment it carries flow toward the reef, and a strong enough flow can tunnel into the surrounding coral. The idea, however, had never been quantitatively tested, until now.

“The results of this study help us to understand how the health of coral reefs depends on the islands they surround,” says study author Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT.

“A lot of discussion around rivers and their impact on reefs today has been negative because of human impact and the effects of agricultural practices,” adds lead author Megan Gillen, a graduate student in the MIT-WHOI Joint Program in Oceanography. “This study shows the potential long-term benefits rivers can have on reefs, which I hope reshapes the paradigm and highlights the natural state of rivers interacting with reefs.”

The study’s other co-author is Andrew Ashton of the Woods Hole Oceanographic Institution.

Drawing the lines

The new study is based on the team’s analysis of the Society Islands, a chain of islands in the South Pacific Ocean that includes Tahiti and Bora Bora. Gillen, who joined the MIT-WHOI program in 2020, was interested in exploring connections between coral reefs and the islands they surround. With limited options for on-site work during the Covid-19 pandemic, she and Perron looked to see what they could learn through satellite images and maps of island topography. They did a quick search using Google Earth and zeroed in on the Society Islands for their uniquely visible reef and island features.

“The islands in this chain have these iconic, beautiful reefs, and we kept noticing these reef passes that seemed to align with deeply embayed portions of the coastline,” Gillen says. “We started asking ourselves, is there a correlation here?”

Viewed from above, the coral reefs that circle some islands bear what look to be notches, like cracks that run straight through a ring. These breaks in the coral are reef passes — large channels that run tens of meters deep and can be wide enough for some boats to pass through. On first look, Gillen noticed that the most obvious reef passes seemed to line up with flooded river valleys — depressions in the coastline that have been eroded over time by island rivers that flow toward the ocean. She wondered whether and to what extent island rivers might shape reef passes.

“People have examined the flow through reef passes to understand how ocean waves and seawater circulate in and out of lagoons, but there have been no claims of how these passes are formed,” Gillen says. “Reef pass formation has been mentioned infrequently in the literature, and people haven’t explored it in depth.”

Reefs unraveled

To get a detailed view of the topography in and around the Society Islands, the team used data from the NASA Shuttle Radar Topography Mission — two radar antennae that flew aboard the space shuttle in 2000 and measured the topography across 80 percent of the Earth’s surface.

The researchers used the mission’s topographic data in the Society Islands to create a map of every drainage basin along the coast of each island, to get an idea of where major rivers flow or once flowed. They also marked the locations of every reef pass in the surrounding coral reefs. They then essentially “unraveled” each island’s coastline and reef into a straight line, and compared the locations of basins versus reef passes.
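
As a rough illustration of this kind of analysis (not the team’s actual method or data), the sketch below runs a simple permutation test on a hypothetical “unwrapped” coastline: it asks whether reef passes sit closer to river-basin outlets than randomly placed passes would. All positions are randomly generated placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical positions (in km) of river-basin outlets and reef passes
# along an island's "unwrapped" (straightened) coastline -- placeholders,
# not the Society Islands data.
coast_length_km = 60.0
basin_outlets = np.sort(rng.uniform(0, coast_length_km, 12))
reef_passes = np.sort(rng.uniform(0, coast_length_km, 10))

def mean_nearest_distance(passes, outlets):
    """Average distance from each reef pass to its nearest river outlet."""
    return np.mean([np.min(np.abs(outlets - p)) for p in passes])

observed = mean_nearest_distance(reef_passes, basin_outlets)

# Permutation test: would randomly placed passes sit this close to outlets?
null = np.array([
    mean_nearest_distance(
        rng.uniform(0, coast_length_km, reef_passes.size), basin_outlets)
    for _ in range(10_000)
])
p_value = np.mean(null <= observed)  # a small p would indicate alignment
print(f"observed mean pass-to-outlet distance: {observed:.2f} km")
print(f"permutation p-value: {p_value:.3f}")
```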

“Looking at the unwrapped shorelines, we find a significant correlation in the spatial relationship between these big river basins and where the passes line up,” Gillen says. “So we can say that statistically, the alignment of reef passes and large rivers does not seem random. The big rivers have a role in forming passes.”

As for how rivers shape the coral conduits, the team has two ideas, which they call, respectively, reef incision and reef encroachment. In reef incision, they propose that reef passes can form in times when the sea level is relatively low, such that the reef is exposed above the sea surface and a river can flow directly over the reef. The water and sediment carried by the river can then erode the coral, progressively carving a path through the reef.

When sea level is relatively higher, the team suspects a reef pass can still form, through reef encroachment. Coral reefs naturally live close to the water surface, where there is light and opportunity for photosynthesis. When sea levels rise, corals naturally grow upward and inward toward an island, to try to “catch up” to the water line.

“Reefs migrate toward the islands as sea levels rise, trying to keep pace with changing average sea level,” Gillen says.

However, part of the encroaching reef can end up in old river channels that were previously carved out by large rivers and that are lower than the rest of the island coastline. The corals in these river beds end up deeper than light can extend into the water column, and inevitably drown, leaving a gap in the form of a reef pass.

“We don’t think it’s an either/or situation,” Gillen says. “Reef incision occurs when sea levels fall, and reef encroachment happens when sea levels rise. Both mechanisms, occurring over dozens of cycles of sea-level rise and island evolution, are likely responsible for the formation and maintenance of reef passes over time.”

The team also looked to see whether there were differences in reef passes in older versus younger islands. They observed that younger islands were surrounded by more reef passes that were spaced closer together, versus older islands that had fewer reef passes that were farther apart.

As islands age, they subside, or sink, into the ocean, which reduces the amount of land that funnels rainwater into rivers. Eventually, rivers are too weak to keep the reef passes open, at which point, the ocean likely takes over, and incoming waves could act to close up some passes.

Gillen is exploring ideas for how rivers, or river-like flow, can be engineered to create paths through coral reefs in ways that would promote circulation and benefit reef health.

“Part of me wonders: If you had a more persistent flow, in places where you don’t naturally have rivers interacting with the reef, could that potentially be a way to increase health, by incorporating that river component back into the reef system?” Gillen says. “That’s something we’re thinking about.”

This research was supported, in part, by the WHOI Watson and Von Damm fellowships.


When Earth iced over, early life may have sheltered in meltwater ponds

Modern-day analogs in Antarctica reveal ponds teeming with life similar to early multicellular organisms.


When the Earth froze over, where did life shelter? MIT scientists say one refuge may have been pools of melted ice that dotted the planet’s icy surface.

In a study appearing today in Nature Communications, the researchers report that 635 million to 720 million years ago, during periods known as “Snowball Earth,” when much of the planet was covered in ice, some of our ancient cellular ancestors could have waited things out in meltwater ponds.

The scientists found that eukaryotes — complex cellular lifeforms that eventually evolved into the diverse multicellular life we see today — could have survived the global freeze by living in shallow pools of water. These small, watery oases may have persisted atop relatively shallow ice sheets present in equatorial regions. There, the ice surface could accumulate dark-colored dust and debris from below, enhancing its ability to melt into pools. At temperatures hovering around 0 degrees Celsius, the resulting meltwater ponds could have served as habitable environments for certain forms of early complex life.

The team drew its conclusions based on an analysis of modern-day meltwater ponds. Today in Antarctica, small pools of melted ice can be found along the margins of ice sheets. The conditions along these polar ice sheets are similar to what likely existed along ice sheets near the equator during Snowball Earth.

The researchers analyzed samples from a variety of meltwater ponds located on the McMurdo Ice Shelf in an area that was first described by members of Robert Falcon Scott's 1903 expedition as “dirty ice.” The MIT researchers discovered clear signatures of eukaryotic life in every pond. The communities of eukaryotes varied from pond to pond, revealing a surprising diversity of life across the setting. The team also found that salinity plays a key role in the kind of life a pond can host: Ponds that were more brackish or salty had more similar eukaryotic communities, which differed from those in ponds with fresher waters.

“We’ve shown that meltwater ponds are valid candidates for where early eukaryotes could have sheltered during these planet-wide glaciation events,” says lead author Fatima Husain, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “This shows us that diversity is present and possible in these sorts of settings. It’s really a story of life’s resilience.”

The study’s MIT co-authors include Schlumberger Professor of Geobiology Roger Summons and former postdoc Thomas Evans, along with Jasmin Millar of Cardiff University, Anne Jungblut at the Natural History Museum in London, and Ian Hawes of the University of Waikato in New Zealand.

Polar plunge

“Snowball Earth” is the colloquial term for periods of time in Earth history during which the planet iced over. It is often used as a reference to the two consecutive, multi-million-year glaciation events that took place during the Cryogenian Period, the interval geologists place between 635 and 720 million years ago. Whether the Earth was more of a hardened snowball or a softer “slushball” is still up for debate. But scientists are certain of one thing: Most of the planet was plunged into a deep freeze, with average global temperatures of minus 50 degrees Celsius. The question has been: How and where did life survive?

“We’re interested in understanding the foundations of complex life on Earth. We see evidence for eukaryotes before and after the Cryogenian in the fossil record, but we largely lack direct evidence of where they may have lived during,” Husain says. “The great part of this mystery is, we know life survived. We’re just trying to understand how and where.”

There are a number of ideas for where organisms could have sheltered during Snowball Earth, including in certain patches of the open ocean (if such environments existed), in and around deep-sea hydrothermal vents, and under ice sheets. In considering meltwater ponds, Husain and her colleagues pursued the hypothesis that surface ice meltwaters may also have been capable of supporting early eukaryotic life at the time.

“There are many hypotheses for where life could have survived and sheltered during the Cryogenian, but we don’t have excellent analogs for all of them,” Husain notes. “Above-ice meltwater ponds occur on Earth today and are accessible, giving us the opportunity to really focus in on the eukaryotes which live in these environments.”

Small pond, big life

For their new study, the researchers analyzed samples taken from meltwater ponds in Antarctica. In 2018, Summons and colleagues from New Zealand traveled to a region of the McMurdo Ice Shelf in East Antarctica, known to host small ponds of melted ice, each just a few feet deep and a few meters wide. There, water freezes all the way to the seafloor, in the process trapping dark-colored sediments and marine organisms. Wind-driven loss of ice from the surface creates a sort of conveyor belt that brings this trapped debris to the surface over time. The dark debris absorbs the sun’s warmth and melts the surrounding ice, while debris-free ice reflects incoming sunlight, resulting in the formation of shallow meltwater ponds.

The bottom of each pond is lined with mats of microbes that have built up over years to form layers of sticky cellular communities.

“These mats can be a few centimeters thick, colorful, and they can be very clearly layered,” Husain says.

These microbial mats are made up of cyanobacteria, prokaryotic single-celled photosynthetic organisms that lack a cell nucleus or other organelles. While these ancient microbes are known to survive within some of the harshest environments on Earth, including meltwater ponds, the researchers wanted to know whether eukaryotes — complex organisms that evolved a cell nucleus and other membrane-bound organelles — could also weather similarly challenging circumstances. Answering this question would take more than a microscope, as the defining characteristics of the microscopic eukaryotes present among the microbial mats are too subtle to distinguish by eye.

To characterize the eukaryotes, the team analyzed the mats for specific lipids they make called sterols, as well as genetic components called ribosomal ribonucleic acid (rRNA), both of which can be used to identify organisms with varying degrees of specificity. These two independent sets of analyses provided complementary fingerprints for certain eukaryotic groups. As part of the team’s lipid research, they found many sterols and rRNA genes closely associated with specific types of algae, protists, and microscopic animals among the microbial mats. The researchers were able to assess the types and relative abundance of lipids and rRNA genes from pond to pond, and found the ponds hosted a surprising diversity of eukaryotic life.
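
To make the pond-to-pond comparisons concrete, here is a minimal sketch of one standard way to quantify differences in community composition, Bray-Curtis dissimilarity, using invented abundance profiles; it is not the study’s actual lipid or rRNA data, nor its statistical pipeline.

```python
import numpy as np

# Hypothetical relative-abundance profiles for four eukaryotic groups
# (e.g., algae, protists, micro-animals) in four ponds -- illustrative
# placeholders, not measurements from the McMurdo Ice Shelf.
ponds = {
    "fresh_1":    np.array([0.50, 0.30, 0.15, 0.05]),
    "fresh_2":    np.array([0.45, 0.35, 0.10, 0.10]),
    "brackish_1": np.array([0.10, 0.20, 0.40, 0.30]),
    "brackish_2": np.array([0.15, 0.15, 0.45, 0.25]),
}

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity: 0 = identical communities, 1 = disjoint."""
    return np.sum(np.abs(u - v)) / np.sum(u + v)

names = list(ponds)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        d = bray_curtis(ponds[names[i]], ponds[names[j]])
        print(f"{names[i]} vs {names[j]}: {d:.2f}")
# Ponds of similar salinity should show smaller dissimilarities, mirroring
# the finding that salinity structures these communities.
```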

“No two ponds were alike,” Husain says. “There are repeating casts of characters, but they’re present in different abundances. And we found diverse assemblages of eukaryotes from all the major groups in all the ponds studied. These eukaryotes are the descendants of the eukaryotes that survived the Snowball Earth. This really highlights that meltwater ponds during Snowball Earth could have served as above-ice oases that nurtured the eukaryotic life that enabled the diversification and proliferation of complex life — including us — later on.”

This research was supported, in part, by the NASA Exobiology Program, the Simons Collaboration on the Origins of Life, and a MISTI grant from MIT-New Zealand.


QS ranks MIT the world’s No. 1 university for 2025-26

Ranking at the top for the 14th year in a row, the Institute also places first in 11 subject areas.


MIT has again been named the world’s top university by the QS World University Rankings, which were announced today. This is the 14th year in a row MIT has received this distinction.

The full 2026 edition of the rankings — published by Quacquarelli Symonds, an organization specializing in education and study abroad — can be found at TopUniversities.com. The QS rankings are based on factors including academic reputation, employer reputation, citations per faculty, student-to-faculty ratio, proportion of international faculty, and proportion of international students.

MIT was also ranked the world’s top university in 11 of the subject areas ranked by QS, as announced in March of this year.

The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.

MIT also placed second in seven subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Business and Management Studies; Chemistry; Earth and Marine Sciences; and Economics and Econometrics.


The MIT Press acquires University Science Books from AIP Publishing

The textbook publisher will transfer to the MIT Press next month, in time for fall 2025 course adoptions.



The MIT Press announces the acquisition of textbook publisher University Science Books from AIP Publishing, a subsidiary of the American Institute of Physics (AIP).

University Science Books was founded in 1978 to publish intermediate- and advanced-level science and reference books by respected authors, published with the highest design and production standards, and priced as affordably as possible. Over the years, USB’s authors have acquired international followings, and its textbooks in chemistry, physics, and astronomy have been recognized as the gold standard in their respective disciplines. USB was acquired by AIP Publishing in 2021.

Bestsellers include John Taylor’s “Classical Mechanics,” the No. 1 adopted text for undergraduate mechanics courses in the United States and Canada, and his “Introduction to Error Analysis”; and Don McQuarrie’s “Physical Chemistry: A Molecular Approach” (commonly known as “Big Red”), the second-most adopted physical chemistry textbook in the U.S.

“We are so pleased to have found a new home for USB’s prestigious list of textbooks in the sciences,” says Alix Vance, CEO of AIP Publishing. “With its strong STEM focus, academic rigor, and high production standards, the MIT Press is the perfect partner to continue the publishing legacy of University Science Books.” 

“This acquisition is both a brand and content fit for the MIT Press,” says Amy Brand, director and publisher of the MIT Press. “USB’s respected science list will complement our long-established history of publishing foundational texts in computer science, finance, and economics.”

The MIT Press will take over the USB list as of July 1, with inventory transferring to Penguin Random House Publishing Services, the MIT Press’ sales and distribution partner.

For details regarding University Science Books titles, inventory, and how to order, please contact the MIT Press.

Established in 1962, the MIT Press is one of the largest and most distinguished university presses in the world and a leading publisher of books and journals at the intersection of science, technology, art, social science, and design.

AIP Publishing is a wholly owned not-for-profit subsidiary of AIP and supports the charitable, scientific, and educational purposes of AIP through scholarly publishing activities on its behalf and on behalf of its publishing partners.


A sounding board for strengthening the student experience

Composed of “computing bilinguals,” the Undergraduate Advisory Group provides vital input to help advance the mission of the MIT Schwarzman College of Computing.


During his first year at MIT in 2021, Matthew Caren ’25 received an intriguing email inviting students to apply to become members of the MIT Schwarzman College of Computing’s (SCC) Undergraduate Advisory Group (UAG). He immediately shot off an application.

Caren is a jazz musician who majored in computer science and engineering, and minored in music and theater arts. He was drawn to the college because of its focus on the applied intersections between computing, engineering, the arts, and other academic pursuits. Caren eagerly joined the UAG and stayed on it all four years at MIT.

First formed in April 2020, the group brings together a committee of around 25 undergraduate students representing a broad swath of both traditional and blended majors in electrical engineering and computer science (EECS) and other computing-related programs. They advise the college’s leadership on issues, offer constructive feedback, and serve as a sounding board for innovative new ideas.

“The ethos of the UAG is the ethos of the college itself,” Caren explains. “If you very intentionally bring together a bunch of smart, interesting, fun-to-be-around people who are all interested in completely diverse things, you'll get some really cool discussions and interactions out of it.”

Along the way, he’s also made “dear” friends and found true colleagues. In the group’s monthly meetings with SCC Dean Dan Huttenlocher and Deputy Dean Asu Ozdaglar, who is also the department head of EECS, UAG members speak openly about challenges in the student experience and offer recommendations to guests from across the Institute, such as faculty who are developing new courses and looking for student input.

“This group is unique in the sense that it’s a direct line of communication to the college’s leadership,” says Caren. “They make time in their insanely busy schedules for us to explain where the holes are, and what students’ needs are, directly from our experiences.”

“The students in the group are keenly interested in computer science and AI, especially how these fields connect with other disciplines. They’re also passionate about MIT and eager to enhance the undergraduate experience. Hearing their perspective is refreshing — their honesty and feedback have been incredibly helpful to me as dean,” says Huttenlocher.

“Meeting with the students each month is a real pleasure. The UAG has been an invaluable space for understanding the student experience more deeply. They engage with computing in diverse ways across MIT, so their input on the curriculum and broader college issues has been insightful,” Ozdaglar says.

UAG program manager Ellen Rushman says that “Asu and Dan have done an amazing job cultivating a space in which students feel safe bringing up things that aren’t positive all the time.” The group’s suggestions are frequently implemented, too.

For example, in 2021, Skidmore, Owings & Merrill, the architects designing the new SCC building, presented their renderings at a UAG meeting to request student feedback. Their original interiors layout offered very few of the hybrid study and meeting booths that are so popular in today’s first floor lobby.

Hearing strong UAG opinions about the sort of open-plan, community-building spaces that students really valued helped drive the change to the current floor plan. “It’s super cool walking into the personalized space and seeing it constantly being in use and always crowded. I actually feel happy when I can’t get a table,” says Caren, who has just ended his tenure as co-chair of the group in preparation for graduation.

Caren’s co-chair, rising senior Julia Schneider, who is double-majoring in artificial intelligence and decision-making and mathematics, joined the UAG as a first-year to learn more about the college’s mission.

“Since I am a student in electrical engineering and computer science, but I conduct research in mechanical engineering on robotics, the college’s mission of fostering interdepartmental collaborations and uniting them through computing really spoke to my personal experiences in my first year at MIT,” Schneider says.

During her time on the UAG, members have joined subgroups focused on achieving different programmatic goals of the college, such as curating a public lecture series for the 2025-26 academic year to give MIT students exposure to faculty who conduct research in other disciplines that relate to computing.

At one meeting, after hearing how challenging it is for students to understand all the possible courses to take during their tenure, Schneider and some UAG peers formed a subgroup to find a solution.

The students agreed that some of the best courses they’ve taken at MIT, or pairings of courses that really struck a chord with their interdisciplinary interests, came because they spoke to upperclassmen and got recommendations. “This kind of tribal knowledge doesn’t really permeate to all of MIT,” Schneider explains.

For the last six months, Schneider and the subgroup have been working on a course visualization website, NerdXing, which came out of these discussions.

Guided by Rob Miller, Distinguished Professor of Computer Science in EECS, the subgroup used a dataset of EECS course enrollments over the past decade to develop a type of tool different from those MIT students typically use, such as CourseRoad.

Miller, who regularly attends the UAG meetings in his role as the education officer for the college’s cross-cutting initiative, Common Ground for Computing Education, comments, “The really cool idea here is to help students find paths that were taken by other people who are like them — not just interested in computer science, but maybe also in biology, or music, or economics, or neuroscience. It’s very much in the spirit of the College of Computing — applying data-driven computational methods, in support of students with wide-ranging computational interests.”

Schneider gave a demo of the NerdXing pilot, which is set to roll out later this spring. She explained that if you are a computer science (CS) major, then after you select your major and a class of interest, you can expand a huge graph presenting all the possible courses your CS peers have taken over the past decade.

She clicked on class 18.404 (Theory of Computation) as the starting class of interest, which led to class 6.7900 (Machine Learning), and then unexpectedly to 21M.302 (Harmony and Counterpoint II), an advanced music class.

“You start to see aggregate statistics that tell you how many students took each course, and you can further pare it down to see the most popular courses in CS or follow lines of red dots between courses to see the typical sequence of classes taken.”

By getting granular on the graph, users begin to see classes that they have probably never heard anyone talking about in their program. “I think that one of the reasons you come to MIT is to be able to take cool stuff exactly like this,” says Schneider.
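
A toy sketch of the idea underlying such a tool, with invented enrollment histories (only the course numbers mentioned in this article are real): build a directed graph of which courses follow which, then surface the most common next steps.

```python
from collections import Counter, defaultdict

# Invented enrollment histories; only the course numbers come from the
# article. A real tool would ingest a decade of EECS enrollment records.
histories = [
    ["18.404", "6.7900", "21M.302"],
    ["18.404", "6.7900"],
    ["18.404", "21M.302"],
]

# Directed graph: next_counts[a][b] = how many students took b after a.
next_counts: dict[str, Counter] = defaultdict(Counter)
for path in histories:
    for a, b in zip(path, path[1:]):
        next_counts[a][b] += 1

start = "18.404"  # Theory of Computation, the demo's starting class
for course, n in next_counts[start].most_common():
    print(f"{start} -> {course}: taken next by {n} student(s)")
```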

The tool aims to show students how they can choose classes that go far beyond just filling degree requirements. It’s just one example of how UAG is empowering students to strengthen the college and the experiences it offers them.

“We are MIT students. We have the skills to build solutions,” Schneider says. “This group of people not only brings up ways in which things could be better, but we take it into our own hands to fix things.”


Closing in on superconducting semiconductors

Plasma Science and Fusion Center researchers created a superconducting circuit that could one day replace semiconductor components in quantum and high-performance computing systems.


In 2023, data centers, which are essential for processing large quantities of information, consumed about 4.4 percent (176 terawatt-hours) of total electricity in the United States. Of that 176 TWh, approximately 100 TWh (57 percent) was used by CPU and GPU equipment. Energy requirements have escalated substantially in the past decade and will only continue to grow, making the development of energy-efficient computing crucial.

Superconducting electronics have arisen as a promising alternative for classical and quantum computing, although their full exploitation for high-end computing requires a dramatic reduction in the amount of wiring linking ambient temperature electronics and low-temperature superconducting circuits. To make systems that are both larger and more streamlined, replacing commonplace components such as semiconductors with superconducting versions could be of immense value. It’s a challenge that has captivated MIT Plasma Science and Fusion Center senior research scientist Jagadeesh Moodera and his colleagues, who described a significant breakthrough in a recent Nature Electronics paper, “Efficient superconducting diodes and rectifiers for quantum circuitry.”

Moodera was working on a stubborn problem. One critical long-standing requirement is the efficient conversion of AC currents into DC currents on a chip while operating at the extremely cold cryogenic temperatures superconductors need to work efficiently. For example, in superconducting “energy-efficient rapid single flux quantum” (ERSFQ) circuits, the AC-to-DC issue limits scalability and prevents use in larger, more complex circuits. To respond to this need, Moodera and his team created superconducting diode (SD)-based superconducting rectifiers — devices that can convert AC to DC on the same chip. These rectifiers would allow for the efficient delivery of the DC current necessary to operate superconducting classical and quantum processors.

Quantum computer circuits can only operate at temperatures close to 0 kelvin (absolute zero), and the way power is supplied must be carefully controlled to limit the effects of interference introduced by too much heat or electromagnetic noise. Most unwanted noise and heat come from the wires connecting cold quantum chips to room-temperature electronics. Using superconducting rectifiers to convert AC currents into DC within the cryogenic environment instead reduces the number of wires, cutting down on heat and noise and enabling larger, more stable quantum systems.

In a 2023 experiment, Moodera and his co-authors developed SDs made of very thin layers of superconducting material that display nonreciprocal (or unidirectional) flow of current and could be the superconducting counterpart to standard semiconductors. Even though SDs have garnered significant attention, especially since 2020, up until this point the research had focused only on individual SDs for proof of concept. The group’s 2023 paper outlined how they created and refined a method by which SDs could be scaled for broader application.

Now, by building a diode bridge circuit, they demonstrated the successful integration of four SDs and realized AC-to-DC rectification at cryogenic temperatures. 
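
For readers unfamiliar with rectification, the sketch below simulates an idealized four-diode bridge, the textbook circuit whose function the superconducting version reproduces: both half-cycles of an AC input are folded to one polarity and smoothed toward DC. It is a generic illustration, not a model of the devices in the paper.

```python
import numpy as np

# Idealized full-wave (diode bridge) rectification: four diodes route
# both half-cycles of an AC input to the same output polarity.
t = np.linspace(0, 0.1, 10_000)      # 100 ms of signal
v_ac = np.sin(2 * np.pi * 60 * t)    # 60 Hz AC input (normalized volts)

v_rect = np.abs(v_ac)                # ideal bridge output: |v_in|

# Simple peak-hold RC smoothing to approximate a steady DC level.
dt = t[1] - t[0]
tau = 0.02                           # RC discharge time constant, seconds
v_dc = np.empty_like(v_rect)
v_dc[0] = v_rect[0]
for i in range(1, len(t)):
    # Capacitor charges through the diodes, discharges through the load.
    v_dc[i] = max(v_rect[i], v_dc[i - 1] * (1 - dt / tau))

print(f"AC mean: {v_ac.mean():+.3f}")          # ~0: no DC content
print(f"rectified mean: {v_rect.mean():.3f}")  # ~0.637: folded half-cycles
print(f"smoothed floor: {v_dc[len(t)//2:].min():.3f}")  # near-constant DC
```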

The new approach described in their recent Nature Electronics paper will significantly cut down on the thermal and electromagnetic noise traveling from ambient into cryogenic circuitry, enabling cleaner operation. The SDs could also potentially serve as isolators/circulators, assisting in insulating qubit signals from external influence. The successful assimilation of multiple SDs into the first integrated SD circuit represents a key step toward making superconducting computing a commercial reality. 

“Our work opens the door to the arrival of highly energy-efficient, practical superconductivity-based supercomputers in the next few years,” says Moodera. “Moreover, we expect our research to enhance qubit stability while boosting the quantum computing program, bringing its realization closer.” Given the multiple beneficial roles these components could play, Moodera and his team are already working toward the integration of such devices into actual superconducting logic circuits, including in dark matter detection circuits that are essential to the operation of experiments at CERN and LUX-ZEPLIN at the Berkeley National Lab.

This work was partially funded by MIT Lincoln Laboratory’s Advanced Concepts Committee, the U.S. National Science Foundation, U.S. Army Research Office, and U.S. Air Force Office of Scientific Research. 

This work was carried out, in part, through the use of MIT.nano’s facilities.


After more than a decade of successes, ESI’s work will spread out across the Institute

John Fernandez will step down as head of the Environmental Solutions Initiative as its components become part of the Climate Project and other entities.


MIT’s Environmental Solutions Initiative (ESI), a pioneering cross-disciplinary body that helped give a major boost to sustainability and solutions to climate change at MIT, will close as a separate entity at the end of June. But that’s far from the end for its wide-ranging work, which will go forward under different auspices. Many of its key functions will become part of MIT’s recently launched Climate Project. John Fernandez, head of ESI for nearly a decade, will return to the School of Architecture and Planning, where some of ESI’s important work will continue as part of a new interdisciplinary lab.

When the ideas that led to the founding of MIT’s Environmental Solutions Initiative first began to be discussed, its founders recall, there was already a great deal of work happening at MIT relating to climate change and sustainability. As Professor John Sterman of the MIT Sloan School of Management puts it, “there was a lot going on, but it wasn’t integrated. So the whole added up to less than the sum of its parts.”

ESI was founded in 2014 to help fill that coordinating role, and in the years since it has accomplished a wide range of significant milestones in research, education, and communication about sustainable solutions in a wide range of areas. Its founding director, Professor Susan Solomon, helmed it for its first year, and then handed the leadership to Fernandez, who has led it since 2015.

“There wasn’t much of an ecosystem [on sustainability] back then,” Solomon recalls. But with the help of ESI and some other entities, that ecosystem has blossomed. She says that Fernandez “has nurtured some incredible things under ESI,” including work on nature-based climate solutions, and also other areas such as sustainable mining, and reduction of plastics in the environment.

Desiree Plata, director of MIT’s Climate and Sustainability Consortium and associate professor of civil and environmental engineering, says that one key achievement of the initiative has been in “communication with the external world, to help take really complex systems and topics and put them in not just plain-speak, but something that’s scientifically rigorous and defensible, for the outside world to consume.”

In particular, ESI has created three very successful products, which continue under the auspices of the Climate Project. These include the popular TIL Climate Podcast, the Webby Award-winning Climate Portal website, and the online climate primer developed with Professor Kerry Emanuel. “These are some of the most frequented websites at MIT,” Plata says, and “the impact of this work on the global knowledge base cannot be overstated.”

Fernandez says that ESI has played a significant part in helping to catalyze what has become “a rich institutional landscape of work in sustainability and climate change” at MIT. He emphasizes three major areas where he feels the ESI has been able to have the most impact: engaging the MIT community, initiating and stewarding critical environmental research, and catalyzing efforts to promote sustainability as fundamental to the mission of a research university.

Engagement of the MIT community, he says, began with two programs: a research seed grant program and the creation of MIT’s undergraduate minor in environment and sustainability, launched in 2017.

ESI also created a Rapid Response Group, which gave students a chance to work on real-world projects with external partners, including government agencies, community groups, nongovernmental organizations, and businesses. In the process, they often learned why dealing with environmental challenges in the real world takes so much longer than they might have thought, he says, and that a challenge that “seemed fairly straightforward at the outset turned out to be more complex and nuanced than expected.”

The second major area, initiating and stewarding environmental research, grew into a set of six specific program areas: natural climate solutions, mining, cities and climate change, plastics and the environment, arts and climate, and climate justice.

These efforts included collaborations with a Nobel Peace Prize laureate, three successive presidential administrations from Colombia, and members of communities affected by climate change, including coal miners, indigenous groups, various cities, companies, the U.N., many agencies — and the popular musical group Coldplay, which has pledged to work toward climate neutrality for its performances. “It was the role that the ESI played as a host and steward of these research programs that may serve as a key element of our legacy,” Fernandez says.

The third broad area, he says, “is the idea that the ESI as an entity at MIT would catalyze this movement of a research university toward sustainability as a core priority.” While MIT was founded to be an academic partner to the industrialization of the world, “aren’t we in a different world now? The kind of massive infrastructure planning and investment and construction that needs to happen to decarbonize the energy system is maybe the largest industrialization effort ever undertaken. Even more than in the recent past, the set of priorities driving this have to do with sustainable development.”

Overall, Fernandez says, “we did everything we could to infuse the Institute in its teaching and research activities with the idea that the world is now in dire need of sustainable solutions.”

Fernandez “has nurtured some incredible things under ESI,” Solomon says. “It’s been a very strong and useful program, both for education and research.” But it is appropriate at this time to distribute its projects to other venues, she says. “We do now have a major thrust in the Climate Project, and you don’t want to have redundancies and overlaps between the two.”

Fernandez says “one of the missions of the Climate Project is really acting to coalesce and aggregate lots of work around MIT.” Now, with the Climate Project itself, along with the Climate Policy Center and the Center for Sustainability Science and Strategy, it makes more sense for ESI’s climate-related projects to be integrated into these new entities, and other projects that are less directly connected to climate to take their places in various appropriate departments or labs, he says.

“We did enough with ESI that we made it possible for these other centers to really flourish,” he says. “And in that sense, we played our role.”

As of June 1, Fernandez has returned to his role as professor of architecture and urbanism and building technology in the School of Architecture and Planning, where he directs the Urban Metabolism Group. He will also be starting up a new group called Environment ResearchAction (ERA) to continue ESI work in cities, nature, and artificial intelligence. 


Decarbonizing steel is as tough as steel

But a new study shows how advanced steelmaking technologies could substantially reduce carbon emissions.


The long-term aspirational goal of the Paris Agreement on climate change is to cap global warming at 1.5 degrees Celsius above preindustrial levels, and thereby reduce the frequency and severity of floods, droughts, wildfires, and other extreme weather events. Achieving that goal will require a massive reduction in global carbon dioxide (CO2) emissions across all economic sectors. A major roadblock, however, could be the industrial sector, which accounts for roughly 25 percent of global energy- and process-related CO2 emissions — particularly within the iron and steel sector, industry’s largest emitter of CO2.

Iron and steel production now relies heavily on fossil fuels (coal or natural gas) for producing heat, converting iron ore to iron, and making steel strong. Steelmaking could be decarbonized by a combination of several methods, including carbon capture technology, the use of low- or zero-carbon fuels, and increased use of recycled steel. Now a new study in the Journal of Cleaner Production systematically explores the viability of different iron-and-steel decarbonization strategies.

Today’s strategy menu includes improving energy efficiency, switching fuels and technologies, using more scrap steel, and reducing demand. Using the MIT Economic Projection and Policy Analysis model, a multi-sector, multi-region model of the world economy, researchers at MIT, the University of Illinois at Urbana-Champaign, and ExxonMobil Technology and Engineering Co. evaluate the decarbonization potential of replacing coal-based production processes with electric arc furnaces (EAF), along with either scrap steel or “direct reduced iron” (DRI), which is fueled by natural gas with carbon capture and storage (NG CCS DRI-EAF) or by hydrogen (H2 DRI-EAF).

Under a global climate mitigation scenario aligned with the 1.5 C climate goal, these advanced steelmaking technologies could result in deep decarbonization of the iron and steel sector by 2050, as long as technology costs are low enough to enable large-scale deployment. Higher costs would favor the replacement of coal with electricity and natural gas, greater use of scrap steel, and reduced demand, resulting in a more-than-50-percent reduction in emissions relative to current levels. Lower technology costs would enable massive deployment of NG CCS DRI-EAF or H2 DRI-EAF, reducing emissions by up to 75 percent.

Even without adoption of these advanced technologies, the iron-and-steel sector could significantly reduce its CO2 emissions intensity (how much CO2 is released per unit of production) with existing steelmaking technologies, primarily by replacing coal with gas and electricity (especially if it is generated by renewable energy sources), using more scrap steel, and implementing energy efficiency measures.
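
As a back-of-envelope illustration of those scenario ranges, the snippet below applies the reported reduction fractions to a placeholder baseline; the baseline figure is illustrative, not an output of the study’s model.

```python
# Illustrative arithmetic only: the baseline is a round placeholder for
# global iron & steel CO2 emissions, not a number from the study.
baseline_mt_co2 = 2600.0  # Mt CO2/yr, assumed for illustration

scenarios = {
    "higher tech costs (gas + electricity, scrap, reduced demand)": 0.50,
    "lower tech costs (NG CCS DRI-EAF or H2 DRI-EAF deployed)": 0.75,
}

for name, cut in scenarios.items():
    remaining = baseline_mt_co2 * (1 - cut)
    print(f"{name}: about -{cut:.0%} -> roughly {remaining:.0f} Mt CO2/yr")
```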

“The iron and steel industry needs to combine several strategies to substantially reduce its emissions by mid-century, including an increase in recycling, but investing in cost reductions in hydrogen pathways and carbon capture and sequestration will enable even deeper emissions mitigation in the sector,” says study supervising author Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy (MIT CS3) and a senior research scientist at the MIT Energy Initiative (MITEI).

This study was supported by MIT CS3 and ExxonMobil through its membership in MITEI.


Bringing meaning into technology deployment

The MIT Ethics of Computing Research Symposium showcases projects at the intersection of technology, ethics, and social responsibility.


In 15 TED Talk-style presentations, MIT faculty recently discussed their pioneering research incorporating social, ethical, and technical considerations and expertise, each project supported by a seed grant from the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing. The call for proposals last summer drew nearly 70 applications. A committee with representatives from every MIT school and the college convened to select the winning projects, which received up to $100,000 in funding.

“SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to ignite bold, creative thinking around the complex challenges and possibilities in this space,” said Nikos Trichakis, co-associate dean of SERC and the J.C. Penney Professor of Management. “With the MIT Ethics of Computing Research Symposium, we felt it important to not just showcase the breadth and depth of the research that’s shaping the future of ethical computing, but to invite the community to be part of the conversation as well.”

“What you’re seeing here is kind of a collective community judgment about the most exciting work when it comes to research in the social and ethical responsibilities of computing being done at MIT,” said Caspar Hare, co-associate dean of SERC and professor of philosophy.

The full-day symposium on May 1 was organized around four key themes: responsible health-care technology, artificial intelligence governance and ethics, technology in society and civic engagement, and digital inclusion and social justice. Speakers delivered thought-provoking presentations on a broad range of topics, including algorithmic bias, data privacy, the social implications of artificial intelligence, and the evolving relationship between humans and machines. The event also featured a poster session, where student researchers showcased projects they worked on throughout the year as SERC Scholars.

Highlights from the MIT Ethics of Computing Research Symposium in each of the theme areas, many of which are available to watch on YouTube, included:

Making the kidney transplant system fairer

Policies regulating the organ transplant system in the United States are made by a national committee, and they often take more than six months to create and then years to implement, a timeline that many on the waiting list simply can’t survive.

Dimitris Bertsimas, vice provost for open learning, associate dean of business analytics, and Boeing Professor of Operations Research, shared his latest work in analytics for fair and efficient kidney transplant allocation. Bertsimas’ new algorithm examines criteria like geographic location, mortality, and age in just 14 seconds, a monumental change from the usual six hours.

Bertsimas and his team work closely with the United Network for Organ Sharing (UNOS), a nonprofit that manages most of the national donation and transplant system through a contract with the federal government. During his presentation, Bertsimas shared a video from James Alcorn, senior policy strategist at UNOS, who offered this poignant summary of the impact the new algorithm has:

“This optimization radically changes the turnaround time for evaluating these different simulations of policy scenarios. It used to take us a couple months to look at a handful of different policy scenarios, and now it takes a matter of minutes to look at thousands and thousands of scenarios. We are able to make these changes much more rapidly, which ultimately means that we can improve the system for transplant candidates much more rapidly.”
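
The speedup Alcorn describes comes from being able to re-score large candidate pools under many trial policies quickly. The sketch below is a deliberately crude, hypothetical version of that workflow, with invented candidate features and policy weights; the actual optimization is far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical candidate pool -- features and scales are invented.
n_candidates = 50_000
candidates = {
    "wait_years":  rng.exponential(2.0, n_candidates),
    "mortality":   rng.uniform(0, 1, n_candidates),   # urgency risk score
    "distance_km": rng.uniform(0, 500, n_candidates),
}

def allocate(weights, top_k=100):
    """Score every candidate under one policy; return the top-k indices."""
    score = (weights["wait"] * candidates["wait_years"]
             + weights["urgency"] * candidates["mortality"]
             - weights["distance"] * candidates["distance_km"] / 500)
    return np.argsort(score)[-top_k:]

# Screen many policy variants in seconds rather than months.
policies = [{"wait": w, "urgency": u, "distance": d}
            for w in (0.5, 1.0, 2.0)
            for u in (0.5, 1.0, 2.0)
            for d in (0.1, 0.5, 1.0)]
mean_urgency = [candidates["mortality"][allocate(p)].mean() for p in policies]
best = policies[int(np.argmax(mean_urgency))]
print(f"policy that most favors the sickest candidates: {best}")
```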

The ethics of AI-generated social media content

As AI-generated content becomes more prevalent across social media platforms, what are the implications of disclosing (or not disclosing) that any part of a post was created by AI? Adam Berinsky, Mitsui Professor of Political Science, and Gabrielle Péloquin-Skulski, PhD student in the Department of Political Science, explored this question in a session that examined recent studies on the impact of various labels on AI-generated content.

In a series of surveys and experiments affixing labels to AI-generated posts, the researchers looked at how specific words and descriptions affected users’ perception of deception, their intent to engage with the post, and ultimately their belief in whether the post was true or false.

“The big takeaway from our initial set of findings is that one size doesn’t fit all,” said Péloquin-Skulski. “We found that labeling AI-generated images with a process-oriented label reduces belief in both false and true posts. This is quite problematic, as labeling intends to reduce people’s belief in false information, not necessarily true information. This suggests that labels combining both process and veracity might be better at countering AI-generated misinformation.”

Using AI to increase civil discourse online

“Our research aims to address how people increasingly want to have a say in the organizations and communities they belong to,” Lily Tsai explained in a session on experiments in generative AI and the future of digital democracy. Tsai, Ford Professor of Political Science and director of the MIT Governance Lab, is conducting ongoing research with Alex Pentland, Toshiba Professor of Media Arts and Sciences, and a larger team.

Online deliberative platforms have recently been rising in popularity across the United States in both public- and private-sector settings. Tsai explained that with technology, it’s now possible for everyone to have a say — but doing so can be overwhelming, or even feel unsafe. First, too much information is available, and secondly, online discourse has become increasingly “uncivil.”

The group focuses on “how we can build on existing technologies and improve them with rigorous, interdisciplinary research, and how we can innovate by integrating generative AI to enhance the benefits of online spaces for deliberation.” They have developed their own AI-integrated platform for deliberative democracy, DELiberation.io, and rolled out four initial modules. All studies have been in the lab so far, but they are also working on a set of forthcoming field studies, the first of which will be in partnership with the government of the District of Columbia.

Tsai told the audience, “If you take nothing else from this presentation, I hope that you’ll take away this — that we should all be demanding that technologies that are being developed are assessed to see if they have positive downstream outcomes, rather than just focusing on maximizing the number of users.”

A public think tank that considers all aspects of AI

When Catherine D’Ignazio, associate professor of urban science and planning, and Nikko Stevens, postdoc at the Data + Feminism Lab at MIT, initially submitted their funding proposal, they weren’t intending to develop a think tank, but a framework — one that articulated how artificial intelligence and machine learning work could integrate community methods and utilize participatory design.

In the end, they created Liberatory AI, which they describe as a “rolling public think tank about all aspects of AI.” D’Ignazio and Stevens gathered 25 researchers from a diverse array of institutions and disciplines who authored more than 20 position papers examining the most current academic literature on AI systems and engagement. They intentionally grouped the papers into three distinct themes: the corporate AI landscape, dead ends, and ways forward.

“Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we’ve come together to contest the status quo, think bigger-picture, and reorganize resources in this system in hopes of a larger societal transformation,” said D’Ignazio.


How the brain solves complicated problems

Study shows humans flexibly deploy different reasoning strategies to tackle challenging mental tasks — offering insights for building machines that think more like us.


The human brain is very good at solving complicated problems. One reason is that humans can break problems apart into manageable subtasks that are easy to solve one at a time.

This allows us to complete a daily task like going out for coffee by breaking it into steps: getting out of our office building, navigating to the coffee shop, and once there, obtaining the coffee. This strategy helps us to handle obstacles easily. For example, if the elevator is broken, we can revise how we get out of the building without changing the other steps.

While there is a great deal of behavioral evidence demonstrating humans’ skill at these complicated tasks, it has been difficult to devise experimental scenarios that allow precise characterization of the computational strategies we use to solve problems.

In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — in this case, predicting how a ball will travel through a maze when the ball is hidden from view. The human brain cannot perform this task perfectly because it is impossible to track all of the possible trajectories in parallel, but the researchers found that people can perform reasonably well by flexibly adopting two strategies known as hierarchical reasoning and counterfactual reasoning.

The researchers were also able to determine the circumstances under which people choose each of those strategies.

“What humans are capable of doing is to break down the maze into subsections, and then solve each step using relatively simple algorithms. Effectively, when we don’t have the means to solve a complex problem, we manage by using simpler heuristics that get the job done,” says Mehrdad Jazayeri, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, an investigator at the Howard Hughes Medical Institute, and the senior author of the study.

Mahdi Ramadan PhD ’24 and graduate student Cheng Tang are the lead authors of the paper, which appears today in Nature Human Behaviour. Nicholas Watters PhD ’25 is also a co-author.

Rational strategies

When humans perform simple tasks that have a clear correct answer, such as categorizing objects, they perform extremely well. When tasks become more complex, such as planning a trip to your favorite cafe, there may no longer be one clearly superior answer. And, at each step, there are many things that could go wrong. In these cases, humans are very good at working out a solution that will get the task done, even though it may not be the optimal solution.

Those solutions often involve problem-solving shortcuts, or heuristics. Two prominent heuristics humans commonly rely on are hierarchical and counterfactual reasoning. Hierarchical reasoning is the process of breaking down a problem into layers, starting from the general and proceeding toward specifics. Counterfactual reasoning involves imagining what would have happened if you had made a different choice. While these strategies are well-known, scientists don’t know much about how the brain decides which one to use in a given situation.

“This is really a big question in cognitive science: How do we problem-solve in a suboptimal way, by coming up with clever heuristics that we chain together in a way that ends up getting us closer and closer until we solve the problem?” Jazayeri says.

To get at this question, Jazayeri and his colleagues devised a task that is just complex enough to require these strategies, yet simple enough that the outcomes and the calculations that go into them can be measured.

The task requires participants to predict the path of a ball as it moves along one of four possible trajectories through a maze. Once the ball enters the maze, people cannot see which path it travels. At two junctions in the maze, they hear an auditory cue when the ball reaches that point. Predicting the ball’s path is a task that is impossible for humans to solve with perfect accuracy.

“It requires four parallel simulations in your mind, and no human can do that. It’s analogous to having four conversations at a time,” Jazayeri says. “The task allows us to tap into this set of algorithms that the humans use, because you just can’t solve it optimally.”

The researchers recruited about 150 human volunteers to participate in the study. Before each subject began the ball-tracking task, the researchers evaluated how accurately they could estimate timespans of several hundred milliseconds, about the length of time it takes the ball to travel along one arm of the maze.

For each participant, the researchers created computational models that could predict the patterns of errors that would be seen for that participant (based on their timing skill) if they were running parallel simulations, using hierarchical reasoning alone, counterfactual reasoning alone, or combinations of the two reasoning strategies.

The researchers compared the subjects’ performance with the models’ predictions and found that for every subject, their performance was most closely associated with a model that used hierarchical reasoning but sometimes switched to counterfactual reasoning.

That suggests that instead of tracking all the possible paths that the ball could take, people broke up the task. First, they picked the direction (left or right) in which they thought the ball turned at the first junction, and continued to track the ball as it headed for the next turn. If the timing of the next sound they heard wasn’t compatible with the path they had chosen, they would go back and revise their first prediction — but only some of the time.

Switching back to the other side, which represents a shift to counterfactual reasoning, requires people to review their memory of the tones that they heard. However, it turns out that these memories are not always reliable, and the researchers found that people decided whether to go back or not based on how good they believed their memory to be.

“People rely on counterfactuals to the degree that it’s helpful,” Jazayeri says. “People who take a big performance loss when they do counterfactuals avoid doing them. But if you are someone who’s really good at retrieving information from the recent past, you may go back to the other side.”
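The logic of this switching strategy can be captured in a short simulation. The sketch below is purely illustrative, not the researchers’ model: the branch names, cue times, and the timing_noise and memory_reliability parameters are all assumptions chosen for demonstration.

```python
import random

# Assumed arrival times (in seconds) of the second auditory cue for each
# branch the ball could have taken at the first junction (illustrative).
EXPECTED_TIMES = {"left": 0.4, "right": 0.6}

def simulate_trial(true_branch, timing_noise=0.05, memory_reliability=0.7):
    # Hierarchical step: commit to one branch at the first junction and
    # track only that hypothesis, instead of simulating all paths at once.
    guess = random.choice(["left", "right"])

    # The participant hears the second cue at a noisy time determined by
    # the branch the ball actually took.
    observed = EXPECTED_TIMES[true_branch] + random.gauss(0, timing_noise)

    # If the observed timing is incompatible with the committed branch,
    # a counterfactual revision becomes an option...
    incompatible = abs(observed - EXPECTED_TIMES[guess]) > 2 * timing_noise

    # ...but it is taken only when memory of the earlier tones is judged
    # reliable enough to make revisiting the past worthwhile.
    if incompatible and random.random() < memory_reliability:
        guess = "right" if guess == "left" else "left"

    return guess == true_branch

# Estimated success rate under these assumed parameters.
trials = [simulate_trial(random.choice(["left", "right"])) for _ in range(10_000)]
print(sum(trials) / len(trials))
```

Lowering memory_reliability in this toy model makes switching rarer, mirroring the finding that people who distrust their memory of the tones tend to avoid counterfactuals.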

Human limitations

To further validate their results, the researchers created a machine-learning neural network and trained it to complete the task. A machine-learning model trained on this task will track the ball’s path accurately and make the correct prediction every time, unless the researchers impose limitations on its performance.

When the researchers added cognitive limitations similar to those faced by humans, they found that the model altered its strategies. When they eliminated the model’s ability to follow all possible trajectories, it began to employ hierarchical and counterfactual strategies like humans do. If the researchers reduced the model’s memory recall ability, it began to switch to counterfactual reasoning only if it thought its recall would be good enough to get the right answer — just as humans do.

“What we found is that networks mimic human behavior when we impose on them those computational constraints that we found in human behavior,” Jazayeri says. “This is really saying that humans are acting rationally under the constraints that they have to function under.”

By slightly varying the amount of memory impairment programmed into the models, the researchers also saw hints that the switching of strategies appears to happen gradually, rather than at a distinct cut-off point. They are now performing further studies to try to determine what is happening in the brain as these shifts in strategy occur.

The research was funded by a Lisa K. Yang ICoN Fellowship, a Friends of the McGovern Institute Student Fellowship, a National Science Foundation Graduate Research Fellowship, the Simons Foundation, the Howard Hughes Medical Institute, and the McGovern Institute.


“Each of us holds a piece of the solution”

Campus gathers with Vice President for Energy and Climate Evelyn Wang to explore the Climate Project at MIT, make connections, and exchange ideas.


MIT has an unparalleled history of bringing together interdisciplinary teams to solve pressing problems — think of the development of radar during World War II, or leading the international coalition that cracked the code of the human genome — but the challenge of climate change could demand a scale of collaboration unlike any that’s come before at MIT.

“Solving climate change is not just about new technologies or better models. It’s about forging new partnerships across campus and beyond — between scientists and economists, between architects and data scientists, between policymakers and physicists, between anthropologists and engineers, and more,” MIT Vice President for Energy and Climate Evelyn Wang told an energetic crowd of faculty, students, and staff on May 6. “Each of us holds a piece of the solution — but only together can we see the whole.”

Undeterred by heavy rain, approximately 300 campus community members filled the atrium in the Tina and Hamid Moghadam Building (Building 55) for a spring gathering hosted by Wang and the Climate Project at MIT. The initiative seeks to direct the full strength of MIT to address climate change, which Wang described as one of the defining challenges of this moment in history — and one of its greatest opportunities.

“It calls on us to rethink how we power our world, how we build, how we live — and how we work together,” Wang said. “And there is no better place than MIT to lead this kind of bold, integrated effort. Our culture of curiosity, rigor, and relentless experimentation makes us uniquely suited to cross boundaries — to break down silos and build something new.”

The Climate Project is organized around six missions, thematic areas in which MIT aims to make significant impact, ranging from decarbonizing industry to new policy approaches to designing resilient cities. The faculty leaders of these missions posed challenges to the audience before circulating through the room to share their perspectives and to discuss community questions and ideas.

Wang and the Climate Project team were joined by a number of research groups, startups, and MIT offices conducting work on issues related to energy and climate. For example, the MIT Office of Sustainability showcased efforts to use the MIT campus as a living laboratory; MIT spinouts such as Forma Systems, which is developing high-performance, low-carbon building systems, and Addis Energy, which envisions using the earth as a reactor to produce clean ammonia, presented their technologies; and visitors learned about current projects in MIT labs, including DebunkBot, an artificial intelligence-powered chatbot that can persuade people to shift their attitudes about conspiracies, developed by David Rand, the Erwin H. Schell Professor at the MIT Sloan School of Management.

Benedetto Marelli, an associate professor in the Department of Civil and Environmental Engineering who leads the Wild Cards Mission, said the energy and enthusiasm that filled the room was inspiring — but that the individual conversations were equally valuable.

“I was especially pleased to see so many students come out. I also spoke with other faculty, talked to staff from across the Institute, and met representatives of external companies interested in collaborating with MIT,” Marelli said. “You could see connections being made all around the room, which is exactly what we need as we build momentum for the Climate Project.”


Universal nanosensor unlocks the secrets to plant growth

Researchers from SMART DiSTAP developed the world’s first near-infrared fluorescent nanosensor capable of monitoring a plant’s primary growth hormone in real time and without harming the plant.


Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group within the Singapore-MIT Alliance for Research and Technology have developed the world’s first near-infrared fluorescent nanosensor capable of real-time, nondestructive, and species-agnostic detection of indole-3-acetic acid (IAA) — the primary bioactive auxin hormone that controls the way plants develop, grow, and respond to stress.

Auxins, particularly IAA, play a central role in regulating key plant processes such as cell division, elongation, root and shoot development, and response to environmental cues like light, heat, and drought. External factors like light affect how auxin moves within the plant, temperature influences how much is produced, and a lack of water can disrupt hormone balance. When plants cannot effectively regulate auxins, they may not grow well, adapt to changing conditions, or produce as much food. 

Existing IAA detection methods, such as liquid chromatography, require taking samples from the plant — which harms or removes part of it. Conventional methods also measure the effects of IAA rather than detecting it directly, and cannot be used universally across different plant types. In addition, since IAA is a small molecule that cannot be easily tracked in real time, biosensors that contain fluorescent proteins need to be inserted into the plant’s genome to measure auxin, making it emit a fluorescent signal for live imaging.

SMART’s newly developed nanosensor enables direct, real-time tracking of auxin levels in living plants with high precision. The sensor uses near-infrared imaging to monitor IAA fluctuations non-invasively across tissues like leaves, roots, and cotyledons, and it is capable of bypassing chlorophyll interference to ensure highly reliable readings even in densely pigmented tissues. The technology does not require genetic modification and can be integrated with existing agricultural systems — offering a scalable precision tool to advance both crop optimization and fundamental plant physiology research.

By providing real-time, precise measurements of auxin, the sensor empowers farmers with earlier and more accurate insights into plant health. With these insights and comprehensive data, farmers can make smarter, data-driven decisions on irrigation, nutrient delivery, and pruning, tailored to the plant’s actual needs — ultimately improving crop growth, boosting stress resilience, and increasing yields.

“We need new technologies to address the problems of food insecurity and climate change worldwide. Auxin is a central growth signal within living plants, and this work gives us a way to tap it to give new information to farmers and researchers,” says Michael Strano, co-lead principal investigator at DiSTAP, Carbon P. Dubbs Professor of Chemical Engineering at MIT, and co-corresponding author of the paper. “The applications are many, including early detection of plant stress, allowing for timely interventions to safeguard crops. For urban and indoor farms, where light, water, and nutrients are already tightly controlled, this sensor can be a valuable tool in fine-tuning growth conditions with even greater precision to optimize yield and sustainability.”

The research team documented the nanosensor’s development in a paper titled, “A Near-Infrared Fluorescent Nanosensor for Direct and Real-Time Measurement of Indole-3-Acetic Acid in Plants,” published in the journal ACS Nano. The sensor comprises single-walled carbon nanotubes wrapped in a specially designed polymer, which enables it to detect IAA through changes in near-infrared fluorescence intensity. Successfully tested across multiple species, including Arabidopsis, Nicotiana benthamiana, choy sum, and spinach, the nanosensor can map IAA responses under various environmental conditions such as shade, low light, and heat stress.
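In practice, a fluorescence-based sensor of this kind is typically read out by calibrating intensity changes against standards of known concentration and inverting that curve. The sketch below illustrates that general idea only; the linear form and every number in it are assumptions for demonstration, not values from the paper.

```python
import numpy as np

# Illustrative sketch: fit a calibration curve on standards of known
# concentration, then invert it to estimate the level behind a new reading.

def fit_calibration(concentrations, responses):
    """Fit a simple linear calibration: response = slope * conc + intercept."""
    slope, intercept = np.polyfit(concentrations, responses, 1)
    return slope, intercept

def to_concentration(response, slope, intercept):
    """Invert the calibration to estimate concentration from a new reading."""
    return (response - intercept) / slope

# Toy standards: hypothetical IAA levels (arbitrary units) vs. relative
# change in near-infrared fluorescence intensity.
conc = np.array([0.0, 1.0, 2.0, 4.0])
resp = np.array([0.00, 0.11, 0.19, 0.42])

slope, intercept = fit_calibration(conc, resp)
print(to_concentration(0.25, slope, intercept))  # estimated IAA level
```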

“This sensor builds on DiSTAP’s ongoing work in nanotechnology and the CoPhMoRe technique, which has already been used to develop other sensors that can detect important plant compounds such as gibberellins and hydrogen peroxide. By adapting this approach for IAA, we’re adding to our inventory of novel, precise, and nondestructive tools for monitoring plant health. Eventually, these sensors can be multiplexed, or combined, to monitor a spectrum of plant growth markers for more complete insights into plant physiology,” says Duc Thinh Khong, research scientist at DiSTAP and co-first author of the paper.

“This small but mighty nanosensor tackles a long-standing challenge in agriculture: the need for a universal, real-time, and noninvasive tool to monitor plant health across various species. Our collaborative achievement not only empowers researchers and farmers to optimize growth conditions and improve crop yield and resilience, but also advances our scientific understanding of hormone pathways and plant-environment interactions,” says In-Cheol Jang, senior principal investigator at Temasek Life Sciences Laboratory (TLL), principal investigator at DiSTAP, and co-corresponding author of the paper.

Looking ahead, the research team is looking to combine multiple sensing platforms to simultaneously detect IAA and its related metabolites to create a comprehensive hormone signaling profile, offering deeper insights into plant stress responses and enhancing precision agriculture. They are also working on using microneedles for highly localized, tissue-specific sensing, and collaborating with industrial urban farming partners to translate the technology into practical, field-ready solutions. 

The research was carried out by SMART, and supported by the National Research Foundation of Singapore under its Campus for Research Excellence And Technological Enterprise program. The universal nanosensor was developed in collaboration with TLL and MIT.


Envisioning a future where health care tech leaves some behind

The winning essay of the Envisioning the Future of Computing Prize puts health care disparities at the forefront.


Will the perfect storm of potentially life-changing, artificial intelligence-driven health care and the desire to increase profits through subscription models alienate vulnerable patients?

For the third year in a row, MIT's Envisioning the Future of Computing Prize asked students to describe, in 3,000 words or fewer, how advancements in computing could shape human society for the better or worse. All entries were eligible to win a number of cash prizes.
 
Inspired by recent research on the outsized effect microbiomes have on overall health, MIT-WHOI Joint Program in Oceanography and Applied Ocean Science and Engineering PhD candidate Annaliese Meyer created the concept of “B-Bots,” a synthetic bacterial mimic designed to regulate gut biomes and activated by Bluetooth.
 
For the contest, which challenges MIT students to articulate their visions of what a future driven by advances in computing might hold, Meyer submitted a work of speculative fiction about how recipients of a revolutionary new health-care technology find their treatment in jeopardy with the introduction of a subscription-based pay model.

In her winning paper, titled “(Pre/Sub)scribe,” Meyer chronicles the usage of B-Bots from the perspective of both their creator and a B-Bots user named Briar. Both celebrate the effects of the supplement, which helps users manage vitamin deficiencies and chronic conditions like acid reflux and irritable bowel syndrome. Meyer says that the introduction of a B-Bots subscription model “seemed like a perfect opportunity to hopefully make clear that in a for-profit health-care system, even medical advances that would, in theory, be revolutionary for human health can end up causing more harm than good for the many people on the losing side of the massive wealth disparity in modern society.” Meyer also states that these opinions are her own and do not reflect any official stances of affiliated institutions.

As a Canadian, Meyer has experienced the differences between the health care systems in the United States and Canada. She recounts her mother’s recent cancer treatments, emphasizing the cost and coverage of treatments in British Columbia when compared to the U.S.

Aside from a cautionary tale of equity in the American health care system, Meyer hopes readers take away an additional scientific message on the complexity of gut microbiomes. Inspired by her thesis work in ocean metaproteomics, Meyer says, “I think a lot about when and why microbes produce different proteins to adapt to environmental changes, and how that depends on the rest of the microbial community and the exchange of metabolic products between organisms.”

Meyer had hoped to participate in the previous year’s contest, but the time constraints of her lab work put her submission on hold. Now in the midst of thesis work, she saw the contest as a way to add some variety to what she was writing while keeping engaged with her scientific interests. However, writing has always been a passion. “I wrote a lot as a kid (‘author’ actually often preceded ‘scientist’ as my dream job while I was in elementary school), and I still write fiction in my spare time,” she says.

Named the winner of the $10,000 grand prize, Meyer says the essay and presentation preparation were extremely rewarding.

“The chance to explore a new topic area which, though related to my field, was definitely out of my comfort zone, really pushed me as a writer and a scientist. It got me reading papers I’d never have found before, and digging into concepts that I’d barely ever encountered. (Did I have any real understanding of the patent process prior to this? Absolutely not.) The presentation dinner itself was a ton of fun; it was great to both be able to celebrate with my friends and colleagues as well as meet people from a bunch of different fields and departments around MIT.”

The Envisioning the Future of Computing Prize

Co-sponsored by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing and the School of Humanities, Arts, and Social Sciences (SHASS), with support from MAC3 Philanthropies, the contest this year attracted 65 submissions from undergraduate and graduate students across various majors, including brain and cognitive sciences, economics, electrical engineering and computer science, physics, anthropology, and others.

Caspar Hare, associate dean of SERC and professor of philosophy, launched the prize in 2023. He says that the object of the prize was “to encourage MIT students to think about what they’re doing, not just in terms of advancing computing-related technologies, but also in terms of how the decisions they make may or may not work to our collective benefit.”

He emphasized that the Envisioning the Future of Computing Prize will remain “interesting and important” to the MIT community. There are plans in place to tweak next year’s contest, offering more opportunities for workshops and guidance for those interested in submitting essays.

“Everyone is excited to continue this for as long as it remains relevant, which could be forever,” he says, suggesting that in years to come the prize could give us a series of historical snapshots of what computing-related technologies MIT students found most compelling.

“Computing-related technology is going to be transforming and changing the world. MIT students will remain a big part of that.”

Crowning a winner

As part of a two-stage evaluation process, all the submitted essays were reviewed anonymously by a committee of faculty members from the college, SHASS, and the Department of Urban Studies and Planning. The judges advanced three finalists whose papers were deemed the most articulate, thorough, grounded, imaginative, and inspiring.
 
In early May, a live awards ceremony was held where the finalists were invited to give 20-minute presentations on their entries and took questions from the audience. Nearly 140 MIT community members, family members, and friends attended the ceremony in support of the finalists. The audience members and judging panel asked the presenters challenging and thoughtful questions on the societal impact of their fictional computing technologies.
 
A final score, weighted 75 percent on the essay and 25 percent on the presentation, determined the winner.
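Under that weighting, a hypothetical finalist who scored 90 on the essay and 80 on the presentation, for example, would finish with 0.75 × 90 + 0.25 × 80 = 87.5.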


The judges also awarded $5,000 to the two runners-up: Martin Staadecker, a graduate student in the Technology and Policy Program in the Institute for Data, Systems, and Society, for his essay on a fictional token-based system to track fossil fuels, and Juan Santoyo, a PhD candidate in the Department of Brain and Cognitive Sciences, for his short story about a field-deployed AI designed to support the mental health of soldiers in times of conflict. In addition, eight honorable mentions were recognized, with each receiving a cash prize of $1,000.


New facility to accelerate materials solutions for fusion energy

MIT Plasma Science and Fusion Center to establish the Schmidt Laboratory for Materials in Nuclear Technologies.


Fusion energy has the potential to enable the energy transition from fossil fuels, enhance domestic energy security, and power artificial intelligence. Private companies have already invested more than $8 billion to develop commercial fusion and seize the opportunities it offers. An urgent challenge, however, is the discovery and evaluation of cost-effective materials that can withstand extreme conditions for extended periods, including 150-million-degree plasmas and intense particle bombardment.

To meet this challenge, MIT’s Plasma Science and Fusion Center (PSFC) has launched the Schmidt Laboratory for Materials in Nuclear Technologies, or LMNT (pronounced “element”). Backed by a philanthropic consortium led by Eric and Wendy Schmidt, LMNT is designed to speed up the discovery and selection of materials for a variety of fusion power plant components. 

By drawing on MIT's expertise in fusion and materials science, repurposing existing research infrastructure, and tapping into its close collaborations with leading private fusion companies, the PSFC aims to drive rapid progress in the materials necessary for commercializing fusion energy. LMNT will also help develop and assess materials for nuclear power plants, next-generation particle physics experiments, and other science and industry applications.

Zachary Hartwig, head of LMNT and an associate professor in the Department of Nuclear Science and Engineering (NSE), says, “We need technologies today that will rapidly develop and test materials to support the commercialization of fusion energy. LMNT’s mission includes discovery science but seeks to go further, ultimately helping select the materials that will be used to build fusion power plants in the coming years.”

A different approach to fusion materials

For decades, researchers have worked to understand how materials behave under fusion conditions using methods like exposing test specimens to low-energy particle beams, or placing them in the core of nuclear fission reactors. These approaches, however, have significant limitations. Low-energy particle beams only irradiate the thinnest surface layer of materials, while fission reactor irradiation doesn’t accurately replicate the mechanism by which fusion damages materials. Fission irradiation is also an expensive, multiyear process that requires specialized facilities.

To overcome these obstacles, researchers at MIT and peer institutions are exploring the use of energetic beams of protons to simulate the damage materials undergo in fusion environments. Proton beams can be tuned to match the damage expected in fusion power plants, and protons penetrate deep enough into test samples to provide insights into how exposure can affect structural integrity. They also offer the advantage of speed: first, intense proton beams can rapidly damage dozens of material samples at once, allowing researchers to test them in days, rather than years. Second, high-energy proton beams can be generated with a type of particle accelerator known as a cyclotron, which is commonly used in the health-care industry. As a result, LMNT will be built around a cost-effective, off-the-shelf cyclotron that is easy to obtain and highly reliable.

LMNT will surround its cyclotron with four experimental areas dedicated to materials science research. The lab is taking shape inside the large shielded concrete vault at PSFC that once housed the Alcator C-Mod tokamak, a record-setting fusion experiment that ran at the PSFC from 1992 to 2016. By repurposing C-Mod’s former space, the center is skipping the need for extensive, costly new construction and accelerating the research timeline significantly. The PSFC’s veteran team — who have led major projects like the Alcator tokamaks and advanced high-temperature superconducting magnet development — are overseeing the facility’s design, construction, and operation, ensuring LMNT moves quickly from concept to reality. The PSFC expects to receive the cyclotron by the end of 2025, with experimental operations starting in early 2026.

“LMNT is the start of a new era of fusion research at MIT, one where we seek to tackle the most complex fusion technology challenges on timescales commensurate with the urgency of the problem we face: the energy transition,” says Nuno Loureiro, director of the PSFC, a professor of nuclear science and engineering, and the Herman Feshbach Professor of Physics. “It’s ambitious, bold, and critical — and that’s exactly why we do it.”

“What’s exciting about this project is that it aligns the resources we have today — substantial research infrastructure, off-the-shelf technologies, and MIT expertise — to address the key resource we lack in tackling climate change: time. Using the Schmidt Laboratory for Materials in Nuclear Technologies, MIT researchers advancing fusion energy, nuclear power, and other technologies critical to the future of energy will be able to act now and move fast,” says Elsa Olivetti, the Jerry McAfee Professor in Engineering and a mission director of MIT’s Climate Project.

In addition to advancing research, LMNT will provide a platform for educating and training students in the increasingly important areas of fusion technology. LMNT’s location on MIT’s main campus gives students the opportunity to lead research projects and help manage facility operations. It also continues the hands-on approach to education that has defined the PSFC, reinforcing that direct experience in large-scale research is the best approach to create fusion scientists and engineers for the expanding fusion industry workforce.

Benoit Forget, head of NSE and the Korea Electric Power Professor of Nuclear Engineering, notes, “This new laboratory will give nuclear science and engineering students access to a unique research capability that will help shape the future of both fusion and fission energy.”

Accelerating progress on big challenges

Philanthropic support has helped LMNT leverage existing infrastructure and expertise to move from concept to facility in just one-and-a-half years — a fast timeline for establishing a major research project.

“I’m just as excited about this research model as I am about the materials science. It shows how focused philanthropy and MIT’s strengths can come together to build something that’s transformational — a major new facility that helps researchers from the public and private sectors move fast on fusion materials,” emphasizes Hartwig.

With this approach, the PSFC is executing a major public-private partnership in fusion energy, realizing a research model that the U.S. fusion community has only recently started to explore, and demonstrating the crucial role that universities can play in accelerating the materials and technology required for fusion energy.

“Universities have long been at the forefront of tackling society’s biggest challenges, and the race to identify new forms of energy and address climate change demands bold, high-risk, high-reward approaches,” says Ian Waitz, MIT’s vice president for research. “LMNT is helping turn fusion energy from a long-term ambition into a near-term reality.”


How the brain distinguishes between ambiguous hypotheses

Neural activity patterns can encode competing hypotheses about which landmark will lead to the correct destination.


When navigating a place that we’re only somewhat familiar with, we often rely on unique landmarks to help make our way. However, if we’re looking for an office in a brick building, and there are many brick buildings along our route, we might use a rule like looking for the second building on a street, rather than trying to distinguish the building itself.

Until that ambiguity is resolved, we must hold in mind that there are multiple possibilities (or hypotheses) for where we are in relation to our destination. In a study of mice, MIT neuroscientists have now discovered that these hypotheses are explicitly represented in the brain by distinct neural activity patterns.

This is the first time that neural activity patterns that encode simultaneous hypotheses have been seen in the brain. The researchers found that these representations, which were observed in the brain’s retrosplenial cortex (RSC), not only encode hypotheses but also could be used by the animals to choose the correct way to go.

“As far as we know, no one has shown in a complex reasoning task that there’s an area in association cortex that holds two hypotheses in mind and then uses one of those hypotheses, once it gets more information, to actually complete the task,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Jakob Voigts PhD ’17, a former postdoc in Harnett’s lab and now a group leader at the Howard Hughes Medical Institute Janelia Research Campus, is the lead author of the paper, which appears today in Nature Neuroscience.

Ambiguous landmarks

The RSC receives input from the visual cortex, the hippocampal formation, and the anterior thalamus, which it integrates to help guide navigation.

In a 2020 paper, Harnett’s lab found that the RSC uses both visual and spatial information to encode landmarks used for navigation. In that study, the researchers showed that neurons in the RSC of mice integrate visual information about the surrounding environment with spatial feedback of the mice’s own position along a track, allowing them to learn where to find a reward based on landmarks that they saw.

In their new study, the researchers wanted to delve further into how the RSC uses spatial information and situational context to guide navigational decision-making. To do that, the researchers devised a much more complicated navigational task than typically used in mouse studies. They set up a large, round arena, with 16 small openings, or ports, along the side walls. One of these openings would give the mice a reward when they stuck their nose through it. In the first set of experiments, the researchers trained the mice to go to different reward ports indicated by dots of light on the floor that were only visible when the mice got close to them.

Once the mice learned to perform this relatively simple task, the researchers added a second dot. The two dots were always the same distance from each other and from the center of the arena. But now the mice had to go to the port by the counterclockwise dot to get the reward. Because the dots were identical and only became visible at close distances, the mice could never see both dots at once and could not immediately determine which dot was which.

To solve this task, mice therefore had to remember where they expected a dot to show up, integrating their own body position, the direction they were heading, and the path they took to figure out which landmark was which. By measuring RSC activity as the mice approached the ambiguous landmarks, the researchers could determine whether the RSC encodes hypotheses about spatial location. The task was carefully designed to require the mice to use the visual landmarks to obtain rewards, instead of other strategies like odor cues or dead reckoning.

“What is important about the behavior in this case is that mice need to remember something and then use that to interpret future input,” says Voigts, who worked on this study while a postdoc in Harnett’s lab. “It’s not just remembering something, but remembering it in such a way that you can act on it.”

The researchers found that as the mice accumulated information about which dot might be which, populations of RSC neurons displayed distinct activity patterns while the information remained incomplete. Each of these patterns appears to correspond to a hypothesis about where the mouse thought it was with respect to the reward.

When the mice got close enough to figure out which dot indicated the reward port, these patterns collapsed into the one representing the correct hypothesis. The findings suggest that these patterns not only passively store hypotheses, they can also be used to compute how to get to the correct location, the researchers say.
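One simple way to picture hypotheses that coexist and then collapse is as a probability distribution updated by evidence. The toy sketch below illustrates that general idea only; it is not a model from the study, and all likelihood values are arbitrary assumptions.

```python
import numpy as np

def update_beliefs(prior, likelihoods):
    """One Bayesian update: weight prior by evidence likelihoods, renormalize."""
    posterior = np.asarray(prior) * np.asarray(likelihoods)
    return posterior / posterior.sum()

# Start undecided between "dot A marks the reward" and "dot B marks the reward".
beliefs = np.array([0.5, 0.5])

# Weak early evidence keeps both hypotheses alive...
beliefs = update_beliefs(beliefs, [0.55, 0.45])
print(beliefs)  # ~[0.55, 0.45]: both hypotheses still represented

# ...until a close-up view strongly favors one and the belief collapses.
beliefs = update_beliefs(beliefs, [0.95, 0.05])
print(beliefs)  # ~[0.96, 0.04]: collapsed onto the correct hypothesis
```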

“We show that RSC has the required information for using this short-term memory to distinguish the ambiguous landmarks. And we show that this type of hypothesis is encoded and processed in a way that allows the RSC to use it to solve the computation,” Voigts says.

Interconnected neurons

When analyzing their initial results, Harnett and Voigts consulted with MIT Professor Ila Fiete, who had run a study about 10 years ago using an artificial neural network to perform a similar navigation task.

That study, previously published on bioRxiv, showed that the neural network displayed activity patterns that were conceptually similar to those seen in the animal studies run by Harnett’s lab. The neurons of the artificial neural network ended up forming highly interconnected low-dimensional networks, like the neurons of the RSC.

“That interconnectivity seems, in ways that we still don’t understand, to be key to how these dynamics emerge and how they’re controlled. And it’s a key feature of how the RSC holds these two hypotheses in mind at the same time,” Harnett says.

In his lab at Janelia, Voigts now plans to investigate how other brain areas involved in navigation, such as the prefrontal cortex, are engaged as mice explore and forage in a more naturalistic way, without being trained on a specific task.

“We’re looking into whether there are general principles by which tasks are learned,” Voigts says. “We have a lot of knowledge in neuroscience about how brains operate once the animal has learned a task, but in comparison we know extremely little about how mice learn tasks or what they choose to learn when given freedom to behave naturally.”

The research was funded, in part, by the National Institutes of Health, a Simons Center for the Social Brain at MIT postdoctoral fellowship, the National Institute of General Medical Sciences, and the Center for Brains, Minds, and Machines at MIT, funded by the National Science Foundation.


Former MIT researchers advance a new model for innovation

Focused research organizations (FROs) undertake large research efforts and have begun to yield scientific advances.


Academic research groups and startups are essential drivers of scientific progress. But some projects, like the Hubble Space Telescope or the Human Genome Project, are too big for any one academic lab or loose consortium. They’re also not immediately profitable enough for industry to take on.

That’s the gap researchers at MIT were trying to fill when they created the concept of focused research organizations, or FROs. They describe a FRO as a new type of entity, often philanthropically funded, that undertakes large research efforts using tightly coordinated teams to create a public good that accelerates scientific progress.

The original idea for focused research organizations came out of talks among researchers, most of whom were working to map the brain in MIT Professor Ed Boyden’s lab. After they began publishing their ideas, however, the researchers realized FROs could be a powerful tool to unlock scientific advances across many other applications.

“We were quite pleasantly surprised by the range of fields where we see FRO-shaped problems,” says Adam Marblestone, a former MIT research scientist who co-founded the nonprofit Convergent Research to help launch FROs in 2021. “Convergent has FRO proposals from climate, materials science, chemistry, biology — we even have launched a FRO on software for math. You wouldn’t expect math to be something with a large-scale technological research bottleneck, but it turns out even there, we found a software engineering bottleneck that needed to be solved.”

Marblestone helped formulate the idea for focused research organizations at MIT with a group including Andrew Payne SM ’17, PhD ’21 and Sam Rodriques PhD ’19, who were PhD students in Boyden’s lab at the time. Since then, the FRO concept has caught on. Convergent has helped attract philanthropic funding for FROs working to decode the immune system, identify the unintended targets of approved drugs, and understand the impacts of carbon dioxide removal in our oceans.

In total, Convergent has supported the creation of 10 FROs since its founding in 2021. Many of those groups have already released important tools for better understanding our world — and their leaders believe the best is yet to come.

“We’re starting to see these first open-source tools released in important areas,” Marblestone says. “We’re seeing the first concrete evidence that FROs are effective, because no other entity could have released these tools, and I think 2025 is going to be a significant year in terms of our newer FROs putting out new datasets and tools.”

A new model

Marblestone joined Boyden’s lab in 2014 as a research scientist after completing his PhD at Harvard University. He also worked in a new position that Boyden helped create, director of scientific architecting at the MIT Media Lab, in which he tried to organize individual research efforts into larger projects. His own research focused on overcoming the challenges of measuring brain activity across large scales.

Marblestone discussed this and other large-scale neuroscience problems with Payne and Rodriques, and the researchers began thinking about gaps in scientific funding more broadly.

“The combination of myself, Sam, Andrew, Ed, and others’ experiences trying to start various large brain-mapping projects convinced us of the gap in support for medium-sized science and engineering teams with startup-inspired structures, built for the nonprofit purpose of building scientific infrastructure,” Marblestone says.

Through MIT, the researchers also connected with Tom Kalil, who was at the time chief innovation officer at Schmidt Futures, a philanthropic initiative of Eric and Wendy Schmidt. Rodriques wrote about the concept of a focused research organization as the last chapter of his PhD thesis in 2019.

“Ed always encouraged us to dream very, very big,” Rodriques says. “We were always trying to think about the hardest problems in biology and how to tackle them. My thesis basically ended with me explaining why we needed a new structure that is like a company, but nonprofit and dedicated to science.”

As part of a fellowship with the Federation of American Scientists in 2020, and working with Kalil, Marblestone interviewed scientists in dozens of fields outside of neuroscience and learned that the funding gap existed across disciplines.

When Rodriques and Marblestone published an essay about their findings, it helped attract philanthropic funding, which Marblestone, Kalil, and co-founder Anastasia Gamick used to launch Convergent Research, a nonprofit science studio for launching FROs.

“I see Ed’s lab as a melting pot where myself, Ed, Sam, and others worked on articulating a need and identifying specific projects that might make sense as FROs,” Marblestone says. “All those ideas later got crystallized when we created Convergent Research.”

In 2021, Convergent helped launch the first FROs: E11 Bio, which is led by Payne and committed to developing tools to understand how the brain is wired, and Cultivarium, a FRO making microorganisms more accessible for work in synthetic biology.

“From our brain mapping work we started asking the question, ‘Are there other projects that look like this that aren’t getting funded?’” Payne says. “We realized there was a gap in the research ecosystem, where some of these interdisciplinary, team science projects were being systematically overlooked. We knew a lot of amazing things would come out of getting those projects funded.”

Tools to advance science

Early progress from the first focused research organizations has strengthened Marblestone’s conviction that they’re filling a gap.

[C]Worthy is the FRO building tools to ensure safe, ocean-based carbon dioxide removal. It recently released an interactive map of alkaline activity to improve our understanding of one method for sequestering carbon known as ocean alkalinity enhancement. Last year, a math FRO, Lean, released a programming language and proof assistant that was used by Google’s DeepMind AI lab to solve problems in the International Mathematical Olympiad, achieving the same level as a silver medalist in the competition for the first time. The synthetic biology FRO Cultivarium, in turn, has already released software that can predict growth conditions for microbes based on their genome.

Last year, E11 Bio previewed a new method for mapping the brain called PRISM, which it has used to map out a portion of the mouse hippocampus. It will be making the data and mapping tool available to all researchers in coming months.

“A lot of this early work has proven you can put a really talented team together and move fast to go from zero to one,” Payne says. “The next phase is proving FROs can continue to build on that momentum and develop even more datasets and tools, establish even bigger collaborations, and scale their impact.”

Payne credits Boyden for fostering an ecosystem where researchers could think about problems beyond their narrow area of study.

“Ed’s lab was a really intellectually stimulating, collaborative environment,” Payne says. “He trains his students to think about impact first and work backward. It was a bunch of people thinking about how they were going to change the world, and that made it a particularly good place to develop the FRO idea.”

Marblestone says supporting FROs has been the highest-impact thing he’s been able to do in his career. Still, he believes the success of FROs should be judged over periods closer to 10 years and will depend on not just the tools they produce but also whether they spin out companies, partner with other institutes, and create larger, long-lasting initiatives to deploy what they built.

“We were initially worried people wouldn’t be willing to join these organizations because it doesn’t offer tenure and it doesn’t offer equity in a startup,” Marblestone says. “But we’ve been able to recruit excellent leaders, scientists, engineers, and others to create highly motivated teams. That’s good evidence this is working. As we get strong projects and good results, I hope it will create this flywheel where it becomes easier to fund these ideas, more scientists will come up with them, and I think we’re starting to get there.”


Different anesthetics, same result: unconsciousness by shifting brainwave phase

MIT study finds an easily measurable brain wave shift may be a universal marker of unconsciousness under anesthesia.


At the level of molecules and cells, ketamine and dexmedetomidine work very differently, but in the operating room they do the same exact thing: anesthetize the patient. By demonstrating how these distinct drugs achieve the same result, a new study in animals by neuroscientists at The Picower Institute for Learning and Memory at MIT identifies a potential signature of unconsciousness that is readily measurable to improve anesthesiology care.

What the two drugs have in common, the researchers discovered, is the way they push around brain waves, which are produced by the collective electrical activity of neurons. When brain waves are in phase, meaning the peaks and valleys of the waves are aligned, local groups of neurons in the brain’s cortex can share information to produce conscious cognitive functions such as attention, perception, and reasoning, says Picower Professor Earl K. Miller, senior author of the new study in Cell Reports. When brain waves fall out of phase, local communications, and therefore functions, fall apart, producing unconsciousness.

The finding, led by graduate student Alexandra Bardon, not only adds to scientists’ understanding of the dividing line between consciousness and unconsciousness, Miller says, but also could provide a common new measure for anesthesiologists who use a variety of different anesthetics to maintain patients on the proper side of that line during surgery.

“If you look at the way phase is shifted in our recordings, you can barely tell which drug it was,” says Miller, a faculty member in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences. “That’s valuable for medical practice. Plus if unconsciousness has a universal signature, it could also reveal the mechanisms that generate consciousness.”

If more anesthetic drugs are also shown to affect phase in the same way, then anesthesiologists might be able to use brain wave phase alignment as a reliable marker of unconsciousness as they titrate doses of anesthetic drugs, Miller says, regardless of which particular mix of drugs they are using. That insight could advance efforts to build closed-loop systems that assist anesthesiologists by continuously adjusting drug doses based on brain wave measurements of the patient’s unconsciousness.
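At its core, such a closed-loop system is a feedback controller. The sketch below shows the general shape of that idea under loudly labeled assumptions: the phase-alignment index, target value, gain, and rate limits are all hypothetical placeholders, not parameters of the system described here.

```python
def update_infusion_rate(rate, phase_index, target=0.8, gain=0.5,
                         min_rate=0.0, max_rate=10.0):
    """Proportional control step: raise the dose when the (hypothetical)
    unconsciousness marker falls below target, lower it when the marker
    overshoots. All values are illustrative assumptions."""
    error = target - phase_index
    new_rate = rate + gain * error
    # Clamp to a safe operating range.
    return max(min_rate, min(max_rate, new_rate))

# Example: the hypothetical phase-alignment index reads 0.6, below the
# 0.8 target, so the infusion rate ticks up from 2.0 to 2.1.
print(update_infusion_rate(rate=2.0, phase_index=0.6))
```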

Miller has been collaborating with study co-author Emery N. Brown, an anesthesiologist and Edward Hood Taplin Professor of Computational Neuroscience and Medical Engineering in the Picower Institute, on building such a system. In a recent clinical trial with colleagues in Japan, Brown demonstrated that monitoring brain wave power signals using EEG enabled an anesthesiologist to use much less sevoflurane during surgery with young children. The reduced doses proved safe and were associated with many improved clinical outcomes, including a reduced incidence of post-operative delirium.

Phase findings

Neuroscientists studying anesthesia have rarely paid attention to phase, but in the new study, Bardon, Brown, and Miller’s team made a point of it as they anesthetized two animals.

After the animals lost consciousness, the measurements indicated a substantial increase in “phase locking,” especially at low frequencies. Phase locking means that the relative differences in phase remained more stable. But what caught the researchers’ attention were the differences that became locked in: within each hemisphere, regardless of which anesthetic they used, brain wave phase became misaligned between the dorsolateral and ventrolateral regions of the prefrontal cortex.

Surprisingly, brain wave phase across hemispheres became more aligned, not less. But Miller notes that this is still a big shift from the conscious state, in which brain hemispheres are typically not aligned well, so the finding is a further indication that major changes in phase alignment, albeit in different ways at different distances, are a correlate of unconsciousness compared to wakefulness.

“The increase in interhemispheric alignment of activity by anesthetics seems to reverse the pattern observed in the awake, cognitively engaged brain,” the Bardon and Miller team wrote in Cell Reports.

Determined by distance

Distance proved to be a major factor in determining the change in phase alignment. Even across the 2.5 millimeters of a single electrode array, low-frequency waves moved 20-30 degrees out of alignment. Across the 20 or so millimeters between arrays in the upper (dorsolateral) and lower (ventrolateral) regions within a hemisphere, that would mean a roughly 180-degree shift in phase alignment, which is a complete offset of the waves.
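The extrapolation follows if the phase gradient scales roughly linearly with distance: 20 millimeters spans eight 2.5-millimeter segments, and eight shifts of 20 to 30 degrees accumulate to roughly 160 to 240 degrees, bracketing that 180-degree complete offset.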

The dependence on distance is consistent with the idea of waves traveling across the cortex, Miller says. Indeed, in a 2022 study, Miller and Brown’s labs showed that the anesthetic propofol induced a powerful low-frequency traveling wave that swept straight across the cortex, overwhelming higher-frequency straight and rotating waves.

The new results raise many opportunities for follow-up studies, Miller says. Does propofol also produce this signature of changed phase alignment? What role do traveling waves play in the phenomenon? And given that sleep is also characterized by increased power in slow wave frequencies, but is definitely not the same state as anesthesia-induced unconsciousness, could phase alignment explain the difference?

In addition to Bardon, Brown, and Miller, the paper’s other authors are Jesus Ballesteros, Scott Brincat, Jefferson Roy, Meredith Mahnke, and Yumiko Ishizawa.

The U.S. Department of Energy, the National Institutes of Health, the Simons Center for the Social Brain, the Freedom Together Foundation, and the Picower Institute provided support for the research.


Physicists observe a new form of magnetism for the first time

The magnetic state offers a new route to “spintronic” memory devices that would be faster and more efficient than their electronic counterparts.


MIT physicists have demonstrated a new form of magnetism that could one day be harnessed to build faster, denser, and less power-hungry “spintronic” memory chips.

The new magnetic state is a mash-up of two main forms of magnetism: the ferromagnetism of everyday fridge magnets and compass needles, and antiferromagnetism, in which materials have magnetic properties at the microscale yet are not macroscopically magnetized.

The team has termed this new state “p-wave magnetism.”

Physicists have long observed that electrons of atoms in regular ferromagnets share the same orientation of “spin,” like so many tiny compasses pointing in the same direction. This spin alignment generates a magnetic field, which gives a ferromagnet its inherent magnetism. Electrons belonging to magnetic atoms in an antiferromagnet also have spin, although these spins alternate, with electrons orbiting neighboring atoms aligning their spins antiparallel to each other. Taken together, the equal and opposite spins cancel out, and the antiferromagnet does not exhibit macroscopic magnetization.

The team discovered the new p-wave magnetism in nickel iodide (NiI2), a two-dimensional crystalline material that they synthesized in the lab. Like a ferromagnet, the electrons exhibit a preferred spin orientation, and, like an antiferromagnet, equal populations of opposite spins result in a net cancellation. However, the spins on the nickel atoms exhibit a unique pattern, forming spiral-like configurations within the material that are mirror-images of each other, much like the left hand is the right hand’s mirror image.

What’s more, the researchers found this spiral spin configuration enabled them to carry out “spin switching”: Depending on the direction of spiraling spins in the material, they could apply a small electric field in a related direction to easily flip a left-handed spiral of spins into a right-handed spiral of spins, and vice-versa.

The ability to switch electron spins is at the heart of “spintronics,” which is a proposed alternative to conventional electronics. With this approach, data can be written in the form of an electron’s spin, rather than its electronic charge, potentially allowing orders of magnitude more data to be packed onto a device while using far less power to write and read that data.   

“We showed that this new form of magnetism can be manipulated electrically,” says Qian Song, a research scientist in MIT’s Materials Research Laboratory. “This breakthrough paves the way for a new class of ultrafast, compact, energy-efficient, and nonvolatile magnetic memory devices.”

Song and his colleagues published their results May 28 in the journal Nature. MIT co-authors include Connor Occhialini, Batyr Ilyas, Emre Ergeçen, Nuh Gedik, and Riccardo Comin, along with Rafael Fernandes at the University of Illinois Urbana-Champaign, and collaborators from multiple other institutions.

Connecting the dots

The discovery expands on work by Comin’s group in 2022. At that time, the team probed the magnetic properties of the same material, nickel iodide. At the microscopic level, nickel iodide resembles a triangular lattice of nickel and iodine atoms. Nickel is the material’s main magnetic ingredient, as the electrons on the nickel atoms exhibit spin, while those on iodine atoms do not.

In those experiments, the team observed that the spins of those nickel atoms were arranged in a spiral pattern throughout the material’s lattice, and that this pattern could spiral in two different orientations.

At the time, Comin had no idea that this unique pattern of atomic spins could enable precise switching of spins in surrounding electrons. This possibility was later raised by collaborator Rafael Fernandes, who along with other theorists was intrigued by a recently proposed idea for a new, unconventional, “p-wave” magnet, in which electrons moving along opposite directions in the material would have their spins aligned in opposite directions.

Fernandes and his colleagues recognized that if the spins of atoms in a material form the geometric spiral arrangement that Comin observed in nickel iodide, that would be a realization of a “p-wave” magnet. Then, when an electric field is applied to switch the “handedness” of the spiral, it should also switch the spin alignment of the electrons traveling along the same direction.

In other words, such a p-wave magnet could enable simple and controllable switching of electron spins, in a way that could be harnessed for spintronic applications.

“It was a completely new idea at the time, and we decided to test it experimentally because we realized nickel iodide was a good candidate to show this kind of p-wave magnet effect,” Comin says.

Spin current

For their new study, the team synthesized single-crystal flakes of nickel iodide by first depositing powders of the respective elements on a crystalline substrate, which they placed in a high-temperature furnace. The process causes the elements to settle into layers, each arranged microscopically in a triangular lattice of nickel and iodine atoms.

“What comes out of the oven are samples that are several millimeters wide and thin, like cracker bread,” Comin says. “We then exfoliate the material, peeling off even smaller flakes, each several microns wide, and a few tens of nanometers thin.”

The researchers wanted to know whether the spiral geometry of the nickel atoms’ spins would indeed force electrons traveling in opposite directions to have opposite spins, as Fernandes predicted a p-wave magnet should. To observe this, the group applied to each flake a beam of circularly polarized light — light that produces an electric field rotating in a particular direction, for instance, either clockwise or counterclockwise.

They reasoned that if traveling electrons interacting with the spin spirals have spins aligned in the same direction, then incoming light, polarized in that same direction, should resonate and produce a characteristic signal. Such a signal would confirm that the traveling electrons’ spins align because of the spiral configuration and, furthermore, that the material does in fact exhibit p-wave magnetism.

And indeed, that’s what the group found. In experiments with multiple nickel iodide flakes, the researchers directly observed that the direction of the electrons’ spin was correlated with the handedness of the light used to excite those electrons. This is a telltale signature of p-wave magnetism, observed here for the first time.

Going a step further, they looked to see whether they could switch the spins of the electrons by applying an electric field, or a small voltage, along different directions through the material. They found that when the direction of the electric field was aligned with the spin spiral, it switched the spins of the electrons traveling along that direction, producing a current of like-spinning electrons.

“With such a current of spin, you can do interesting things at the device level, for instance, you could flip magnetic domains that can be used for control of a magnetic bit,” Comin explains. “These spintronic effects are more efficient than conventional electronics because you’re just moving spins around, rather than moving charges. That means you’re not subject to any dissipation effects that generate heat, which is essentially the reason computers heat up.”

“We just need a small electric field to control this magnetic switching,” Song adds. “P-wave magnets could save five orders of magnitude of energy. Which is huge.”

“We are excited to see these cutting-edge experiments confirm our prediction of p-wave spin polarized states,” says Libor Šmejkal, head of the Max Planck Research Group in Dresden, Germany, who is one of the authors of the theoretical work that proposed the concept of p-wave magnetism but was not involved in the new paper. “The demonstration of electrically switchable p-wave spin polarization also highlights the promising applications of unconventional magnetic states.”

The team observed p-wave magnetism in the nickel iodide flakes only at ultracold temperatures of about 60 kelvins.

“That’s below liquid nitrogen, which is not necessarily practical for applications,” Comin says. “But now that we’ve realized this new state of magnetism, the next frontier is finding a material with these properties, at room temperature. Then we can apply this to a spintronic device.”

This research was supported, in part, by the National Science Foundation, the Department of Energy, and the Air Force Office of Scientific Research.


MIT students and postdoc explore the inner workings of Capitol Hill

In an annual tradition, MIT affiliates embarked on a trip to Washington to explore federal lawmaking and advocate for science policy.


This spring, 25 MIT students and a postdoc traveled to Washington, where they met with congressional offices to advocate for federal science funding and for specific policies informed by their research on pressing issues — including artificial intelligence, health, climate and ocean science, energy, and industrial decarbonization. Organized annually by the Science Policy Initiative (SPI), this year’s trip came at a particularly critical moment, as science agencies are facing unprecedented funding cuts.

Over the course of two days, the group met with 66 congressional offices across 35 states and select committees, advocating for stable funding for science agencies such as the Department of Energy, the National Oceanic and Atmospheric Administration, the National Science Foundation, NASA, and the Department of Defense.

Congressional Visit Days (CVD), organized by SPI, offer students and researchers a hands-on introduction to federal policymaking. In addition to meetings on Capitol Hill, participants connected with MIT alumni in government and explored potential career paths in science policy.

This year’s trip was co-organized by Mallory Kastner, a PhD student in biological oceanography at MIT and Woods Hole Oceanographic Institution (WHOI), and Julian Ufert, a PhD student in chemical engineering at MIT. Ahead of the trip, participants attended training sessions hosted by SPI, the MIT Washington Office, and the MIT Policy Lab. These sessions covered effective ways to translate scientific findings into policy, strategies for a successful advocacy meeting, and hands-on demos of a congressional meeting.

Participants then contacted their representatives’ offices in advance and tailored their talking points to each office’s committees and priorities. This structure gave participants direct experience initiating policy conversations with those actively working on issues they cared about.

Audrey Parker, a PhD student in civil and environmental engineering studying methane abatement, emphasizes the value of connecting scientific research with priorities in Congress: “Through CVD, I had the opportunity to contribute to conversations on science-backed solutions and advocate for the role of research in shaping policies that address national priorities — including energy, sustainability, and climate change.”

To many of the participants, stepping into the shoes of a policy advisor was a welcome diversion from their academic duties and scientific routine. For Alex Fan, an undergraduate majoring in electrical engineering and computer science, the trip was enlightening: “It showed me that student voices really do matter in shaping science policy. Meeting with lawmakers, especially my own representative, Congresswoman Bonamici, made the experience personal and inspiring. It has made me seriously consider a future at the intersection of research and policy.”

“I was truly impressed by the curiosity and dedication of our participants, as well as the preparation they brought to each meeting,” says Ufert. “It was inspiring to watch them grow into confident advocates, leveraging their experience as students and their expertise as researchers to advise on policy needs.”

Kastner adds: “It was eye-opening to see the disconnect between scientists and policymakers. A lot of knowledge we generate as scientists rarely makes it onto the desk of congressional staff, and even more rarely onto the congressperson’s. CVD was an incredibly empowering experience for me as a scientist — not only am I more motivated to broaden my scientific outreach to legislators, but I now also have the skills to do so.”

Funding is the bedrock that allows scientists to carry out research and make discoveries. In the United States, federal funding for science has enabled major technological breakthroughs, driven advancements in manufacturing and other industrial sectors, and led to important environmental protection standards. While participants found that the degree of support for science funding varied among offices across the political spectrum, they were reassured that many offices on both sides of the aisle still recognized the significance of science.


Eight with MIT ties win 2025 Hertz Foundation Fellowships

The fellowships recognize doctoral students who have “the extraordinary creativity and principled leadership necessary to tackle problems others can’t solve.”


The Hertz Foundation announced that it has awarded fellowships to eight MIT affiliates. The prestigious award provides each recipient with five years of doctoral-level research funding (up to a total of $250,000), which gives them an unusual measure of independence in their graduate work to pursue groundbreaking research.

The MIT-affiliated awardees are Matthew Caren ’25; April Qiu Cheng ’24; Arav Karighattam, who begins his PhD at the Institute this fall; Benjamin Lou ’25; Isabelle A. Quaye ’22, MNG ’24; Albert Qin ’24; Ananthan Sadagopan ’24; and Gianfranco (Franco) Yee ’24.

“Hertz Fellows embody the promise of future scientific breakthroughs, major engineering achievements and thought leadership that is vital to our future,” said Stephen Fantone, chair of the Hertz Foundation board of directors and president and CEO of Optikos Corp., in the announcement. “The newest recipients will direct research teams, serve in leadership positions in our government and take the helm of major corporations and startups that impact our communities and the world.”

In addition to funding, fellows receive access to Hertz Foundation programs throughout their lives, including events, mentoring, and networking. They join the ranks of more than 1,300 Hertz Fellows who, since the fellowship was established in 1963, have become leaders and scholars in a range of technology, science, and engineering fields. Former fellows have contributed to breakthroughs in such areas as advanced medical therapies, computational systems used by billions of people daily, global defense networks, and the recent launch of the James Webb Space Telescope.

This year’s MIT recipients are among a total of 19 Hertz Foundation Fellows selected from across the United States.

Matthew Caren ’25 studied electrical engineering and computer science, mathematics, and music at MIT. His research focuses on computational models of how people use their voices to communicate sound at the Computer Science and Artificial Intelligence Laboratory (CSAIL), and on interpretable real-time machine listening systems at the MIT Music Technology Lab. He spent several summers developing large language model systems and bioinformatics algorithms at Apple, and a year researching expressive digital instruments at Stanford University’s Center for Computer Research in Music and Acoustics. He chaired the MIT Schwarzman College of Computing Undergraduate Advisory Group, where he led undergraduate committees on interdisciplinary computing and AI, and was a founding member of the MIT Voxel Lab for music and arts technology. In addition, Caren has invented novel instruments used by Grammy-winning musicians on international stages. He plans to pursue a doctorate at Stanford.

April Qiu Cheng ’24 majored in physics at MIT, graduating in just three years. Their research focused on black hole phenomenology, gravitational-wave inference, and the use of fast radio bursts as a statistical probe of large-scale structure. They received numerous awards, including an MIT Outstanding Undergraduate Research Award, the MIT Barrett Prize, the Astronaut Scholarship, and the Princeton President’s Fellowship. Cheng contributed to the physics department community by serving as vice president of advocacy for Undergraduate Women in Physics and as the undergraduate representative on the Physics Values Committee. In addition, they have participated in various science outreach programs for middle and high school students. Since graduating, they have been a Fulbright Fellow at the Max Planck Institute for Gravitational Physics, where they have been studying gravitational-wave cosmology. Cheng will begin a doctorate in astrophysics at Princeton in the fall.

Arav Karighattam was home schooled, and by age 14 had completed most of the undergraduate and graduate courses in physics and mathematics at the University of California at Davis. He graduated from Harvard University in 2024 with a bachelor’s degree in mathematics and will attend MIT to pursue a PhD, also in mathematics. Karighattam is fascinated by algebraic number theory and arithmetic geometry and seeks to understand the mysteries underlying the structure of solutions to Diophantine equations. He also wants to apply his mathematical skills to mitigating climate change and biodiversity loss. At a recent conference at MIT titled “Mordell’s Conjecture 100 Years Later,” Karighattam distinguished himself as the youngest speaker to present a paper among graduate students, postdocs, and faculty members.

Benjamin Lou ’25 graduated from MIT in May with a BS in physics and is interested in finding connections between fundamental truths of the universe. One of his research projects applies symplectic techniques to understand the nature of precision measurements using quantum states of light. Another is about geometrically unifying several theorems in quantum mechanics using the Prüfer transformation. For his work, Lou was honored with the Barry Goldwater Scholarship. Lou will pursue his doctorate at MIT, where he plans to work on unifying quantum mechanics and gravity, with an eye toward uncovering experimentally testable predictions. Living with the debilitating disease spinal muscular atrophy, which causes severe, full-body weakness and makes scratchwork unfeasible, Lou has developed a unique learning style emphasizing mental visualization. He also co-founded and helped lead the MIT Assistive Technology Club, dedicated to empowering those with disabilities using creative technologies. He is working on a robotic self-feeding device for those who cannot eat independently.

Isabelle A. Quaye ’22, MNG ’24 studied electrical engineering and computer science as an undergraduate at MIT, with a minor in economics. She was awarded competitive fellowships and scholarships from Hyundai, Intel, D. E. Shaw, and Palantir, and received the Albert G. Hill Prize, given to juniors and seniors who have maintained high academic standards and have made continued contributions to improving the quality of life for underrepresented students at MIT. While obtaining her master’s degree at MIT, she focused on theoretical computer science and systems. She is currently a software engineer at Apple, where she continues to develop frameworks that harness intelligence from data to improve systems and processes. Quaye also believes in contributing to the advancement of science and technology through teaching and has volunteered in summer programs to teach programming and informatics to high school students in the United States and Ghana.

Albert Qin ’24 majored in physics and mathematics at MIT. He also pursued an interest in biology, researching single-molecule approaches to study transcription factor diffusion in living cells and studying the cell circuits that control animal development. His dual interests have motivated him to find common ground between physics and biological fields. Inspired by his MIT undergraduate advisors, he hopes to become a teacher and mentor for aspiring young scientists. Qin is currently pursuing a PhD at Princeton University, addressing questions about the behavior of neural networks — both artificial and biological — using a variety of approaches and ideas from physics and neuroscience.

Ananthan Sadagopan ’24 is currently pursuing a doctorate in biological and biomedical science at Harvard University, focusing on chemical biology and the development of new therapeutic strategies for intractable diseases. He earned his BS at MIT in chemistry and biology in three years, leading projects that characterized somatic perturbations of X chromosome inactivation in cancer, developed machine learning tools for cancer dependency prediction, used small molecules for targeted protein relocalization, and created a generalizable strategy to drug the most mutated gene in cancer (TP53). He published as first author in top journals, such as Cell, during his undergraduate career, and he holds patents related to his work on cancer dependency prediction and drugging TP53. While at the Institute, he served as president of the Chemistry Undergraduate Association, won both the First-Year and Senior Chemistry Achievement Awards, and headed the events committee for the MIT Science Olympiad.

Gianfranco (Franco) Yee ’24 majored in biological engineering at MIT, conducting research in the Manalis Lab on chemical gradients in the gut microenvironment and helping to develop a novel gut-on-a-chip platform for culturing organoids under these gradients. His senior thesis extended this work to the microbiome, investigating host-microbe interactions linked to intestinal inflammation and metabolic disorders. Yee also earned a concentration in education at MIT, and is committed to increasing access to STEM resources in underserved communities. He co-founded Momentum AI, an educational outreach program that teaches computer science to high school students across Greater Boston. The inaugural program served nearly 100 students and included remote outreach efforts in Ukraine and China. Yee has also worked with MIT Amphibious Achievement and the MIT Office of Engineering Outreach Programs. He currently attends Gerstner Sloan Kettering Graduate School, where he plans to leverage the gut microbiome and immune system to develop innovative therapeutic treatments.

Former Hertz Fellows include two Nobel laureates; recipients of 11 Breakthrough Prizes and three MacArthur Foundation “genius awards;” and winners of the Turing Award, the Fields Medal, the National Medal of Technology, the National Medal of Science, and the Wall Street Journal Technology Innovation Award. In addition, 54 are members of the National Academies of Sciences, Engineering and Medicine, and 40 are fellows of the American Association for the Advancement of Science. Hertz Fellows hold over 3,000 patents, have founded more than 375 companies, and have created hundreds of thousands of science and technology jobs.


$20 million gift supports theoretical physics research and education at MIT 

Gift from the Leinweber Foundation, in addition to a $5 million commitment from the School of Science, will drive discovery, collaboration, and the next generation of physics leaders.


A $20 million gift from the Leinweber Foundation, in addition to a $5 million commitment from the MIT School of Science, will support theoretical physics research and education at MIT.

The Leinweber Foundation’s gifts to five institutions, totaling $90 million, will establish the newly renamed MIT Center for Theoretical Physics – A Leinweber Institute within the Department of Physics, affiliated with the Laboratory for Nuclear Science in the School of Science; Leinweber Institutes for Theoretical Physics at three other top research universities, the University of Michigan, the University of California at Berkeley, and the University of Chicago; and a Leinweber Forum for Theoretical and Quantum Physics at the Institute for Advanced Study.

“MIT has one of the strongest and broadest theory groups in the world,” says Professor Washington Taylor, the director of the newly funded center and a leading researcher in string theory and its connection to observable particle physics and cosmology.

“This landmark endowment from the Leinweber Foundation will enable us to support the best graduate students and postdoctoral researchers to develop their own independent research programs and to connect with other researchers in the Leinweber Institute network. By pledging to support this network and fundamental curiosity-driven science, Larry Leinweber and his family foundation have made a huge contribution to maintaining a thriving scientific enterprise in the United States in perpetuity.”

The Leinweber Foundation’s investment across five institutions — constituting the largest philanthropic commitment ever for theoretical physics research, according to the Science Philanthropy Alliance, a nonprofit organization that promotes philanthropic support for science — will strengthen existing programs at each institution and foster collaboration across the universities. Recipient institutions will work both independently and collaboratively to explore foundational questions in theoretical physics. Each institute will continue to shape its own research focus and programs, while also committing to big-picture cross-institutional convenings around topics of shared interest. Moreover, each institute will have significantly more funding for graduate students and postdocs, including fellowship support for three to eight fully endowed Leinweber Physics Fellows at each institute.

“This gift is a commitment to America’s scientific future,” says Larry Leinweber, founder and president of the Leinweber Foundation. “Theoretical physics may seem abstract to many, but it is the tip of the spear for innovation. It fuels our understanding of how the world works and opens the door to new technologies that can shape society for generations. As someone who has had a lifelong fascination with theoretical physics, I hope this investment not only strengthens U.S. leadership in basic science, but also inspires curiosity, creativity, and groundbreaking discoveries for generations to come.”

The gift to MIT will create a postdoc program that, once fully funded, will initially provide support for up to six postdocs, with two selected per year for a three-year program. In addition, the gift will provide student financial support, including fellowship support, for up to six graduate students per year studying theoretical physics. The goal is to attract the top talent to the MIT Center for Theoretical Physics – A Leinweber Institute and support the ongoing research programs in a more robust way.

A portion of the funding will also provide support for visitors, seminars, and other scholarly activities of current postdocs, faculty, and students in theoretical physics, as well as help cover administrative support.

“Graduate students are the heart of our country’s scientific research programs. Support for their education to become the future leaders of the field is essential for the advancement of the discipline,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics.

The Leinweber Foundation gift is the second significant gift for the center. “We are always grateful to Virgil Elings, whose generous gift helped make possible the space that houses the center,” says Deepto Chakrabarty, head of the Department of Physics. Elings PhD ’66, co-founder of Digital Instruments, which designed and sold scanning probe microscopes, made his gift more than 20 years ago to support a space for theoretical physicists to collaborate.

“Gifts like those from Larry Leinweber and Virgil Elings are critical, especially now in this time of uncertain funding from the federal government for support of fundamental scientific research carried out by our nation’s leading postdocs, research scientists, faculty and students,” adds Mavalvala.

Professor Tracy Slatyer, whose work is motivated by questions of fundamental particle physics — particularly the nature and interactions of dark matter — will become the next director of the MIT Center for Theoretical Physics – A Leinweber Institute beginning this fall. Slatyer will join Mavalvala, Taylor, Chakrabarty, and the rest of the theoretical physics community for a dedication ceremony planned for the near future.

The Leinweber Foundation was founded in 2015 by software entrepreneur Larry Leinweber, and has worked with the Science Philanthropy Alliance since 2021 to shape its philanthropic strategy. “It’s been a true pleasure to work with Larry and the Leinweber family over the past four years and to see their vision take shape,” says France Córdova, president of the Science Philanthropy Alliance. “Throughout his life, Larry has exemplified curiosity, intellectual openness, and a deep commitment to learning. This gift reflects those values, ensuring that generations of scientists will have the freedom to explore, to question, and to pursue ideas that could change how we understand the universe.”


Shaping the future through systems thinking

Ananda Santos Figueiredo, a senior in climate system science and engineering, is charting her own course of impact.


Long before she stepped into a lab, Ananda Santos Figueiredo was stargazing in Brazil, captivated by the cosmos and feeding her curiosity of science through pop culture, books, and the internet. She was drawn to astrophysics for its blend of visual wonder and mathematics.

Even as a child, Santos sensed her aspirations reaching beyond the boundaries of her hometown. “I’ve always been drawn to STEM,” she says. “I had this persistent feeling that I was meant to go somewhere else to learn more, explore, and do more.”

Her parents saw their daughter’s ambitions as an opportunity to create a better future. The summer before her sophomore year of high school, her family moved from Brazil to Florida.  She recalls that moment as “a big leap of faith in something bigger and we had no idea how it would turn out.” She was certain of one thing: She wanted an education that was both technically rigorous and deeply expansive, one that would allow her to pursue all her passions.

At MIT, she found exactly what she was seeking in a community and curriculum that matched her curiosity and ambition. “I’ve always associated MIT with something new and exciting that was grasping towards the very best we can achieve as humans,” Santos says, emphasizing the use of technology and science to significantly impact society. “It’s a place where people aren’t afraid to dream big and work hard to make it a reality.”

As a first-generation college student, she carried the weight of financial stress and the uncertainty that comes with being the first in her family to navigate college in the U.S. But she found a sense of belonging in the MIT community. “Being a first-generation student helped me grow,” she says. “It inspired me to seek out opportunities and help support others too.”

She channeled that energy into student government roles for the undergraduate residence halls. Through Dormitory Council (DormCon) and her dormitory, Simmons Hall, her voice could help shape life on campus. She began serving as reservations chair for her dormitory but ended up becoming president of the dormitory before being elected dining chair and vice president for DormCon. She’s worked to improve dining hall operations and has planned major community events like Simmons Hall’s 20th anniversary and DormCon’s inaugural Field Day.

Now, a senior about to earn her bachelor’s degree, Santos says MIT’s motto, “mens et manus” — “mind and hand” — has deeply resonated with her from the start. “Learning here goes far beyond the classroom,” she says. “I’ve been surrounded by people who are passionate and purposeful. That energy is infectious. It’s changed how I see myself and what I believe is possible.”

Charting her own course

Initially a physics major, Santos’ academic path took a turn after a transformative internship with the World Bank’s data science lab between her sophomore and junior years. There, she used her coding skills to study the impacts of heat waves in the Philippines. The experience opened her eyes to the role technology and data can play in improving lives and broadened her view of what a STEM career could look like.

“I realized I didn’t want to just study the universe — I wanted to change it,” she says. “I wanted to join systems thinking with my interest in the humanities, to build a better world for people and communities.”

When MIT launched a new major in climate system science and engineering (Course 1-12) in 2023, Santos was the first student to declare it. The interdisciplinary structure of the program, blending climate science, engineering, energy systems, and policy, gave her a framework to connect her technical skills to real-world sustainability challenges.

She tailored her coursework to align with her passions and career goals, applying her physics background (now her minor) to understand problems in climate, energy, and sustainable systems. “One of the most powerful things about the major is the breadth,” she says. “Even classes that aren’t my primary focus have expanded how I think.”

Hands-on fieldwork has been a cornerstone of her learning. During MIT’s Independent Activities Period (IAP), she studied climate impacts in Hawai’i through Course 1.091 (Traveling Research Environmental Experiences, or TREX). This year, she studied the design of sustainable polymer systems in Course 1.096/10.496 (Design of Sustainable Polymer Systems) through MISTI’s Global Classroom program. The IAP class brought her to the middle of the Amazon Rainforest to see what the future of plastic production could look like with products from the Amazon. “That experience was incredibly eye-opening,” she explains. “It helped me build a bridge between my own background and the kind of problems that I want to solve in the future.”

Santos also found enjoyment beyond labs and lectures. A member of the MIT Shakespeare Ensemble since her first year, she took to the stage in her final spring production of “Henry V,” performing as both the Chorus and Kate. “The ensemble’s collaborative spirit and the way it brings centuries-old texts to life has been transformative,” she adds.

Her passion for the arts also intersected with her interest in the MIT Lecture Series Committee. She helped host a special screening of the film “Sing Sing,” in collaboration with MIT’s Educational Justice Institute (TEJI). That connection led her to enroll in a TEJI course, illustrating the surprising and meaningful ways that different parts of MIT’s ecosystem overlap. “It’s one of the beautiful things about MIT,” she says. “You stumble into experiences that deeply change you.”

Throughout her time at MIT, the community of passionate, sustainability-focused individuals has been a major source of inspiration. She’s been actively involved with the MIT Office of Sustainability’s decarbonization initiatives and participated in the Climate and Sustainability Scholars Program.

Santos acknowledges that working in sustainability can sometimes feel overwhelming. “Tackling the challenges of sustainability can be discouraging,” she says. “The urgency to create meaningful change in a short period of time can be intimidating. But being surrounded by people who are actively working on it is so much better than not working on it at all.”

Looking ahead, she plans to pursue graduate studies in technology and policy, with aspirations to shape sustainable development, whether through academia, international organizations, or diplomacy.

“The most fulfilling moments I’ve had at MIT are when I’m working on hard problems while also reflecting on who I want to be, what kind of future I want to help create, and how we can be better and kinder to each other,” she says. “That’s what excites me — solving real problems that matter.”


Overlooked cells might explain the human brain’s huge storage capacity

MIT researchers developed a new model of memory that includes critical contributions from astrocytes, a class of brain cells.


The human brain contains about 86 billion neurons. These cells fire electrical signals that help the brain store memories and send information and commands throughout the brain and the nervous system.

The brain also contains billions of astrocytes — star-shaped cells with many long extensions that allow them to interact with millions of neurons. Although they have long been thought to be mainly supportive cells, recent studies have suggested that astrocytes may play a role in memory storage and other cognitive functions.

MIT researchers have now put forth a new hypothesis for how astrocytes might contribute to memory storage. The architecture suggested by their model would help to explain the brain’s massive storage capacity, which is much greater than would be expected using neurons alone.

“Originally, astrocytes were believed to just clean up around neurons, but there’s no particular reason that evolution did not realize that, because each astrocyte can contact hundreds of thousands of synapses, they could also be used for computation,” says Jean-Jacques Slotine, an MIT professor of mechanical engineering and of brain and cognitive sciences, and an author of the new study.

Dmitry Krotov, a research staff member at the MIT-IBM Watson AI Lab and IBM Research, is the senior author of the open-access paper, which appeared May 23 in the Proceedings of the National Academy of Sciences. Leo Kozachkov PhD ’22 is the paper’s lead author.

Memory capacity

Astrocytes have a variety of support functions in the brain: They clean up debris, provide nutrients to neurons, and help to ensure an adequate blood supply.

Astrocytes also send out many thin tentacles, known as processes, which can each wrap around a single synapse — the junctions where two neurons interact with each other — to create a tripartite (three-part) synapse.

Within the past couple of years, neuroscientists have shown that if the connections between astrocytes and neurons in the hippocampus are disrupted, memory storage and retrieval are impaired.

Unlike neurons, astrocytes can’t fire action potentials, the electrical impulses that carry information throughout the brain. However, they can use calcium signaling to communicate with other astrocytes. Over the past few decades, as the resolution of calcium imaging has improved, researchers have found that calcium signaling also allows astrocytes to coordinate their activity with neurons in the synapses that they associate with.

These studies suggest that astrocytes can detect neural activity, which leads them to alter their own calcium levels. Those changes may trigger astrocytes to release gliotransmitters — signaling molecules similar to neurotransmitters — into the synapse.

“There’s a closed circle between neuron signaling and astrocyte-to-neuron signaling,” Kozachkov says. “The thing that is unknown is precisely what kind of computations the astrocytes can do with the information that they’re sensing from neurons.”

The MIT team set out to model what those connections might be doing and how they might contribute to memory storage. Their model is based on Hopfield networks — a type of neural network that can store and recall patterns.

Hopfield networks, originally developed by John Hopfield and Shun-Ichi Amari in the 1970s and 1980s, are often used to model the brain, but it has been shown that these networks can’t store enough information to account for the vast memory capacity of the human brain. A newer, modified version of a Hopfield network, known as dense associative memory, can store much more information through a higher order of couplings between more than two neurons.
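In symbols (a standard formulation of both models, not notation from the new paper), a classical Hopfield network assigns each binary state $s \in \{-1,+1\}^N$ a pairwise energy, while a dense associative memory replaces the pairwise couplings with a steep function of each stored pattern’s overlap:

$$E_{\mathrm{Hopfield}}(s) = -\frac{1}{2}\sum_{i \neq j} W_{ij}\, s_i s_j, \qquad E_{\mathrm{dense}}(s) = -\sum_{\mu=1}^{K} F\Big(\sum_{i=1}^{N} \xi_i^{\mu} s_i\Big),$$

where the $\xi^{\mu}$ are the $K$ stored patterns and $F$ is a rapidly growing function such as $F(x) = x^{n}$. The pairwise network can reliably store only on the order of $0.14\,N$ patterns, while with $F(x) = x^{n}$ the capacity grows roughly as $N^{n-1}$; the price is that expanding the energy yields interaction terms coupling $n$ neurons at a time.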

However, it is unclear how the brain could implement these many-neuron couplings at a hypothetical synapse, since conventional synapses only connect two neurons: a presynaptic cell and a postsynaptic cell. This is where astrocytes come into play.

“If you have a network of neurons, which couple in pairs, there’s only a very small amount of information that you can encode in those networks,” Krotov says. “In order to build dense associative memories, you need to couple more than two neurons. Because a single astrocyte can connect to many neurons, and many synapses, it is tempting to hypothesize that there might exist an information transfer between synapses mediated by this biological cell. That was the biggest inspiration for us to look into astrocytes and led us to start thinking about how to build dense associative memories in biology.”

The neuron-astrocyte associative memory model that the researchers developed in their new paper can store significantly more information than a traditional Hopfield network — more than enough to account for the brain’s memory capacity.
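To make the capacity claim concrete, here is a minimal sketch of a plain dense associative memory (the classical network described above, not the neuron-astrocyte model; the sizes are arbitrary demo choices). With $n = 3$, it stores well beyond the pairwise limit of roughly 14 patterns for 100 units and still cleans up a corrupted memory:

```python
import numpy as np

# Minimal dense associative memory with energy E(s) = -sum_mu F(xi_mu . s),
# using F(x) = x**n with odd n (the standard polynomial form).
# N, K, n are illustrative choices, not values from the paper.
rng = np.random.default_rng(0)
N, K, n = 100, 250, 3                       # units, stored patterns, order
patterns = rng.choice([-1, 1], size=(K, N))

def recall(state, sweeps=5):
    """Asynchronous updates: set each unit to whichever sign lowers E."""
    for _ in range(sweeps):
        for i in rng.permutation(N):
            overlaps = patterns @ state
            up = overlaps + patterns[:, i] * (1 - state[i])    # if s_i = +1
            down = overlaps - patterns[:, i] * (1 + state[i])  # if s_i = -1
            state[i] = 1 if np.sum(up**n) >= np.sum(down**n) else -1
    return state

# Corrupt 10 of the 100 bits of a stored pattern, then recover it.
probe = patterns[0].copy()
probe[rng.choice(N, size=10, replace=False)] *= -1
print("overlap after recall:", recall(probe) @ patterns[0] / N)  # ~1.0
```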

Intricate connections

The extensive biological connections between neurons and astrocytes offer support for the idea that this type of model might explain how the brain’s memory storage systems work, the researchers say. They hypothesize that within astrocytes, memories are encoded by gradual changes in the patterns of calcium flow. This information is conveyed to neurons by gliotransmitters released at synapses that astrocyte processes connect to.

“By careful coordination of these two things — the spatial temporal pattern of calcium in the cell and then the signaling back to the neurons — you can get exactly the dynamics you need for this massively increased memory capacity,” Kozachkov says.

One of the key features of the new model is that it treats astrocytes as collections of processes, rather than a single entity. Each of those processes can be considered one computational unit. Because of the high information storage capabilities of dense associative memories, the ratio of the amount of information stored to the number of computational units is very high and grows with the size of the network. This makes the system not only high capacity, but also energy efficient.

“By conceptualizing tripartite synaptic domains — where astrocytes interact dynamically with pre- and postsynaptic neurons — as the brain’s fundamental computational units, the authors argue that each unit can store as many memory patterns as there are neurons in the network. This leads to the striking implication that, in principle, a neuron-astrocyte network could store an arbitrarily large number of patterns, limited only by its size,” says Maurizio De Pitta, an assistant professor of physiology at the Krembil Research Institute at the University of Toronto, who was not involved in the study.

To test whether this model might accurately represent how the brain stores memory, researchers could try to develop ways to precisely manipulate the connections between astrocytes’ processes, then observe how those manipulations affect memory function.

“We hope that one of the consequences of this work could be that experimentalists would consider this idea seriously and perform some experiments testing this hypothesis,” Krotov says.

In addition to offering insight into how the brain may store memory, this model could also provide guidance for researchers working on artificial intelligence. By varying the connectivity of the process-to-process network, researchers could generate a huge range of models that could be explored for different purposes, for instance, creating a continuum between dense associative memories and attention mechanisms in large language models.

“While neuroscience initially inspired key ideas in AI, the last 50 years of neuroscience research have had little influence on the field, and many modern AI algorithms have drifted away from neural analogies,” Slotine says. “In this sense, this work may be one of the first contributions to AI informed by recent neuroscience research.” 


Why are some rocks on the moon highly magnetic? MIT scientists may have an answer

A large impact could have briefly amplified the moon’s weak magnetic field, creating a momentary spike that was recorded in some lunar rocks.


Where did the moon’s magnetism go? Scientists have puzzled over this question for decades, ever since orbiting spacecraft picked up signs of a high magnetic field in lunar surface rocks. The moon itself has no inherent magnetism today. 

Now, MIT scientists may have solved the mystery. They propose that a combination of an ancient, weak magnetic field and a large, plasma-generating impact may have temporarily created a strong magnetic field, concentrated on the far side of the moon.

In a study appearing today in the journal Science Advances, the researchers show through detailed simulations that an impact, such as from a large asteroid, could have generated a cloud of ionized particles that briefly enveloped the moon. This plasma would have streamed around the moon and concentrated at the opposite location from the initial impact. There, the plasma would have interacted with and momentarily amplified the moon’s weak magnetic field. Any rocks in the region could have recorded signs of the heightened magnetism before the field quickly died away.

This combination of events could explain the presence of highly magnetic rocks detected in a region near the south pole, on the moon’s far side. As it happens, one of the largest impact basins — the Imbrium basin — is located in the exact opposite spot on the near side of the moon. The researchers suspect that whatever made that impact likely released the cloud of plasma that kicked off the scenario in their simulations.

“There are large parts of lunar magnetism that are still unexplained,” says lead author Isaac Narrett, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But the majority of the strong magnetic fields that are measured by orbiting spacecraft can be explained by this process — especially on the far side of the moon.”

Narrett’s co-authors include Rona Oran and Benjamin Weiss at MIT, along with Katarina Miljkovic at Curtin University, Yuxi Chen and Gábor Tóth at the University of Michigan at Ann Arbor, and Elias Mansbach PhD ’24 at Cambridge University. Nuno Loureiro, professor of nuclear science and engineering at MIT, also contributed insights and advice.

Beyond the sun

Scientists have known for decades that the moon holds remnants of a strong magnetic field. Samples from the surface of the moon, returned by astronauts on NASA’s Apollo missions of the 1960s and 70s, as well as global measurements of the moon taken remotely by orbiting spacecraft, show signs of remnant magnetism in surface rocks, especially on the far side of the moon.

The typical explanation for surface magnetism is a global magnetic field, generated by an internal “dynamo,” or a core of molten, churning material. The Earth today generates a magnetic field through a dynamo process, and it’s thought that the moon once may have done the same, though its much smaller core would have produced a much weaker magnetic field that may not explain the highly magnetized rocks observed, particularly on the moon’s far side.

An alternative hypothesis that scientists have tested from time to time involves a giant impact that generated plasma, which in turn amplified any weak magnetic field. In 2020, Oran and Weiss tested this hypothesis with simulations of a giant impact on the moon, in combination with the solar-generated magnetic field, which is weak as it stretches out to the Earth and moon.

In simulations, they tested whether an impact to the moon could amplify such a solar field enough to explain the highly magnetic measurements of surface rocks. It turned out that it couldn’t, and their results seemed to rule out impact-generated plasmas as playing a role in the moon’s missing magnetism.

A spike and a jitter

But in their new study, the researchers took a different tack. Instead of accounting for the sun’s magnetic field, they assumed that the moon once hosted a dynamo that produced a magnetic field of its own, albeit a weak one. Given the size of its core, they estimated that such a field would have been about 1 microtesla, or 50 times weaker than the Earth’s field today.

From this starting point, the researchers simulated a large impact to the moon’s surface, similar to what would have created the Imbrium basin, on the moon’s near side. Using impact simulations from Katarina Miljkovic, the team then simulated the cloud of plasma that such an impact would have generated as the force of the impact vaporized the surface material. They adapted a second code, developed by collaborators at the University of Michigan, to simulate how the resulting plasma would flow and interact with the moon’s weak magnetic field.

These simulations showed that as a plasma cloud arose from the impact, some of it would have expanded into space, while the rest would stream around the moon and concentrate on the opposite side. There, the plasma would have compressed and briefly amplified the moon’s weak magnetic field. This entire process, from the moment the magnetic field was amplified to the time that it decays back to baseline, would have been incredibly fast — somewhere around 40 minutes, Narrett says.
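The amplification step itself follows standard frozen-in-flux reasoning (a back-of-the-envelope scaling, not the team’s full magnetohydrodynamic calculation): in a highly conducting plasma, the magnetic field is carried along with the material, so squeezing the plasma at the antipode squeezes the field with it:

$$\frac{B_{\mathrm{amplified}}}{B_{0}} \sim \frac{n_{\mathrm{compressed}}}{n_{0}}, \qquad B_{0} \approx \frac{B_{\oplus}}{50} \approx \frac{50\ \mu\mathrm{T}}{50} = 1\ \mu\mathrm{T},$$

so even the moon’s modest 1-microtesla dynamo field could have been boosted substantially for the tens of minutes that the plasma remained concentrated.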

Would this brief window have been enough for surrounding rocks to record the momentary magnetic spike? The researchers say, yes, with some help from another, impact-related effect.

They found that an Imbrium-scale impact would have sent a pressure wave through the moon, similar to a seismic shock. These waves would have converged to the other side, where the shock would have “jittered” the surrounding rocks, briefly unsettling the rocks’ electrons — the subatomic particles that naturally orient their spins to any external magnetic field. The researchers suspect the rocks were shocked just as the impact’s plasma amplified the moon’s magnetic field. As the rocks’ electrons settled back, they assumed a new orientation, in line with the momentary high magnetic field.

“It’s as if you throw a 52-card deck in the air, in a magnetic field, and each card has a compass needle,” Weiss says. “When the cards settle back to the ground, they do so in a new orientation. That’s essentially the magnetization process.”

The researchers say this combination of a dynamo plus a large impact, coupled with the impact’s shockwave, is enough to explain the moon’s highly magnetized surface rocks — particularly on the far side. One way to know for sure would be to directly sample the rocks for signs of shock and high magnetism. That may soon be possible, as the rocks lie on the far side, near the lunar south pole, a region that missions such as NASA’s Artemis program plan to explore.

“For several decades, there’s been sort of a conundrum over the moon’s magnetism — is it from impacts or is it from a dynamo?” Oran says. “And here we’re saying, it’s a little bit of both. And it’s a testable hypothesis, which is nice.”

The team’s simulations were carried out using the MIT SuperCloud. This research was supported, in part, by NASA. 


New research, data advance understanding of early planetary formation

Led by Assistant Professor Richard Teague, a team of international astronomers has released a collection of papers and public data furthering our understanding of planet formation.


A team of international astronomers led by Richard Teague, the Kerr-McGee Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), has gathered the most sensitive and detailed observations of 15 protoplanetary disks to date, giving the astronomy community a new look at the mechanisms of early planetary formation.

“The new approaches we’ve developed to gather this data and images are like switching from reading glasses to high-powered binoculars — they reveal a whole new level of detail in these planet-forming systems,” says Teague.

Their open-access findings were published in a special collection of 17 papers in the Astrophysical Journal Letters, with several more coming out this summer. The collection sheds light on a breadth of questions, including how to calculate the mass of a disk by measuring its gravitational influence, and how to extract rotational velocity profiles to a precision of meters per second.

Protoplanetary disks are collections of dust and gas around young stars, from which planets form. Observing the dust in these disks is easier because it is brighter, but the dust alone offers only a snapshot of what is going on. Teague’s research shifts attention to the gas in these systems, which can reveal more about a disk’s dynamics, including properties such as gravity, velocity, and mass.
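The logic of the mass measurement can be stated compactly (a textbook relation, not a formula quoted from the exoALMA papers): gas at radius $r$ orbits at nearly the Keplerian speed set by the stellar mass $M_\star$, with small corrections from the disk’s own gravity and its radial pressure gradient:

$$v_{\phi}^{2}(r) \;\simeq\; \frac{G M_\star}{r} \;+\; v_{\mathrm{disk}}^{2}(r) \;+\; \frac{r}{\rho}\,\frac{\partial P}{\partial r},$$

so rotation curves measured to a precision of meters per second can isolate the small disk-gravity term from the dominant stellar one, effectively weighing the disk.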

To achieve the resolution necessary to study gas, the exoALMA program spent five years coordinating longer observation windows on the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. As a result, the international team of astronomers, many of them early-career scientists, was able to collect some of the most detailed images ever taken of protoplanetary disks.

“The impressive thing about the data is that it’s so good, the community is developing new tools to extract signatures from planets,” says Marcelo Barraza-Alfaro, a postdoc in the Planet Formation Lab and a member of the exoALMA project. The team developed several new techniques to calibrate and improve the images, making full use of the higher resolution and sensitivity.

As a result, “we are seeing new things that require us to modify our understanding of what’s going on in protoplanetary disks,” he says.

One of the papers with the largest EAPS contribution explores planetary formation through vortices. It has been known for some time that the simple model of formation often proposed, in which dust grains clump together and “snowball” into a planetary core, is not enough. One possible mechanism to help is vortices: localized perturbations in the gas that pull dust toward their centers, where the grains are more likely to clump, the way soap bubbles collect in a draining tub.

“We can see the concentration of dust in different regions, but we cannot see how it is moving,” says Lisa Wölfer, another postdoc in the Planet Formation Lab at MIT and first author on the paper. While astronomers can see that the dust has gathered, there isn’t enough information to rule out how it got to that point.

“Only through the dynamics in the gas can we actually confirm that it’s a vortex, and not something else, creating the structure,” she says.

During the data collection period, Teague, Wölfer, and Barraza-Alfaro developed simple models of protoplanetary disks to compare to their observations. When they got the data back, however, the models couldn’t explain what they were seeing.

“We saw the data and nothing worked anymore. It was way too complicated,” says Teague. “Before, everyone thought they were not dynamic. That’s completely not the case.”

The team was forced to reevaluate their models and work with more complex ones incorporating more motion in the gas, which take more time and resources to run. But early results look promising.

“We see that the patterns look very similar; we think this is the best test case to study further with more observations,” says Wölfer.

The new data, which have been made public, come at a fortuitous time: ALMA will be going dark for a period in the next few years while it undergoes upgrades. During this time, astronomers can continue the monumental process of sifting through all the data.

“It’s going to just keep on producing results for years and years to come,” says Teague.


MIT physicists discover a new type of superconductor that’s also a magnet

The “one-of-a-kind” phenomenon was observed in ordinary graphite.


Magnets and superconductors go together like oil and water — or so scientists have thought. But a new finding by MIT physicists is challenging this century-old assumption.

In a paper appearing today in the journal Nature, the physicists report that they have discovered a “chiral superconductor” — a material that conducts electricity without resistance, and also, paradoxically, is intrinsically magnetic. What’s more, they observed this exotic superconductivity in a surprisingly ordinary material: graphite, the primary material in pencil lead.

Graphite is made from many layers of graphene — atomically thin, lattice-like sheets of carbon atoms — that are stacked together and can easily flake off when pressure is applied, as when pressing down to write on a piece of paper. A single flake of graphite can contain several million sheets of graphene, which are normally stacked such that every other layer aligns. But every so often, graphite contains tiny pockets where graphene is stacked in a different pattern, resembling a staircase of offset layers.

The MIT team has found that when four or five sheets of graphene are stacked in this “rhombohedral” configuration, the resulting structure can exhibit exceptional electronic properties that are not seen in graphite as a whole.

In their new study, the physicists isolated microscopic flakes of rhombohedral graphene from graphite and subjected the flakes to a battery of electrical tests. They found that when the flakes are cooled to 300 millikelvins (about -273 degrees Celsius), the material becomes a superconductor, meaning that electrical current flows through it without resistance.

They also found that when they swept an external magnetic field up and down, the flakes could be switched between two different superconducting states, just like a magnet. This suggests that the superconductor has some internal, intrinsic magnetism. Such switching behavior is absent in other superconductors.

“The general lore is that superconductors do not like magnetic fields,” says Long Ju, assistant professor of physics at MIT. “But we believe this is the first observation of a superconductor that behaves as a magnet with such direct and simple evidence. And that’s quite a bizarre thing because it is against people’s general impression on superconductivity and magnetism.”

Ju is senior author of the study, which includes MIT co-authors Tonghang Han, Zhengguang Lu, Zach Hadjri, Lihan Shi, Zhenghan Wu, Wei Xu, Yuxuan Yao, Jixiang Yang, Junseok Seo, Shenyong Ye, Muyang Zhou, and Liang Fu, along with collaborators from Florida State University, the University of Basel in Switzerland, and the National Institute for Materials Science in Japan.

Graphene twist

In everyday conductive materials, electrons flow through in a chaotic scramble, whizzing by each other, and pinging off the material’s atomic latticework. Each time an electron scatters off an atom, it has, in essence, met some resistance, and loses some energy as a result, normally in the form of heat. In contrast, when certain materials are cooled to ultracold temperatures, they can become superconducting, meaning that the material can allow electrons to pair up, in what physicists term “Cooper pairs.” Rather than scattering away, these electron pairs glide through a material without resistance. With a superconductor, then, no energy is lost in translation.

Since superconductivity was first observed in 1911, physicists have shown many times over that zero electrical resistance is a hallmark of a superconductor. Another defining property was first observed in 1933, when the physicist Walther Meissner discovered that a superconductor will expel an external magnetic field. This “Meissner effect” is due in part to a superconductor’s electron pairs, which collectively act to push away any magnetic field.

Physicists have assumed that all superconducting materials should exhibit both zero electrical resistance, and a natural magnetic repulsion. Indeed, these two properties are what could enable Maglev, or “magnetic levitation” trains, whereby a superconducting rail repels and therefore levitates a magnetized car.

Ju and his colleagues had no reason to question this assumption as they carried out their experiments at MIT. In the last few years, the team has been exploring the electrical properties of pentalayer rhombohedral graphene. The researchers have observed surprising properties in the five-layer, staircase-like graphene structure, most recently that it enables electrons to split into fractions of themselves. This phenomenon occurs when the pentalayer structure is placed atop a sheet of hexagonal boron nitride (a material similar to graphene), and slightly offset by a specific angle, or twist. 

Curious as to how electron fractions might change with changing conditions, the researchers followed up their initial discovery with similar tests, this time by misaligning the graphene and hexagonal boron nitride structures. To their surprise, they found that when they misaligned the two materials and sent an electrical current through, at temperatures less than 300 millikelvins, they measured zero resistance. It seemed that the phenomenon of electron fractions disappeared, and what emerged instead was superconductivity.

The researchers went a step further to see how this new superconducting state would respond to an external magnetic field. They applied a magnet to the material, along with a voltage, and measured the electrical current coming out of the material. As they dialed the magnetic field from negative to positive (similar to a north and south polarity) and back again, they observed that the material maintained its superconducting, zero-resistance state, except in two instances, once at either magnetic polarity. In these instances, the resistance briefly spiked, before switching back to zero, and returning to a superconducting state.

“If this were a conventional superconductor, it would just remain at zero resistance, until the magnetic field reaches a critical point, where superconductivity would be killed,” Zach Hadjri, a first-year student in the group, says. “Instead, this material seems to switch between two superconducting states, like a magnet that starts out pointing upward, and can flip downwards when you apply a magnetic field. So it looks like this is a superconductor that also acts like a magnet. Which doesn’t make any sense!”
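
As an illustration of the switching behavior Hadjri describes, here is a minimal toy in Python. Everything in it — the coercive field, the spike width, the units — is a hypothetical placeholder for exposition, not the team’s data: resistance stays at zero across the sweep except for a brief spike where the internal magnetization flips, and the flip point depends on the sweep direction.

    import numpy as np

    B_C = 0.05           # hypothetical coercive field (tesla), chosen only for illustration
    SPIKE_WIDTH = 0.005  # hypothetical width of the brief resistance spike

    def resistance(B, sweeping_up):
        # The magnetization flips near +B_c on the upward sweep and near -B_c on
        # the downward sweep, producing a momentary resistance spike; elsewhere R = 0.
        flip_at = B_C if sweeping_up else -B_C
        return 1.0 if abs(B - flip_at) < SPIKE_WIDTH else 0.0

    fields = np.linspace(-0.2, 0.2, 401)
    up_sweep = [resistance(B, sweeping_up=True) for B in fields]
    down_sweep = [resistance(B, sweeping_up=False) for B in fields[::-1]]
    print(sum(r > 0 for r in up_sweep), "spike points sweeping up;",
          sum(r > 0 for r in down_sweep), "sweeping down")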

“One of a kind”

As counterintuitive as the discovery may seem, the team observed the same phenomenon in six similar samples. They suspect that the unique configuration of rhombohedral graphene is the key. The material has a very simple arrangement of carbon atoms. When it is cooled to ultracold temperatures, thermal fluctuations are minimized, allowing any electrons flowing through the material to slow down, sense each other, and interact.

Such quantum interactions can lead electrons to pair up and superconduct. These interactions can also encourage electrons to coordinate. Namely, electrons can collectively occupy one of two opposite momentum states, or “valleys.” When all electrons are in one valley, they effectively circulate in one direction rather than the other. In conventional superconductors, electrons can occupy either valley, and a pair is typically made from electrons of opposite valleys that cancel each other out. The pair overall, then, has zero momentum and does not spin.

In the team’s material structure, however, they suspect that all electrons interact such that they share the same valley, or momentum state. When electrons then pair up, the superconducting pair overall has a nonzero momentum and a sense of rotation that, summed over many such pairs, can amount to an internal, superconducting magnetism.

“You can think of the two electrons in a pair spinning clockwise, or counterclockwise, which corresponds to a magnet pointing up, or down,” Tonghang Han, a fifth-year student in the group, explains. “So we think this is the first observation of a superconductor that behaves as a magnet due to the electrons’ orbital motion, which is known as a chiral superconductor. It’s one of a kind. It is also a candidate for a topological superconductor which could enable robust quantum computation.”

“Everything we’ve discovered in this material has been completely out of the blue,” says Zhengguang Lu, a former postdoc in the group and now an assistant professor at Florida State University. “But because this is a simple system, we think we have a good chance of understanding what is going on, and could demonstrate some very profound and deep physics principles.”

“It is truly remarkable that such an exotic chiral superconductor emerges from such simple ingredients,” adds Liang Fu, professor of physics at MIT. “Superconductivity in rhombohedral graphene will surely have a lot to offer.”

The part of the research carried out at MIT was supported by the U.S. Department of Energy and a MathWorks Fellowship. This research was carried out, in part, using facilities at MIT.nano.


Study: Climate change may make it harder to reduce smog in some regions

Ground-level ozone in North America and Western Europe may become less sensitive to cutting NOx emissions. The opposite may occur in Northeast Asia.


Global warming will likely hinder our future ability to control ground-level ozone, a harmful air pollutant that is a primary component of smog, according to a new MIT study.

The results could help scientists and policymakers develop more effective strategies for improving both air quality and human health. Ground-level ozone causes a host of detrimental health impacts, from asthma to heart disease, and contributes to thousands of premature deaths each year.

The researchers’ modeling approach reveals that, as the Earth warms due to climate change, ground-level ozone will become less sensitive to reductions in nitrogen oxide emissions in eastern North America and Western Europe. In other words, it will take greater nitrogen oxide emission reductions to get the same air quality benefits.

However, the study also shows that the opposite would be true in Northeast Asia, where cutting emissions would have a greater impact on reducing ground-level ozone in the future.

The researchers combined a climate model that simulates meteorological factors, such as temperature and wind speeds, with a chemical transport model that estimates the movement and composition of chemicals in the atmosphere.

By generating a range of possible future outcomes, the researchers’ ensemble approach better captures inherent climate variability, allowing them to paint a fuller picture than many previous studies.

“Future air quality planning should consider how climate change affects the chemistry of air pollution. We may need steeper cuts in nitrogen oxide emissions to achieve the same air quality goals,” says Emmie Le Roy, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author of a paper on this study.

Her co-authors include Anthony Y.H. Wong, a postdoc in the MIT Center for Sustainability Science and Strategy; Sebastian D. Eastham, principal research scientist in the MIT Center for Sustainability Science and Strategy; Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor of EAPS; and senior author Noelle Selin, a professor in the Institute for Data, Systems, and Society (IDSS) and EAPS. The research appears today in Environmental Science and Technology.

Controlling ozone

Ground-level ozone differs from the stratospheric ozone layer that protects the Earth from harmful UV radiation. It is a respiratory irritant that is harmful to the health of humans, animals, and plants.

Controlling ground-level ozone is particularly challenging because it is a secondary pollutant, formed in the atmosphere by complex reactions involving nitrogen oxides and volatile organic compounds in the presence of sunlight.

“That is why you tend to have higher ozone days when it is warm and sunny,” Le Roy explains.

Regulators typically try to reduce ground-level ozone by cutting nitrogen oxide emissions from industrial processes. But it is difficult to predict the effects of those policies because ground-level ozone interacts with nitrogen oxide and volatile organic compounds in nonlinear ways.

Depending on the chemical environment, reducing nitrogen oxide emissions could cause ground-level ozone to increase instead.

“Past research has focused on the role of emissions in forming ozone, but the influence of meteorology is a really important part of Emmie’s work,” Selin says.

To conduct their study, the researchers combined a global atmospheric chemistry model with a climate model that simulates future meteorology.

They used the climate model to generate meteorological inputs for each future year in their study, simulating factors such as likely temperature and wind speeds, in a way that captures the inherent variability of a region’s climate.

Then they fed those inputs to the atmospheric chemistry model, which calculates how the chemical composition of the atmosphere would change because of meteorology and emissions.

The researchers focused on eastern North America, Western Europe, and Northeast China, since those regions have historically high levels of the precursor chemicals that form ozone and well-established monitoring networks to provide data.

They chose to model two future scenarios, one with high warming and one with low warming, over a 16-year period between 2080 and 2095. They compared them to a historical scenario capturing 2000 to 2015 to see the effects of a 10 percent reduction in nitrogen oxide emissions.

Capturing climate variability

“The biggest challenge is that the climate naturally varies from year to year. So, if you want to isolate the effects of climate change, you need to simulate enough years to see past that natural variability,” Le Roy says.

They were able to overcome that challenge thanks to recent advances in atmospheric chemistry modeling and by taking advantage of parallel computing to simulate multiple years at the same time. They simulated five 16-year realizations, resulting in 80 model years for each scenario.
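
In code, the ensemble bookkeeping amounts to the sketch below (Python). The function is a placeholder standing in for the coupled climate and chemistry-transport models, and its numbers are invented for illustration, not results from the study:

    import numpy as np

    N_REALIZATIONS, N_YEARS = 5, 16   # five 16-year realizations = 80 model years/scenario

    def ozone_response_to_nox_cut(scenario, seed):
        """Placeholder for the real pipeline: climate-model meteorology fed into a
        chemical transport model, returning the ozone change (ppb) from a 10
        percent NOx cut for each simulated year. All values here are illustrative."""
        rng = np.random.default_rng(seed)
        mean_response = {"historical": -2.0, "low_warming": -1.8, "high_warming": -1.5}
        return mean_response[scenario] + rng.normal(0.0, 0.5, size=N_YEARS)

    for scenario in ("historical", "low_warming", "high_warming"):
        years = np.concatenate([ozone_response_to_nox_cut(scenario, seed)
                                for seed in range(N_REALIZATIONS)])
        print(f"{scenario}: mean ozone change {years.mean():+.2f} ppb "
              f"across {years.size} model years")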

The researchers found that eastern North America and Western Europe are especially sensitive to increases in nitrogen oxide emissions from the soil, which are natural emissions driven by increases in temperature.

Due to that sensitivity, as the Earth warms and more nitrogen oxide from soil enters the atmosphere, reducing nitrogen oxide emissions from human activities will have less of an impact on ground-level ozone.

“This shows how important it is to improve our representation of the biosphere in these models to better understand how climate change may impact air quality,” Le Roy says.

On the other hand, since industrial processes in Northeast Asia cause more ozone per unit of nitrogen oxide emitted, cutting emissions there would cause greater reductions in ground-level ozone in future warming scenarios.

“But I wouldn’t say that is a good thing because it means that, overall, there are higher levels of ozone,” Le Roy adds.

Running detailed meteorology simulations, rather than relying on annual average weather data, gave the researchers a more complete picture of the potential effects on human health.

“Average climate isn’t the only thing that matters. One high ozone day, which might be a statistical anomaly, could mean we don’t meet our air quality target and have negative human health impacts that we should care about,” Le Roy says.

In the future, the researchers want to continue exploring the intersection of meteorology and air quality. They also want to expand their modeling approach to consider other climate change factors with high variability, like wildfires or biomass burning.

“We’ve shown that it is important for air quality scientists to consider the full range of climate variability, even if it is hard to do in your models, because it really does affect the answer that you get,” says Selin.

This work is funded, in part, by the MIT Praecis Presidential Fellowship, the J.H. and E.V. Wade Fellowship, and the MIT Martin Family Society of Fellows for Sustainability.


Daily mindfulness practice reduces anxiety for autistic adults

After six weeks of practicing mindfulness with the help of a smartphone app, adults with autism reported lasting improvements in their well-being.


Just 10 to 15 minutes of mindfulness practice a day led to reduced stress and anxiety for autistic adults who participated in a study led by scientists at MIT’s McGovern Institute for Brain Research. Participants in the study used a free smartphone app to guide their practice, giving them the flexibility to practice when and where they chose.

Mindfulness is a state in which the mind is focused only on the present moment. It is a way of thinking that can be cultivated with practice, often through meditation or breathing exercises — and evidence is accumulating that practicing mindfulness has positive effects on mental health. The new open-access study, reported April 8 in the journal Mindfulness, adds to that evidence, demonstrating clear benefits for autistic adults.

“Everything you want from this on behalf of somebody you care about happened: reduced reports of anxiety, reduced reports of stress, reduced reports of negative emotions, and increased reports of positive emotions,” says McGovern investigator and MIT Professor John Gabrieli, who led the research with Liron Rozenkrantz, an investigator at the Azrieli Faculty of Medicine at Bar-Ilan University in Israel and a research affiliate in Gabrieli’s lab. “Every measure that we had of well-being moved significantly in a positive direction,” adds Gabrieli, who is also the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

One of the reported benefits of practicing mindfulness is that it can reduce the symptoms of anxiety disorders. This prompted Gabrieli and his colleagues to wonder whether it might benefit adults with autism, who tend to report above average levels of anxiety and stress, which can interfere with daily living and quality of life. As many as 65 percent of autistic adults may also have an anxiety disorder.

Gabrieli adds that the opportunity for autistic adults to practice mindfulness with an app, rather than needing to meet with a teacher or class, seemed particularly promising. “The capacity to do it at your own pace in your own home, or any environment you like, might be good for anybody,” he says. “But maybe especially for people for whom social interactions can sometimes be challenging.”

The research team, including Cindy Li, the autism recruitment and outreach coordinator in Gabrieli’s lab, recruited 89 autistic adults to participate in their study. Those individuals were split into two groups: one would try the mindfulness practice for six weeks, while the others would wait and try the intervention later.

Participants were asked to practice daily using an app called Healthy Minds, which guides participants through seated or active meditations, each lasting 10 to 15 minutes. Participants reported that they found the app easy to use and had little trouble making time for the daily practice.

After six weeks, participants reported significant reductions in anxiety and perceived stress. These changes were not experienced by the wait-list group, which served as a control. However, after their own six weeks of practice, people in the wait-list group reported similar benefits. “We replicated the result almost perfectly. Every positive finding we found with the first sample we found with the second sample,” Gabrieli says.

The researchers followed up with study participants after another six weeks. Almost everyone had discontinued their mindfulness practice — but remarkably, their gains in well-being had persisted. Based on this finding, the team is eager to further explore the long-term effects of mindfulness practice in future studies. “There’s a hypothesis that a benefit of gaining mindfulness skills or habits is they stick with you over time — that they become incorporated in your daily life,” Gabrieli says. “If people are using the approach to being in the present and not dwelling on the past or worrying about the future, that’s what you want most of all. It’s a habit of thought that’s powerful and helpful.”

Even as they plan future studies, the researchers say they are already convinced that mindfulness practice can have clear benefits for autistic adults. “It’s possible mindfulness would be helpful at all kinds of ages,” Gabrieli says. But he points out that the need is particularly great for autistic adults, who typically have access to fewer resources and supports than autistic children receive through their schools. Gabrieli is eager for more people with autism to try the Healthy Minds app. “Having scientifically proven resources for adults who are no longer in school systems might be a valuable thing,” he says.

This research was funded, in part, by The Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT and the Yang Tan Collective.


In Down syndrome mice, 40Hz light and sound improve cognition, neurogenesis, connectivity

New evidence suggests sensory stimulation of gamma-frequency brain rhythm may promote broad-based restorative neurological health response.


Studies by a growing number of labs have identified neurological health benefits from exposing human volunteers or animal models to light, sound, and/or tactile stimulation at the brain’s “gamma” frequency rhythm of 40Hz. In the latest such research at The Picower Institute for Learning and Memory and Alana Down Syndrome Center at MIT, scientists found that 40Hz sensory stimulation improved cognition and circuit connectivity and encouraged the growth of new neurons in mice genetically engineered to model Down syndrome.

Li-Huei Tsai, Picower Professor at MIT and senior author of the new study in PLOS ONE, says that the results are encouraging, but also cautions that much more work is needed to test whether the method, called GENUS (for gamma entrainment using sensory stimulation), could provide clinical benefits for people with Down syndrome. Her lab has begun a small study with human volunteers at MIT.

“While this work, for the first time, shows beneficial effects of GENUS on Down syndrome using an imperfect mouse model, we need to be cautious, as there is not yet data showing whether this also works in humans,” says Tsai, who directs The Picower Institute and The Alana Center, and is a member of MIT’s Department of Brain and Cognitive Sciences faculty.

Still, she says, the newly published article adds evidence that GENUS can promote a broad-based, restorative, “homeostatic” health response in the brain amid a wide variety of pathologies. Most GENUS studies have addressed Alzheimer’s disease in humans or mice, but others have found benefits from the stimulation for conditions such as “chemo brain” and stroke.

Down syndrome benefits

In the study, the research team led by postdoc Md Rezaul Islam and Brennan Jackson PhD ’23 worked with the commonly used “Ts65Dn” Down syndrome mouse model. The model recapitulates key aspects of the disorder, although it does not exactly mirror the human condition, which is caused by carrying an extra copy of chromosome 21.

In the first set of experiments in the paper, the team shows that an hour a day of 40Hz light and sound exposure for three weeks was associated with significant improvements on three standard short-term memory tests — two involving distinguishing novelty from familiarity and one involving spatial navigation. Because these kinds of memory tasks involve a brain region called the hippocampus, the researchers looked at neural activity there and measured a significant increase in activity indicators among mice that received the GENUS stimulation versus those that did not.

To better understand how stimulated mice could show improved cognition, the researchers examined whether cells in the hippocampus changed how they express their genes. To do this, the team used a technique called single cell RNA sequencing, which provided a readout of how nearly 16,000 individual neurons and other cells transcribed their DNA into RNA, a key step in gene expression. Many of the genes whose expression varied most prominently in neurons between the mice that received stimulation and those that did not were directly related to forming and organizing neural circuit connections called synapses.
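
As a rough sketch of that analysis step, a generic single-cell workflow using the scanpy library might look like the following. The file name and the per-cell “condition” annotation are hypothetical, and this is not the authors’ actual pipeline:

    import scanpy as sc

    adata = sc.read_h5ad("hippocampus_cells.h5ad")   # hypothetical file: ~16,000 cells
    sc.pp.normalize_total(adata, target_sum=1e4)     # normalize sequencing depth per cell
    sc.pp.log1p(adata)                               # log-transform the counts

    # Rank genes whose expression differs between stimulated and control cells.
    sc.tl.rank_genes_groups(adata, groupby="condition",
                            groups=["stimulated"], reference="control",
                            method="wilcoxon")
    top_genes = sc.get.rank_genes_groups_df(adata, group="stimulated").head(20)
    print(top_genes)  # synapse-related genes would be expected near the top here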

To confirm the significance of that finding, the researchers directly examined the hippocampus in stimulated and control mice. They found that in a critical subregion, the dentate gyrus, stimulated mice had significantly more synapses.

Diving deeper

The team not only examined gene expression across individual cells, but also analyzed those data to assess whether there were patterns of coordination across multiple genes. Indeed, they found several such “modules” of co-expression. Some of this evidence further substantiated the idea that 40Hz-stimulated mice made important improvements in synaptic connectivity, but another key finding highlighted a role for TCF4, a key regulator of gene transcription needed for generating new neurons, or “neurogenesis.”  

The team’s analysis of genetic data suggested that TCF4 is underexpressed in Down syndrome mice, but the researchers saw improved TCF4 expression in GENUS-stimulated mice. When the researchers went to the lab bench to determine whether the mice also exhibited a difference in neurogenesis, they found direct evidence that stimulated mice exhibited more neurogenesis in the dentate gyrus than unstimulated mice. These increases in TCF4 expression and neurogenesis are only correlational, the researchers noted, but they hypothesize that the increase in new neurons likely helps explain at least some of the increase in new synapses and improved short-term memory function.

“The increased putative functional synapses in the dentate gyrus is likely related to the increased adult neurogenesis observed in the Down syndrome mice following GENUS treatment,” Islam says.

This study is the first to document that GENUS is associated with increased neurogenesis.

The analysis of gene expression modules also yielded other key insights. One is that a cluster of genes whose expression typically declines with normal aging, and in Alzheimer’s disease, remained at higher expression levels among mice that received 40Hz sensory stimulation.

And the researchers also found evidence that mice that received stimulation retained more cells in the hippocampus that express Reelin. Reelin-expressing neurons are especially vulnerable in Alzheimer’s disease, but expression of the protein is associated with cognitive resilience amid Alzheimer’s disease pathology, which Ts65Dn mice develop. About 90 percent of people with Down syndrome develop Alzheimer’s disease, typically after the age of 40.

“In this study, we found that GENUS enhances the percentage of Reln+ neurons in hippocampus of a mouse model of Down syndrome, suggesting that GENUS may promote cognitive resilience,” Islam says.

Taken together with other studies, Tsai and Islam say, the new results add evidence that GENUS helps to stimulate the brain at the cellular and molecular level to mount a homeostatic response to aberrations caused by disease pathology, be it neurodegeneration in Alzheimer’s, demyelination in chemo brain, or deficits of neurogenesis in Down syndrome.

But the authors also cautioned that the study had limits. Not only is the Ts65Dn model an imperfect reflection of human Down syndrome, but also the mice used were all male. Moreover, the cognitive tests in the study only measured short-term memory. And finally, while the study was novel for extensively examining gene expression in the hippocampus amid GENUS stimulation, it did not look at changes in other cognitively critical brain regions, such as the prefrontal cortex.

In addition to Jackson, Islam, and Tsai, the paper’s other authors are Maeesha Tasnim Naomi, Brooke Schatz, Noah Tan, Mitchell Murdock, Dong Shin Park, Daniela Rodrigues Amorim, Fred Jiang, S. Sebastian Pineda, Chinnakkaruppan Adaikkan, Vanesa Fernandez, Ute Geigenmuller, Rosalind Mott Firenze, Manolis Kellis, and Ed Boyden.

Funding for the study came from the Alana Down Syndrome Center at MIT and the Alana USA Foundation, the U.S. National Science Foundation, the La Caixa Banking Foundation, a European Molecular Biology Organization long-term postdoctoral fellowship, Barbara J. Weedon, Henry E. Singleton, and the Hubolow family.


Biologists identify targets for new pancreatic cancer treatments

Their study yielded hundreds of “cryptic” peptides that are found only on pancreatic tumor cells and could be targeted by vaccines or engineered T cells.


Researchers from MIT and Dana-Farber Cancer Institute have discovered that a class of peptides expressed in pancreatic cancer cells could be a promising target for T-cell therapies and other approaches that attack pancreatic tumors.

Known as cryptic peptides, these molecules are produced from sequences in the genome that were not thought to encode proteins. Such peptides can also be found in some healthy cells, but in this study, the researchers identified about 500 that appear to be found only in pancreatic tumors.

The researchers also showed they could generate T cells targeting those peptides. Those T cells were able to attack pancreatic tumor organoids derived from patient cells, and they significantly slowed down tumor growth in a study of mice.

“Pancreas cancer is one of the most challenging cancers to treat. This study identifies an unexpected vulnerability in pancreas cancer cells that we may be able to exploit therapeutically,” says Tyler Jacks, the David H. Koch Professor of Biology at MIT and a member of the Koch Institute for Integrative Cancer Research.

Jacks and William Freed-Pastor, a physician-scientist in the Hale Family Center for Pancreatic Cancer Research at Dana-Farber Cancer Institute and an assistant professor at Harvard Medical School, are the senior authors of the study, which appears today in Science. Zackery Ely PhD ’22 and Zachary Kulstad, a former research technician at Dana-Farber Cancer Institute and the Koch Institute, are the lead authors of the paper.

Cryptic peptides

Pancreatic cancer has one of the lowest survival rates of any cancer — about 10 percent of patients survive for five years after their diagnosis.

Most pancreatic cancer patients receive a combination of surgery, radiation treatment, and chemotherapy. Immunotherapy treatments such as checkpoint blockade inhibitors, which are designed to help stimulate the body’s own T cells to attack tumor cells, are usually not effective against pancreatic tumors. However, therapies that deploy T cells engineered to attack tumors have shown promise in clinical trials.

These therapies involve programming the T-cell receptor (TCR) of T cells to recognize a specific peptide, or antigen, found on tumor cells. There are many efforts underway to identify the most effective targets, and researchers have found some promising antigens that consist of mutated proteins that often show up when pancreatic cancer genomes are sequenced.

In the new study, the MIT and Dana-Farber team wanted to extend that search into tissue samples from patients with pancreatic cancer, using immunopeptidomics — a strategy that involves extracting the peptides presented on a cell surface and then identifying the peptides using mass spectrometry.

Using tumor samples from about a dozen patients, the researchers created organoids — three-dimensional growths that partially replicate the structure of the pancreas. The immunopeptidomics analysis, which was led by Jennifer Abelin and Steven Carr at the Broad Institute, found that the majority of novel antigens found in the tumor organoids were cryptic antigens. Cryptic peptides have been seen in other types of tumors, but this is the first time they have been found in pancreatic tumors.

Each tumor expressed an average of about 250 cryptic peptides, and in total, the researchers identified about 1,700 cryptic peptides.

“Once we started getting the data back, it just became clear that this was by far the most abundant novel class of antigens, and so that’s what we wound up focusing on,” Ely says.

The researchers then performed an analysis of healthy tissues to see if any of these cryptic peptides were found in normal cells. They found that about two-thirds of them were also found in at least one type of healthy tissue, leaving about 500 that appeared to be restricted to pancreatic cancer cells.
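
The tumor-restriction filter itself reduces to simple set arithmetic, as in the minimal Python sketch below. The peptide sequences and tissue names are invented placeholders; the real work lies in the upstream mass-spectrometry identification:

    # Cryptic peptides detected on tumor organoids (hypothetical example sequences).
    tumor_peptides = {"LLDDSLVSI", "AMSSKFFLV", "KQWLVWLFL", "VTEHDTLLY"}

    # Peptides detected in immunopeptidomic surveys of healthy tissues (hypothetical).
    healthy_by_tissue = {
        "lung":   {"AMSSKFFLV"},
        "liver":  {"VTEHDTLLY"},
        "kidney": set(),
    }

    found_in_healthy = set().union(*healthy_by_tissue.values())
    tumor_restricted = tumor_peptides - found_in_healthy
    print(sorted(tumor_restricted))  # candidate immunotherapy targets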

“Those are the ones that we think could be very good targets for future immunotherapies,” Freed-Pastor says.

Programmed T cells

To test whether these antigens might hold potential as targets for T-cell-based treatments, the researchers exposed about 30 of the cancer-specific antigens to immature T cells and found that 12 of them could generate large populations of T cells targeting those antigens.

The researchers then engineered a new population of T cells to express those T-cell receptors. These engineered T cells were able to destroy organoids grown from patient-derived pancreatic tumor cells. Additionally, when the researchers implanted the organoids into mice and then treated them with the engineered T cells, tumor growth was significantly slowed.

This is the first time that anyone has demonstrated the use of T cells targeting cryptic peptides to kill pancreatic tumor cells. Even though the tumors were not completely eradicated, the results are promising, and it is possible that the T cells’ killing power could be strengthened in future work, the researchers say.

Freed-Pastor’s lab is also beginning to work on a vaccine targeting some of the cryptic antigens, which could help stimulate patients’ T cells to attack tumors expressing those antigens. Such a vaccine could include a collection of the antigens identified in this study, including those frequently found in multiple patients.

This study could also help researchers in designing other types of therapy, such as T cell engagers — antibodies that bind an antigen on one side and T cells on the other, which allows them to redirect any T cell to kill tumor cells.

Any potential vaccine or T cell therapy is likely a few years away from being tested in patients, the researchers say.

The research was funded in part by the Hale Family Center for Pancreatic Cancer Research, the Lustgarten Foundation, Stand Up To Cancer, the Pancreatic Cancer Action Network, the Burroughs Wellcome Fund, a Conquer Cancer Young Investigator Award, the National Institutes of Health, and the National Cancer Institute.


Dopamine signals when a fear can be forgotten

Study shows how a dopamine circuit enables mice to extinguish fear after a peril has passed, opening new avenues for understanding and potentially treating fear-related disorders.


Dangers come but dangers also go, and when they do, the brain has an “all-clear” signal that teaches it to extinguish its fear. A new study in mice by MIT neuroscientists shows that the signal is the release of dopamine along a specific interregional brain circuit. The research therefore pinpoints a potentially critical mechanism of mental health, restoring calm when it works, but prolonging anxiety or even post-traumatic stress disorder when it doesn’t.

“Dopamine is essential to initiate fear extinction,” says Michele Pignatelli di Spinazzola, co-author of the new study from the lab of senior author Susumu Tonegawa, Picower Professor of biology and neuroscience at the RIKEN-MIT Laboratory for Neural Circuit Genetics within The Picower Institute for Learning and Memory at MIT, and a Howard Hughes Medical Institute (HHMI) investigator.

In 2020, Tonegawa’s lab showed that learning to be afraid, and then learning when that’s no longer necessary, result from a competition between populations of cells in the brain’s amygdala region. When a mouse learns that a place is “dangerous” (because it gets a little foot shock there), the fear memory is encoded by neurons in the anterior of the basolateral amygdala (aBLA) that express the gene Rspo2. When the mouse then learns that a place is no longer associated with danger (because they wait there and the zap doesn’t recur), neurons in the posterior basolateral amygdala (pBLA) that express the gene Ppp1r1b encode a new fear extinction memory that overcomes the original dread. Notably, those same neurons encode feelings of reward, helping to explain why it feels so good when we realize that an expected danger has dwindled.

In the new study, the lab, led by former members Xiangyu Zhang and Katelyn Flick, sought to determine what prompts these amygdala neurons to encode these memories. The rigorous set of experiments the team reports in the Proceedings of the National Academy of Sciences show that it’s dopamine sent to the different amygdala populations from distinct groups of neurons in the ventral tegmental area (VTA).

“Our study uncovers a precise mechanism by which dopamine helps the brain unlearn fear,” says Zhang, who also led the 2020 study and is now a senior associate at Orbimed, a health care investment firm. “We found that dopamine activates specific amygdala neurons tied to reward, which in turn drive fear extinction. We now see that unlearning fear isn’t just about suppressing it — it’s a positive learning process powered by the brain’s reward machinery. This opens up new avenues for understanding and potentially treating fear-related disorders, like PTSD.”

Forgetting fear

The VTA was the lab’s prime suspect to be the source of the signal because the region is well known for encoding surprising experiences and instructing the brain, with dopamine, to learn from them. The first set of experiments in the paper used multiple methods for tracing neural circuits to see whether and how cells in the VTA and the amygdala connect. They found a clear pattern: Rspo2 neurons were targeted by dopaminergic neurons in the anterior and left and right sides of the VTA. Ppp1r1b neurons received dopaminergic input from neurons in the center and posterior sections of the VTA. The density of connections was greater on the Ppp1r1b neurons than for the Rspo2 ones.

The circuit tracing showed that dopamine is available to amygdala neurons that encode fear and its extinction, but do those neurons care about dopamine? The team showed that indeed they express “D1” receptors for the neuromodulator. Commensurate with the degree of dopamine connectivity, Ppp1r1b cells had more receptors than Rspo2 neurons.

Dopamine does a lot of things, so the next question was whether its activity in the amygdala actually correlated with fear encoding and extinction. Using a method to track and visualize it in the brain, the team watched dopamine in the amygdala as mice underwent a three-day experiment. On Day One, they went to an enclosure where they experienced three mild shocks on the feet. On Day Two, they went back to the enclosure for 45 minutes, where they didn’t experience any new shocks — at first, the mice froze in anticipation of a shock, but then relaxed after about 15 minutes. On Day Three they returned again to test whether they had indeed extinguished the fear they showed at the beginning of Day Two.

The dopamine activity tracking revealed that during the shocks on Day One, Rspo2 neurons had the larger response to dopamine, but in the early moments of Day Two, when the anticipated shocks didn’t come and the mice eased up on freezing, the Ppp1r1b neurons showed the stronger dopamine activity. More strikingly, the mice that learned to extinguish their fear most strongly also showed the greatest dopamine signal at those neurons.

Causal connections

The final sets of experiments sought to show that dopamine is not just available and associated with fear encoding and extinction, but also actually causes them. In one set, they turned to optogenetics, a technology that enables scientists to activate or quiet neurons with different colors of light. Sure enough, when they quieted VTA dopaminergic inputs in the pBLA, doing so impaired fear extinction. When they activated those inputs, it accelerated fear extinction. The researchers were surprised that when they activated VTA dopaminergic inputs into the aBLA they could reinstate fear even without any new foot shocks, impairing fear extinction.

The other way they confirmed a causal role for dopamine in fear encoding and extinction was to manipulate the amygdala neurons’ dopamine receptors. In Ppp1r1b neurons, over-expressing dopamine receptors impaired fear recall and promoted extinction, whereas knocking the receptors down impaired fear extinction. Meanwhile in the Rspo2 cells, knocking down receptors reduced the freezing behavior.

“We showed that fear extinction requires VTA dopaminergic activity in the pBLA Ppp1r1b neurons by using optogenetic inhibition of VTA terminals and cell-type-specific knockdown of D1 receptors in these neurons,” the authors wrote.

The scientists are careful in the study to note that while they’ve identified the “teaching signal” for fear extinction learning, the broader phenomenon of fear extinction occurs brainwide, rather than in just this single circuit.

But the circuit seems to be a key node to consider as drug developers and psychiatrists work to combat anxiety and PTSD, Pignatelli di Spinazzola says.

“Fear learning and fear extinction provide a strong framework to study generalized anxiety and PTSD,” he says. “Our study investigates the underlying mechanisms suggesting multiple targets for a translational approach, such as pBLA and use of dopaminergic modulation.”

Marianna Rizzo is also a co-author of the study. Support for the research came from the RIKEN Center for Brain Science, the HHMI, the Freedom Together Foundation, and The Picower Institute.


Using AI to explore the 3D structure of the genome

Two meters of DNA is crammed into the nucleus of every human cell. Bin Zhang wants to know how gene expression works in this minuscule space.


Inside every human cell, 2 meters of DNA is crammed into a nucleus that is only one-hundredth of a millimeter in diameter.

To fit inside that tiny space, the genome must fold into a complex structure known as chromatin, made up of DNA and proteins. The structure of that chromatin, in turn, helps to determine which of the genes will be expressed in a given cell. Neurons, skin cells, and immune cells each express different genes depending on which of their genes are accessible to be transcribed.

Deciphering those structures experimentally is a time-consuming process, making it difficult to compare the 3D genome structures found in different cell types. MIT Professor Bin Zhang is taking a computational approach to this challenge, using computer simulations and generative artificial intelligence to determine these structures.

“Regulation of gene expression relies on the 3D genome structure, so the hope is that if we can fully understand those structures, then we could understand where this cellular diversity comes from,” says Zhang, an associate professor of chemistry.

From the farm to the lab

Zhang first became interested in chemistry when his brother, who was four years older, bought some lab equipment and started performing experiments at home.

“He would bring test tubes and some reagents home and do the experiment there. I didn’t really know what he was doing back then, but I was really fascinated with all the bright colors and the smoke and the odors that could come from the reactions. That really captivated my attention,” Zhang says.

His brother later became the first person from Zhang’s rural village to go to college. That was the first time Zhang had an inkling that it might be possible to pursue a future other than following in the footsteps of his parents, who were farmers in China’s Anhui province.

“Growing up, I would have never imagined doing science or working as a faculty member in America,” Zhang says. “When my brother went to college, that really opened up my perspective, and I realized I didn’t have to follow my parents’ path and become a farmer. That led me to think that I could go to college and study more chemistry.”

Zhang attended the University of Science and Technology of China, in Hefei, where he majored in chemical physics. He enjoyed his studies and discovered computational chemistry and computational research, which became his new fascination.

“Computational chemistry combines chemistry with other subjects I love — math and physics — and brings a sense of rigor and reasoning to the otherwise more empirical rules,” he says. “I could use programming to solve interesting chemistry problems and test my own ideas very quickly.”

After graduating from college, he decided to continue his studies in the United States, which he recalled thinking was “the pinnacle of academics.” At Caltech, he worked with Thomas Miller, a professor of chemistry who used computational methods to understand molecular processes such as protein folding.

For Zhang’s PhD research, he studied a transmembrane protein that acts as a channel to allow other proteins to pass through the cell membrane. This protein, called translocon, can also open a side gate within the membrane, so that proteins that are meant to be embedded in the membrane can exit directly into the membrane.

“It’s really a remarkable protein, but it wasn’t clear how it worked,” Zhang says. “I built a computational model to understand the molecular features that allow certain proteins to go into the membrane, while other proteins get secreted.”

Turning to the genome

After finishing grad school, Zhang’s research focus shifted from proteins to the genome. At Rice University, he did a postdoc with Peter Wolynes, a professor of chemistry who had made many key discoveries in the dynamics of protein folding. Around the time that Zhang joined the lab, Wolynes turned his attention to the structure of the genome, and Zhang decided to do the same.

Unlike proteins, which tend to have highly structured regions that can be studied using X-ray crystallography or cryo-EM, DNA is a very globular molecule that doesn’t lend itself to those types of analysis.

A few years earlier, in 2009, researchers at the Broad Institute, the University of Massachusetts Medical School, MIT, and Harvard University had developed a technique for studying the genome’s structure by cross-linking DNA in a cell’s nucleus. Researchers can then determine which segments are located near each other by shredding the DNA into many tiny pieces and sequencing it.

Zhang and Wolynes used data generated by this technique, known as Hi-C, to explore the question of whether DNA forms knots when it’s condensed in the nucleus, similar to how a strand of Christmas lights may become tangled when crammed into a box for storage.

“If DNA was just like a regular polymer, you would expect that it will become tangled and form knots. But that could be very detrimental for biology, because the genome is not just sitting there passively. It has to go through cell division, and also all this molecular machinery has to interact with the genome and transcribe it into RNA, and having knots will create a lot of unnecessary barriers,” Zhang says.

They found that, unlike Christmas lights, DNA does not form any knots even when packed into the cell nucleus, and they built a computational model allowing them to test hypotheses for how the genome is able to avoid those entanglements.
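
One standard diagnostic from Hi-C data is how contact probability falls off with genomic separation s: an unknotted “fractal globule” decays roughly as s^-1, while an equilibrium, knot-prone globule decays closer to s^-3/2. The Python sketch below computes that curve from a contact matrix, using a synthetic matrix as a stand-in for real Hi-C counts:

    import numpy as np

    # Synthetic contact matrix with 1/s decay, standing in for real Hi-C data.
    n = 500
    separations = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) + 1
    contacts = 1.0 / separations

    def contact_probability(matrix):
        # Average contact frequency over each diagonal, i.e., at each separation s.
        seps = np.arange(1, matrix.shape[0])
        return seps, np.array([np.diagonal(matrix, k).mean() for k in seps])

    seps, prob = contact_probability(contacts)
    slope = np.polyfit(np.log(seps[5:200]), np.log(prob[5:200]), 1)[0]
    print(f"estimated scaling exponent: {slope:.2f}")  # ~ -1.0 for this synthetic input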

Since joining the MIT faculty in 2016, Zhang has continued developing models of how the genome behaves in 3D space, using molecular dynamic simulations. In one area of research, his lab is studying how differences between the genome structures of neurons and other brain cells give rise to their unique functions, and they are also exploring how misfolding of the genome may lead to diseases such as Alzheimer’s.

When it comes to connecting genome structure and function, Zhang believes that generative AI methods will also be essential. In a recent study, he and his students reported a new computational model, ChromoGen, that uses generative AI to predict the 3D structures of genomic regions, based on their DNA sequences.
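
To give a flavor of the sequence-to-structure idea, here is a toy PyTorch sketch. It is not the ChromoGen architecture — ChromoGen is generative, and every name and dimension below is invented for illustration — but it shows the general shape of the problem: encode per-bin sequence features, then predict something about every pair of genomic bins.

    import torch
    import torch.nn as nn

    class ToySequenceToStructure(nn.Module):
        """Toy model: encode per-bin DNA features, then predict a pairwise
        distance map for the region. Purely illustrative."""
        def __init__(self, d_model=64):
            super().__init__()
            self.embed = nn.Linear(4, d_model)   # 4 = one-hot A/C/G/T per genomic bin
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.pair_head = nn.Linear(2 * d_model, 1)

        def forward(self, seq):                  # seq: (batch, n_bins, 4)
            h = self.encoder(self.embed(seq))    # per-bin embeddings
            n = h.size(1)
            hi = h.unsqueeze(2).expand(-1, -1, n, -1)
            hj = h.unsqueeze(1).expand(-1, n, -1, -1)
            pairs = torch.cat([hi, hj], dim=-1)  # features for every bin pair (i, j)
            return self.pair_head(pairs).squeeze(-1)   # (batch, n_bins, n_bins)

    model = ToySequenceToStructure()
    fake_sequence = torch.randn(1, 32, 4).softmax(dim=-1)  # stand-in for one-hot bins
    distance_map = model(fake_sequence)
    print(distance_map.shape)  # torch.Size([1, 32, 32])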

“I think that in the future, we will have both components: generative AI and also theoretical chemistry-based approaches,” he says. “They nicely complement each other and allow us to both build accurate 3D structures and understand how those structures arise from the underlying physical forces.” 


New molecular label could lead to simpler, faster tuberculosis tests

MIT chemists found a way to identify a complex sugar molecule in the cell walls of Mycobacterium tuberculosis, the world’s deadliest pathogen.


Tuberculosis, the world’s deadliest infectious disease, is estimated to infect around 10 million people each year and to kill more than 1 million annually. Once established in the lungs, the bacteria’s thick cell wall helps it to fight off the host immune system.

Much of that cell wall is made from complex sugar molecules known as glycans, but it’s not well-understood how those glycans help to defend the bacteria. One reason for that is that there hasn’t been an easy way to label them inside cells.

MIT chemists have now overcome that obstacle, demonstrating that they can label a glycan called ManLAM using an organic molecule that reacts with specific sulfur-containing sugars. These sugars are found in only three bacterial species, the most notorious and prevalent of which is Mycobacterium tuberculosis, the microbe that causes TB.

After labeling the glycan, the researchers were able to visualize where it is located within the bacterial cell wall, and to study what happens to it throughout the first few days of tuberculosis infection of host immune cells.

The researchers now hope to use this approach to develop a diagnostic that could detect TB-associated glycans, either in culture or in a urine sample, which could offer a cheaper and faster alternative to existing diagnostics. Chest X-rays and molecular diagnostics are very accurate but are not always available in developing nations where TB rates are high. In those countries, TB is often diagnosed by culturing microbes from a sputum sample, but that test has a high false negative rate, and it can be difficult for some patients, especially children, to provide a sputum sample. This test also requires many weeks for the bacteria to grow, delaying diagnosis.

“There aren’t a lot of good diagnostic options, and there are some patient populations, including children, who have a hard time giving samples that can be analyzed. There’s a lot of impetus to develop very simple, fast tests,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and the senior author of the study.

MIT graduate student Stephanie Smelyansky is the lead author of the paper, which appears this week in the Proceedings of the National Academy of Sciences. Other authors include Chi-Wang Ma, an MIT postdoc; Victoria Marando PhD ’23; Gregory Babunovic, a postdoc at the Harvard T.H. Chan School of Public Health; So Young Lee, an MIT graduate student; and Bryan Bryson, an associate professor of biological engineering at MIT.

Labeling glycans

Glycans are found on the surfaces of most cells, where they perform critical functions such as mediating communication between cells. In bacteria, glycans help the microbes to enter host cells, and they also appear to communicate with the host immune system, in some cases blocking the immune response.

“Mycobacterium tuberculosis has a really elaborate cell envelope compared to other bacteria, and it’s a rich structure that’s composed of a lot of different glycans,” Smelyansky says. “Something that’s often underappreciated is the fact that these glycans can also interact with our host cells. When our immune cells recognize these glycans, instead of sending out a danger signal, they can send the opposite message, that there’s no danger.”

Glycans are notoriously difficult to tag with any kind of probe, because unlike proteins or DNA, they don’t have distinctive sequences or chemical reactivities that can be targeted. And unlike proteins, they are not genetically encoded, so cells can’t be genetically engineered to produce sugars labeled with fluorescent tags such as green fluorescent protein.

One of the key glycans in M. tuberculosis, known as ManLAM, contains a rare sugar known as MTX, which is unusual in that it has a thioether — a sulfur atom sandwiched between two carbon atoms. This chemical group presented an opportunity to use a small-molecule tag that had been previously developed for labeling methionine, an amino acid that contains a similar group.

The researchers showed that they could use this tag, known as an oxaziridine, to label ManLAM in M. tuberculosis. The researchers linked the oxaziridine to a fluorescent probe and showed that in M. tuberculosis, this tag showed up in the outer layer of the cell wall. When the researchers exposed the label to Mycobacterium smegmatis, a related bacterium that does not cause disease and does not have the sugar MTX, they saw no fluorescent signal.

“This is the first approach that really selectively allows us to visualize one glycan in particular,” Smelyansky says.

Better diagnostics

The researchers also showed that after labeling ManLAM in M. tuberculosis cells, they could track the cells as they infected immune cells called macrophages. Some tuberculosis researchers had hypothesized that the bacterial cells shed ManLAM once inside a host cell, and that those free glycans then interact with the host immune system. However, the MIT team found that the glycan appears to remain in the bacterial cell walls for at least the first few days of infection.

“The bacteria still have their cell walls attached to them. So it may be that some glycan is being released, but the majority of it is retained on the bacterial cell surface, which has never been shown before,” Smelyansky says.

The researchers now plan to use this approach to study what happens to the bacteria following treatment with different antibiotics, or immune stimulation of the macrophages. It could also be used to study in more detail how the bacterial cell wall is assembled, and how ManLAM helps bacteria get into macrophages and other cells.

“Having a handle to follow the bacteria is really valuable, and it will allow you to visualize processes, both in cells and in animal models, that were previously invisible,” Kiessling says.

She also hopes to use this approach to create new diagnostics for tuberculosis. There is currently a diagnostic in development that uses antibodies to detect ManLAM in a urine sample. However, this test only works well in patients with very active cases of TB, especially people who are immunosuppressed because of HIV or other conditions.

Using their small-molecule sensor instead of antibodies, the MIT team hopes to develop a more sensitive test that could detect ManLAM in the urine even when only small quantities are present.

“This is a beautifully elegant approach to selectively label the surface of mycobacteria, enabling real-time monitoring of cell wall dynamics in this important bacterial family. Such investigations will inform the development of novel strategies to diagnose, prevent, and treat mycobacterial disease, most notably tuberculosis, which remains a global health challenge,” says Todd Lowary, a distinguished research fellow at the Institute of Biological Chemistry, Academia Sinica, in Taipei, Taiwan, who was not involved in the research.

The research was funded by the National Institute of Allergy and Infectious Diseases, the National Institutes of Health, the National Science Foundation, and the Croucher Fellowship.


MIT physicists snap the first images of “free-range” atoms

The results will help scientists visualize never-before-seen quantum phenomena in real space.


MIT physicists have captured the first images of individual atoms freely interacting in space. The pictures reveal correlations among the “free-range” particles that until now were predicted but never directly observed. Their findings, appearing today in the journal Physical Review Letters, will help scientists visualize never-before-seen quantum phenomena in real space.

The images were taken using a technique developed by the team that first allows a cloud of atoms to move and interact freely. The researchers then turn on a lattice of light that briefly freezes the atoms in their tracks, and apply finely tuned lasers to quickly illuminate the suspended atoms, creating a picture of their positions before the atoms naturally dissipate.

The physicists applied the technique to visualize clouds of different types of atoms, and snapped a number of imaging firsts. The researchers directly observed atoms known as “bosons,” which bunched up in a quantum phenomenon to form a wave. They also captured atoms known as “fermions” in the act of pairing up in free space — a key mechanism that enables superconductivity.

“We are able to see single atoms in these interesting clouds of atoms and what they are doing in relation to each other, which is beautiful,” says Martin Zwierlein, the Thomas A. Frank Professor of Physics at MIT.

In the same journal issue, two other groups report using similar imaging techniques, including a team led by Nobel laureate Wolfgang Ketterle, the John D. MacArthur Professor of Physics at MIT. Ketterle’s group visualized enhanced pair correlations among bosons, while the other group, from École Normale Supérieure in Paris, led by Tarik Yefsah, imaged a cloud of noninteracting fermions.

The study by Zwierlein and his colleagues is co-authored by MIT graduate students Ruixiao Yao, Sungjae Chi, and Mingxuan Wang, and MIT assistant professor of physics Richard Fletcher.

Inside the cloud

A single atom is about one-tenth of a nanometer in diameter, which is one-millionth of the thickness of a strand of human hair. Unlike hair, atoms behave and interact according to the rules of quantum mechanics; it is their quantum nature that makes atoms difficult to understand. For example, we cannot simultaneously know precisely where an atom is and how fast it is moving.

Scientists can apply various methods to image individual atoms, including absorption imaging, where laser light shines onto the atom cloud and casts its shadow onto a camera screen.

“These techniques allow you to see the overall shape and structure of a cloud of atoms, but not the individual atoms themselves,” Zwierlein notes. “It’s like seeing a cloud in the sky, but not the individual water molecules that make up the cloud.”

He and his colleagues took a very different approach in order to directly image atoms interacting in free space. Their technique, called “atom-resolved microscopy,” involves first corralling a cloud of atoms in a loose trap formed by a laser beam. This trap contains the atoms in one place where they can freely interact. The researchers then flash on a lattice of light, which freezes the atoms in their positions. Then, a second laser illuminates the suspended atoms, whose fluorescence reveals their individual positions.

“The hardest part was to gather the light from the atoms without boiling them out of the optical lattice,” Zwierlein says. “You can imagine if you took a flamethrower to these atoms, they would not like that. So, we’ve learned some tricks through the years on how to do this. And it’s the first time we do it in-situ, where we can suddenly freeze the motion of the atoms when they’re strongly interacting, and see them, one after the other. That’s what makes this technique more powerful than what was done before.”

Bunches and pairs

The team applied the imaging technique to directly observe interactions among both bosons and fermions. Photons are an example of a boson, while electrons are a type of fermion. Atoms can be bosons or fermions, depending on their total spin, which is determined by whether the total number of their protons, neutrons, and electrons is even or odd. In general, bosons attract, whereas fermions repel.

Zwierlein and his colleagues first imaged a cloud of bosons made up of sodium atoms. At low temperatures, a cloud of bosons forms what’s known as a Bose-Einstein condensate — a state of matter where all bosons share one and the same quantum state. MIT’s Ketterle was one of the first to produce a Bose-Einstein condensate, of sodium atoms, for which he shared the 2001 Nobel Prize in Physics.

Zwierlein’s group now is able to image the individual sodium atoms within the cloud, to observe their quantum interactions. It has long been predicted that bosons should “bunch” together, having an increased probability to be near each other. This bunching is a direct consequence of their ability to share one and the same quantum mechanical wave. This wave-like character was first predicted by physicist Louis de Broglie. It is the “de Broglie wave” hypothesis that in part sparked the beginning of modern quantum mechanics.

“We understand so much more about the world from this wave-like nature,” Zwierlein says. “But it’s really tough to observe these quantum, wave-like effects. However, in our new microscope, we can visualize this wave directly.”

In their imaging experiments, the MIT team was able to see, for the first time in situ, bosons bunch together as they shared one quantum, correlated de Broglie wave. The team also imaged a cloud of two types of lithium atoms. Each type of atom is a fermion that naturally repels its own kind but can interact strongly with certain other fermion types. As they imaged the cloud, the researchers observed that the opposite fermion types did indeed interact, and formed fermion pairs — a coupling that they could directly see for the first time.
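
Bunching of this kind is typically quantified with the pair-correlation function g2(r), computed directly from imaged atom positions. The numpy sketch below runs on synthetic, uncorrelated positions, for which g2 ≈ 1; bosonic bunching would push g2 above 1 at small separations, and fermionic antibunching would push it below 1:

    import numpy as np

    rng = np.random.default_rng(0)
    box = 50.0                                        # field of view (arbitrary units)
    positions = rng.uniform(0.0, box, size=(400, 2))  # synthetic, uncorrelated atoms

    def pair_correlation(pos, box, n_bins=25):
        # Histogram all pairwise separations and normalize by the ideal-gas
        # expectation (edge effects ignored for simplicity).
        diffs = pos[:, None, :] - pos[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)[np.triu_indices(len(pos), k=1)]
        hist, edges = np.histogram(dists, bins=n_bins, range=(0.0, box / 2))
        r = 0.5 * (edges[:-1] + edges[1:])
        n_pairs = len(pos) * (len(pos) - 1) / 2
        expected = n_pairs * (2 * np.pi * r * np.diff(edges)) / box**2
        return r, hist / expected

    r, g2 = pair_correlation(positions, box)
    print(np.round(g2[:5], 2))  # ~1.0 for uncorrelated atoms; >1 would signal bunching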

“This kind of pairing is the basis of a mathematical construction people came up with to explain experiments. But when you see pictures like these, it’s showing in a photograph, an object that was discovered in the mathematical world,” says study co-author Richard Fletcher. “So it’s a very nice reminder that physics is about physical things. It’s real.”

Going forward, the team will apply their imaging technique to visualize more exotic and less understood phenomena, such as “quantum Hall physics” — situations when interacting electrons display novel correlated behaviors in the presence of a magnetic field.

“That’s where theory gets really hairy — where people start drawing pictures instead of being able to write down a full-fledged theory because they can’t fully solve it,” Zwierlein says. “Now we can verify whether these cartoons of quantum Hall states are actually real. Because they are pretty bizarre states.”

This work was supported, in part, by the National Science Foundation through the MIT-Harvard Center for Ultracold Atoms, as well as by the Air Force Office of Scientific Research, the Army Research Office, the Department of Energy, the Defense Advanced Research Projects Agency, a Vannevar Bush Faculty Fellowship, and the David and Lucile Packard Foundation.


In kids, EEG monitoring of consciousness safely reduces anesthetic use

Clinical trial finds several outcomes improved for young children when an anesthesiologist observed their brain waves to guide dosing of sevoflurane during surgery.


Newly published results of a randomized, controlled clinical trial in Japan among more than 170 children aged 1 to 6 who underwent surgery show that by using electroencephalogram (EEG) readings of brain waves to monitor unconsciousness, an anesthesiologist can significantly reduce the amount of anesthesia administered while safely inducing and sustaining each patient’s anesthetized state. On average, the young patients experienced significant improvements in several post-operative outcomes, including quicker recovery and reduced incidence of delirium.

“I think the main takeaway is that in kids, using the EEG, we can reduce the amount of anesthesia we give them and maintain the same level of unconsciousness,” says study co-author Emery N. Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT, an anesthesiologist at Massachusetts General Hospital, and a professor at Harvard Medical School. The study appeared April 21 in JAMA Pediatrics.

Yasuko Nagasaka, chair of anesthesiology at Tokyo Women’s Medical University and a former colleague of Brown’s in the United States, designed the study. She asked Brown to train and advise lead author Kiyoyuki Miyasaka of St. Luke’s International Hospital in Tokyo on how to use EEG to monitor unconsciousness and adjust anesthesia dosing in children. Miyasaka then served as the anesthesiologist for all patients in the trial. Attending anesthesiologists not involved in the study were always on hand to supervise.

Brown’s research in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences at MIT has shown that a person’s level of consciousness under any particular anesthetic drug is discernible from patterns of their brain waves. Each child’s brain waves were measured with EEG, but in the control group Miyasaka adhered to standard anesthesia dosing protocols while in the experimental group he used the EEG measures as a guide for dosing. The results show that when he used EEG, he was able to induce the desired level of unconsciousness with a concentration of 2 percent sevoflurane gas, rather than the standard 5 percent. Maintenance of unconsciousness, meanwhile, only turned out to require 0.9 percent concentration, rather than the standard 2.5 percent.

Meanwhile, a separate researcher, blinded to whether EEG or standard protocols were used, assessed the kids for “pediatric anesthesia emergence delirium” (PAED), in which children sometimes wake up from anesthesia with a set of side effects including lack of eye contact, inconsolability, unawareness of surroundings, restlessness, and non-purposeful movements. Children who received standard anesthesia dosing met the threshold for PAED in 35 percent of cases (30 out of 86), while children who received EEG-guided dosing met the threshold in 21 percent of cases (19 out of 91). The difference of 14 percentage points was statistically significant.

The authors also reported that, on average, EEG-guided patients had breathing tubes removed 3.3 minutes earlier, emerged from anesthesia 21.4 minutes earlier, and were discharged from post-acute care 16.5 minutes earlier than patients who received anesthesia according to the standard protocol. All of these differences were statistically significant. In addition, no child in the study became aware during surgery.

The authors noted that the quicker recovery among patients who received EEG-guided anesthesia was not only better medically, but also reduced health-care costs. Time in post-acute care in the United States costs about $46 a minute, so the average reduced time of 16.5 minutes would save about $750 per case. Sevoflurane is also a potent greenhouse gas, Brown notes, so reducing its use is better for the environment.
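Those headline figures follow directly from the counts and rates quoted above; a quick back-of-envelope check (the variable names are ours, and $759 rounds to the article’s “about $750”):

```python
# Reproduce the reported figures from the raw counts and rates above.
paed_standard = 30 / 86   # PAED cases, standard dosing
paed_eeg = 19 / 91        # PAED cases, EEG-guided dosing
print(f"{paed_standard:.0%} vs {paed_eeg:.0%}")          # 35% vs 21%
print(f"difference: {paed_standard - paed_eeg:.0%}")     # 14 points

cost_per_minute = 46      # US post-acute care, dollars per minute
minutes_saved = 16.5      # earlier discharge in the EEG-guided group
print(f"${cost_per_minute * minutes_saved:.0f} per case")  # $759
```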

In the study, the authors also present comparisons of the EEG recordings from children in the control and experimental groups. There are notable differences in the “spectrograms” that charted the power of individual brain wave frequencies both as children were undergoing surgery and while they were approaching emergence from anesthesia, Brown says.

For instance, among children who received EEG-guided dosing, there are well-defined bands of high power at about 1-3 hertz (Hz) and 10-12 Hz. In children who received standard protocol dosing, the entire range of frequencies up to about 15 Hz is at high power. In another example, children who experienced PAED showed higher power at several frequencies up to 30 Hz than children who did not experience PAED.

The findings further validate the idea that monitoring brain waves during surgery can provide anesthesiologists with actionable guidance to improve patient care, Brown says. Training in reading EEGs and guiding dosing can readily be integrated in the continuing medical education practices of hospitals, he adds.

In addition to Miyasaka, Brown, and Nagasaka, Yasuyuki Suzuki is a study co-author.

Funding sources for the study include the MIT-Massachusetts General Brigham Brain Arousal State Control Innovation Center, the Freedom Together Foundation, and the Picower Institute.


Lighting up biology’s basement lab

Senior Technical Instructor Vanessa Cheung ’02 brings the energy, experience, and excitement needed to educate students in the biology teaching lab.


For more than 30 years, Course 7 (Biology) students have descended to the expansive, windowless basement of Building 68 to learn practical skills that are the centerpiece of undergraduate biology education at the Institute. The lines of benches and cabinets of supplies that make up the underground MIT Biology Teaching Lab could easily feel dark and isolated. 

In the corner of this room, however, sits Senior Technical Instructor Vanessa Cheung ’02, who manages to make the space seem sunny and communal.

“We joke that we could rig up a system of mirrors to get just enough daylight to bounce down from the stairwell,” Cheung says with a laugh. “It is a basement, but I am very lucky to have this teaching lab space. It is huge and has everything we need.”

The optimism and gratitude Cheung fosters are critical: MIT undergraduates enrolled in classes 7.002 (Fundamentals of Experimental Molecular Biology) and 7.003 (Applied Molecular Biology Laboratory) spend four-hour blocks in the lab each week, learning the foundations of laboratory technique and theory for biological research from Cheung and her colleagues.

Running toward science education

Cheung’s love for biology can be traced back to her high school cross country and track coach, who also served as her second-year biology teacher. The sport and the fundamental biological processes she was learning about in the classroom were, in fact, closely intertwined. 

“He told us about how things like ATP [adenosine triphosphate] and the energy cycle would affect our running,” she says. “Being able to see that connection really helped my interest in the subject.”

That inspiration carried her through a move from her hometown of Pittsburgh, Pennsylvania, to Cambridge, Massachusetts, to pursue an undergraduate degree at MIT, and through her thesis work to earn a PhD in genetics at Harvard Medical School. She didn’t leave running behind either: To this day, she can often be found on the Charles River Esplanade, training for her next marathon. 

She discovered her love of teaching during her PhD program. She enjoyed guiding students so much that she spent an extra semester as a teaching assistant, outside of the one required for her program. 

“I love research, but I also really love telling people about research,” Cheung says.

Cheung herself describes lab instruction as the “best of both worlds,” enabling her to pursue her love of teaching while spending every day at the bench, doing experiments. She emphasizes for students the importance of being able not just to do the hands-on technical lab work, but also to understand the theory behind it.

“The students can tend to get hung up on the physical doing of things — they are really concerned when their experiments don’t work,” she says. “We focus on teaching students how to think about being in a lab — how to design an experiment and how to analyze the data.”

Although her talent for teaching and passion for science led her to the role, Cheung doesn’t hesitate to identify the students as her favorite part of the job. 

“It sounds cheesy, but they really do keep the job very exciting,” she says.

Using mind and hand in the lab

Cheung is the type of person who lights up when describing how much she “loves working with yeast.” 

“I always tell the students that maybe no one cares about yeast except me and like three other people in the world, but it is a model organism that we can use to apply what we learn to humans,” Cheung explains.

Though mastering basic lab skills can make hands-on laboratory courses feel “a bit cookbook,” Cheung is able to get the students excited with her enthusiasm and clever curriculum design. 

“The students like things where they can get their own unique results, and things where they have a little bit of freedom to design their own experiments,” she says. So, the lab curriculum incorporates opportunities for students to do things like identify their own unique yeast mutants and design their own questions to test in a chemical engineering module.

Part of what makes theory as critical as technique is that new tools and discoveries arrive frequently in biology, especially at MIT. RNAi, for example, has given way to CRISPR as a popular lab technique in recent years, and Cheung muses that CRISPR itself may be overshadowed within only a few more years. Keeping students learning at the cutting edge of biology is always on her mind.

“Vanessa is the heart, soul, and mind of the biology lab courses here at MIT, embodying ‘mens et manus’ [‘mind and hand’],” says technical lab instructor and Biology Teaching Lab Manager Anthony Fuccione. 

Support for all students

Cheung’s ability to mentor and guide students earned her a School of Science Dean’s Education and Advising Award in 2012, but her focus isn’t solely on MIT undergraduate students. 

In fact, according to Cheung, the earlier students can be exposed to science, the better. In addition to her regular duties, Cheung also designs curriculum and teaches in the LEAH Knox Scholars Program. The two-year program provides lab experience and mentorship for low-income Boston- and Cambridge-area high school students. 

Paloma Sanchez-Jauregui, outreach programs coordinator who works with Cheung on the program, says Cheung has a standout “growth mindset” that students really appreciate.

“Vanessa teaches students that challenges — like unexpected PCR results — are part of the learning process,” Sanchez-Jauregui says. “Students feel comfortable approaching her for help troubleshooting experiments or exploring new topics.”

Cheung’s colleagues report that they admire not only her talents, but also her focus on supporting those around her. Technical Instructor and colleague Eric Chu says Cheung “offers a lot of help to me and others, including those outside of the department, but does not expect reciprocity.”

Professor of biology and co-director of the Department of Biology undergraduate program Adam Martin says he “rarely has to worry about what is going on in the teaching lab.” According to Martin, Cheung is “flexible, hard-working, dedicated, and resilient, all while being kind and supportive to our students. She is a joy to work with.”


Response to infection highlights the nervous system’s surprising degrees of flexibility

Upon infection, the C. elegans worm reshuffles the roles of brain cells and flips the functions of some of the chemicals it uses to regulate behavior.


Whether you are a person about town or a worm in a dish, life can throw all kinds of circumstances your way. What you need is a nervous system flexible enough to cope. In a new study, MIT neuroscientists show how even a simple animal can repurpose brain circuits and the chemical signals, or “neuromodulators,” in its brain to muster an adaptive response to an infection. The study therefore may provide a model for understanding how brains in more complex organisms, including ourselves, manage to use what they have to cope with shifting internal states. 

“Neuromodulators play pivotal roles in coupling changes in animals’ internal states to their behavior,” the scientists write in their paper, recently published in Nature Communications. “How combinations of neuromodulators released from different neuronal sources control the diverse internal states that animals exhibit remains an open question.”

When C. elegans worms fed on infectious Pseudomonas bacteria, they ate less and became more lethargic. When the researchers looked across the nervous system to see how that behavior happened, they discovered that the worm had completely revamped the roles of several of its 302 neurons and some of the peptides they secrete across the brain to modulate behavior. Systems that responded to stress in one case or satiety in another became reconfigured to cope with the infection.

“This is a question of, how do you adapt to your environment with the highest level of flexibility given the set of neurons and neuromodulators you have,” says postdoc Sreeparna Pradhan, co-lead author of the new study in Nature Communications. “How do you make the maximum set of options available to you?”

The research to find out took place in the lab of senior author Steve Flavell, an associate professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences and an investigator of the Howard Hughes Medical Institute. Pradhan, who was supported by a fellowship from MIT’s K. Lisa Yang Brain-Body Center during the work, teamed up with former Flavell Lab graduate student Gurrein Madan to lead the research.

Pradhan says the team discovered several surprises in the course of the study, including that a neuropeptide called FLP-13 completely flipped its function in infected animals versus animals experiencing other forms of stress. Previous research had shown that when worms are stressed by heat, a neuron called ALA releases FLP-13 to cause the worms to go into quiescence, a sleep-like state. But when the worms in the new study ate Pseudomonas bacteria, a band of other neurons released FLP-13 to fight off quiescence, enabling the worms to survive longer. Meanwhile, ALA took on a completely different role during sickness: leading the charge to suppress feeding by emitting a different group of peptides.

A comprehensive approach

To understand how the worms responded to infection, the team tracked many features of the worms’ behavior for days and made genetic manipulations to probe the underlying mechanisms at play. They also recorded activity across the worms’ whole brains. This kind of comprehensive observation and experimentation is difficult to achieve in more complex animals, but C. elegans’ relative simplicity makes it a tractable testbed, Pradhan says. The team’s approach is also what allowed it to make so many unexpected findings.

For instance, Pradhan didn’t suspect that ALA would turn out to be the neuron that suppressed feeding, but after observing the worms’ behavior for long enough, she started to realize that the reduced feeding arose from the worms taking little breaks they wouldn’t normally take. As she and Madan manipulated more than a dozen genes they thought might affect behavior and feeding in the worm, she included another, ceh-17, which she had read years ago seemed to promote bouts of “microsleep” in the worms. When they knocked out ceh-17, they found that those worms, unlike normal animals, didn’t reduce feeding when they got infected. Because ceh-17 is specifically needed for ALA to function properly, that was when the team realized ALA might be involved in the feeding-reduction behavior.

To know for sure, they then knocked out the various peptides that ALA releases and saw that when they knocked out three in particular (flp-24, nlp-8, and flp-7), infected worms didn’t exhibit reduced feeding. That clinched the case that ALA drives the reduced-feeding behavior by emitting those three peptides.

Meanwhile, Pradhan and Madan’s screens also revealed that when infected worms were missing flp-13, they would go into a quiescence state much sooner than infected worms with the peptide available. Notably, the worms that fought off the quiescence state lived longer. They found that fighting off quiescence depended on the FLP-13 coming from four neurons (I5, I1, ASH and OLL), but not from ALA. Further experiments showed that FLP-13 acted on a widespread neuropeptide receptor called DMSR-1 to prevent quiescence.

Having a little nap

The last major surprise of the study was that the quiescence that Pseudomonas infection induces in worms is not the same as other forms of sleepiness that show up in other contexts, such as after satiety or heat stress. In those cases, worms don’t wake easily, even with a little poke, but amid infection their quiescence was readily reversible. It seemed more like lethargy than sleep. Using the lab’s ability to image all neural activity during behavior, Pradhan and Madan discerned that a neuron called ASI was particularly active during the bouts of lethargy. That observation solidified further when they showed that ASI’s secretion of the peptide DAF-7 was required for the quiescence to emerge in infected animals.

In all, the study showed that the worms repurpose and reconfigure — sometimes to the point of completely reversing — the functions of neurons and peptides to mount an adaptive response to infection, versus a different problem like stress. The results therefore shed light on what has been a tricky question to resolve. How do brains use their repertoire of cells, circuits, and neuromodulators to deal with what life hands them? At least part of the answer seems to be by reshuffling existing components, rather than creating unique ones for each situation.

“The states of stress, satiety, and infection are not induced by unique sets of neuromodulators,” the authors wrote in their paper. “Instead, one larger set of neuromodulators may be deployed from different sources and in different combinations to specify these different internal states.”

In addition to Pradhan, Madan, and Flavell, the paper’s other authors are Di Kang, Eric Bueno, Adam Atanas, Talya Kramer, Ugur Dag, Jessica Lage, Matthew Gomes, Alicia Kun-Yang Lu, and Jungyeon Park.

Support for the research came from the Picower Institute, the Freedom Together Foundation, the K. Lisa Yang Brain-Body Center, and the Yang Tan Collective at MIT; the National Institutes of Health; the McKnight Foundation; the Alfred P. Sloan Foundation; and the Howard Hughes Medical Institute.


A new computational framework illuminates the hidden ecology of diseased tissues

The MESA method uses ecological theory to map cellular diversity and spatial patterns in tissues, offering new insights into disease progression.


To understand what drives disease progression in tissues, scientists need more than just a snapshot of cells in isolation — they need to see where the cells are, how they interact, and how that spatial organization shifts across disease states. A new computational method called MESA (Multiomics and Ecological Spatial Analysis), detailed in a study published in Nature Genetics, is helping researchers study diseased tissues in more meaningful ways.

The work details the results of a collaboration between researchers from MIT, Stanford University, Weill Cornell Medicine, the Ragon Institute of MGH, MIT, and Harvard, and the Broad Institute of MIT and Harvard, and was led by the Stanford team.

MESA brings an ecology-inspired lens to tissue analysis. It offers a pipeline to interpret spatial omics data — the product of cutting-edge technology that captures molecular information along with the location of cells in tissue samples. These data provide a high-resolution map of tissue “neighborhoods,” and MESA helps make sense of the structure of that map.

“By integrating approaches from traditionally distinct disciplines, MESA enables researchers to better appreciate how tissues are locally organized and how that organization changes in different disease contexts, powering new diagnostics and the identification of new targets for preventions and cures,” says Alex K. Shalek, the director of the Institute for Medical Engineering and Science (IMES), the J. W. Kieckhefer Professor in IMES and the Department of Chemistry, and an extramural member of the Koch Institute for Integrative Cancer Research at MIT, as well as an institute member of the Broad Institute and a member of the Ragon Institute.

“In ecology, people study biodiversity across regions — how animal species are distributed and interact,” explains Bokai Zhu, MIT postdoc and author on the study. “We realized we could apply those same ideas to cells in tissues. Instead of rabbits and snakes, we analyze T cells and B cells.”

By treating cell types like ecological species, MESA quantifies “biodiversity” within tissues and tracks how that diversity changes in disease. For example, in liver cancer samples, the method revealed zones where tumor cells consistently co-occurred with macrophages, suggesting these regions may drive unique disease outcomes.
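To make the ecology analogy concrete, here is a minimal sketch of the kind of diversity measure the approach draws on: a Shannon index computed over cell-type counts in a single tissue neighborhood. The function and the toy data are illustrative, not MESA’s actual API:

```python
import math
from collections import Counter

def shannon_diversity(cell_types: list[str]) -> float:
    """Shannon index H = -sum(p_i * ln p_i) over cell-type proportions
    in one tissue neighborhood; higher values mean more diversity."""
    counts = Counter(cell_types)
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

# Two hypothetical neighborhoods from a spatial-omics sample:
tumor_margin = ["T cell", "B cell", "macrophage", "tumor", "tumor"]
tumor_core = ["tumor"] * 5
print(shannon_diversity(tumor_margin))  # ~1.33 (mixed, diverse)
print(shannon_diversity(tumor_core))    # 0.0 (monoculture)
```

A monoculture scores zero while a mixed zone, like the tumor-macrophage regions described above, scores higher; tracking how such scores shift across disease states is the ecological idea at the heart of the method.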

“Our method reads tissues like ecosystems, uncovering cellular ‘hotspots’ that mark early signs of disease or treatment response,” Zhu adds. “This opens new possibilities for precision diagnostics and therapy design.”

MESA also offers another major advantage: It can computationally enrich tissue data without the need for more experiments. Using publicly available single-cell datasets, the tool transfers additional information — such as gene expression profiles — onto existing tissue samples. This approach deepens understanding of how spatial domains function, especially when comparing healthy and diseased tissue.

In tests across multiple datasets and tissue types, MESA uncovered spatial structures and key cell populations that were previously overlooked. It integrates different types of omics data, such as transcriptomics and proteomics, and builds a multilayered view of tissue architecture.

Currently available as a Python package, MESA is designed for academic and translational research. Although spatial omics is still too resource-intensive for routine in-hospital clinical use, the technology is gaining traction among pharmaceutical companies, particularly for drug trials where understanding tissue responses is critical.

“This is just the beginning,” says Zhu. “MESA opens the door to using ecological theory to unravel the spatial complexity of disease — and ultimately, to better predict and treat it.”


The chemistry of creativity

Senior Madison Wang blends science, history, and art to probe how the world works and the tools we use to explore and understand it.


Senior Madison Wang, a double major in creative writing and chemistry, developed her passion for writing in middle school. Her interest in chemistry fit nicely alongside her commitment to producing engaging narratives. 

Wang believes that world-building in stories supported by science and research can make for a more immersive reader experience.

“In science and in writing, you have to tell an effective story,” she says. “People respond well to stories.”  

A native of Buffalo, New York, Wang applied early action for admission to MIT and learned quickly that the Institute was where she wanted to be. “It was a really good fit,” she says. “There was positive energy and vibes, and I had a great feeling overall.”

The power of science and good storytelling

“Chemistry is practical, complex, and interesting,” says Wang. “It’s about quantifying natural laws and understanding how reality works.”

Chemistry and writing both help us “see the world’s irregularity,” she continues. Together, they can erase the artificial and arbitrary line separating one from the other and work in concert to tell a more complete story about the world, the ways in which we participate in building it, and how people and objects exist in and move through it. 

“Understanding magnetism, material properties, and believing in the power of magic in a good story … these are why we’re drawn to explore,” she says. “Chemistry describes why things are the way they are, and I use it for world-building in my creative writing.”

Wang lauds MIT’s creative writing program and cites a course she took with Comparative Media Studies/Writing Professor and Pulitzer Prize winner Junot Díaz as an affirmation of her choice. Seeing and understanding the world through the eyes of a scientist — its building blocks, the ways the pieces fit and function together — help explain her passion for chemistry, especially inorganic and physical chemistry.

Wang cites the work of authors like Sam Kean and Knight Science Journalism Program Director Deborah Blum as part of her inspiration to study science. The books “The Disappearing Spoon” by Kean and “The Poisoner’s Handbook” by Blum “both present historical perspectives, opting for a story style to discuss the events and people involved,” she says. “They each put a lot of work into bridging the gap between what can sometimes be sterile science and an effective narrative that gets people to care about why the science matters.”

Genres like fantasy and science fiction are complementary, according to Wang. “Constructing an effective world means ensuring readers understand characters’ motivations — the ‘why’ — and ensuring it makes sense,” she says. “It’s also important to show how actions and their consequences influence and motivate characters.” 

As she explores the world’s building blocks inside and outside the classroom, Wang works to navigate multiple genres in her writing, as with her studies in chemistry. “I like romance and horror, too,” she says. “I have gripes with committing to a single genre, so I just take whatever I like from each and put them in my stories.”

In chemistry, Wang favors an environment in which scientists can regularly test their ideas. “It’s important to ground chemistry in the real world to create connections for students,” she argues. Advancements in the field have occurred, she notes, because scientists could exit the realm of theory and apply ideas practically.

“Fritz Haber’s work on ammonia synthesis revolutionized approaches to food supply chains,” she says, referring to the German chemist and Nobel laureate. “Converting nitrogen and hydrogen gas to ammonia for fertilizer marked a dramatic shift in how farming could work.” This kind of work could only result from the consistent, controlled, practical application of the theories scientists consider in laboratory environments.

A future built on collaboration and cooperation

Watching the world change dramatically and seeing humanity struggle to grapple with the implications of phenomena like climate change, political unrest, and shifting alliances, Wang emphasizes the importance of deconstructing silos in academia and the workplace. Technology can be a tool for harm, she notes, so inviting more people inside previously segregated spaces helps everyone.

Criticism in both chemistry and writing, Wang believes, is a valuable tool for continuous improvement. Effective communication, explaining complex concepts, and partnering to develop long-term solutions are invaluable when working at the intersection of history, art, and science. In writing, Wang says, criticism can help define areas where writers can improve their stories and shape interesting ideas.

“We’ve seen the positive results that can occur with effective science writing, which requires rigor and fact-checking,” she says. “MIT’s cross-disciplinary approach to our studies, alongside feedback from teachers and peers, is a great set of tools to carry with us regardless of where we are.”

Wang explores connections between science and stories in her leisure time, too. “I’m a member of MIT’s Anime Club and I enjoy participating in MIT’s Sport Taekwondo Club,” she says. The competitive side of tae kwon do feeds her drive and gets her out of her head. Her participation in DAAMIT (Digital Art and Animation at MIT) creates connections with different groups of people and gives her ideas she can use to tell better stories. “It’s fascinating exploring others’ minds,” she says.

Wang argues that there’s a false divide between science and the humanities and wants the work she does after graduation to bridge that divide. “Writing and learning about science can help,” she asserts. “Fields like conservation and history allow for continued exploration of that intersection.”

Ultimately, Wang believes it’s important to examine narratives carefully and to question notions of science’s inherent superiority over humanities fields. “The humanities and science have equal value,” she says.


New model predicts a chemical reaction’s point of no return

Chemists could use this quick computational method to design more efficient reactions that yield useful compounds, from fuels to pharmaceuticals.


When chemists design new chemical reactions, one useful piece of information involves the reaction’s transition state — the point of no return from which a reaction must proceed.

This information allows chemists to try to produce the right conditions that will allow the desired reaction to occur. However, current methods for predicting the transition state and the path that a chemical reaction will take are complicated and require a huge amount of computational power.

MIT researchers have now developed a machine-learning model that can make these predictions in less than a second, with high accuracy. Their model could make it easier for chemists to design chemical reactions that could generate a variety of useful compounds, such as pharmaceuticals or fuels.

“We’d like to be able to ultimately design processes to take abundant natural resources and turn them into molecules that we need, such as materials and therapeutic drugs. Computational chemistry is really important for figuring out how to design more sustainable processes to get us from reactants to products,” says Heather Kulik, the Lammot du Pont Professor of Chemical Engineering, a professor of chemistry, and the senior author of the new study.

Former MIT graduate student Chenru Duan PhD ’22, who is now at Deep Principle; former Georgia Tech graduate student Guan-Horng Liu, who is now at Meta; and Cornell University graduate student Yuanqi Du are the lead authors of the paper, which appears today in Nature Machine Intelligence.

Better estimates

For any given chemical reaction to occur, it must go through a transition state, which takes place when it reaches the energy threshold needed for the reaction to proceed. These transition states are so fleeting that they’re nearly impossible to observe experimentally.

As an alternative, researchers can calculate the structures of transition states using techniques based on quantum chemistry. However, that process requires a great deal of computing power and can take hours or days to calculate a single transition state.

“Ideally, we’d like to be able to use computational chemistry to design more sustainable processes, but this computation in itself is a huge use of energy and resources in finding these transition states,” Kulik says.

In 2023, Kulik, Duan, and others reported on a machine-learning strategy that they developed to predict the transition states of reactions. This strategy is faster than using quantum chemistry techniques, but still slower than what would be ideal because it requires the model to generate about 40 structures, then run those predictions through a “confidence model” to predict which states were most likely to occur.

One reason why that model needs to be run so many times is that it uses randomly generated guesses for the starting point of the transition state structure, then performs dozens of calculations until it reaches its final, best guess. These randomly generated starting points may be very far from the actual transition state, which is why so many steps are needed.

The researchers’ new model, React-OT, described in the Nature Machine Intelligence paper, uses a different strategy. In this work, the researchers trained their model to begin from an estimate of the transition state generated by linear interpolation — a technique that estimates each atom’s position by moving it halfway between its position in the reactants and in the products, in three-dimensional space.

“A linear guess is a good starting point for approximating where that transition state will end up,” Kulik says. “What the model’s doing is starting from a much better initial guess than just a completely random guess, as in the prior work.”
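A rough sketch of that starting guess appears below; the coordinates are invented for illustration, and the real system refines this guess with its trained model rather than stopping here:

```python
import numpy as np

def linear_ts_guess(reactant_xyz, product_xyz):
    """Initial transition-state guess by linear interpolation: each
    atom is placed halfway between its reactant position and its
    product position (arrays of shape [n_atoms, 3], same ordering)."""
    return 0.5 * (np.asarray(reactant_xyz) + np.asarray(product_xyz))

# Toy 3-atom geometry (angstroms, invented for illustration):
reactant = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.9, 0.5, 0.0]])
product  = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [2.1, 1.1, 0.0]])
print(linear_ts_guess(reactant, product))
```

In practice the two geometries must share atom ordering and be aligned (removing overall rotation and translation) before interpolating, so that the midpoint is chemically meaningful.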

Because of this, it takes the model fewer steps and less time to generate a prediction. In the new study, the researchers showed that their model could make predictions with only about five steps, taking about 0.4 seconds. These predictions don’t need to be fed through a confidence model, and they are about 25 percent more accurate than the predictions generated by the previous model.

“That really makes React-OT a practical model that we can directly integrate to the existing computational workflow in high-throughput screening to generate optimal transition state structures,” Duan says.

“A wide array of chemistry”

To create React-OT, the researchers trained it on the same dataset that they used to train their older model. These data contain structures of reactants, products, and transition states, calculated using quantum chemistry methods, for 9,000 different chemical reactions, mostly involving small organic or inorganic molecules.

Once trained, the model performed well on other reactions from this set, which had been held out of the training data. It also performed well on other types of reactions that it hadn’t been trained on, and could make accurate predictions involving reactions with larger reactants, which often have side chains that aren’t directly involved in the reaction.

“This is important because there are a lot of polymerization reactions where you have a big macromolecule, but the reaction is occurring in just one part. Having a model that generalizes across different system sizes means that it can tackle a wide array of chemistry,” Kulik says.

The researchers are now working on training the model so that it can predict transition states for reactions between molecules that include additional elements, including sulfur, phosphorus, chlorine, silicon, and lithium.

“To quickly predict transition state structures is key to all chemical understanding,” says Markus Reiher, a professor of theoretical chemistry at ETH Zurich, who was not involved in the study. “The new approach presented in the paper could very much accelerate our search and optimization processes, bringing us faster to our final result. As a consequence, also less energy will be consumed in these high-performance computing campaigns. Any progress that accelerates this optimization benefits all sorts of computational chemical research.”

The MIT team hopes that other scientists will make use of their approach in designing their own reactions, and have created an app for that purpose.

“Whenever you have a reactant and product, you can put them into the model and it will generate the transition state, from which you can estimate the energy barrier of your intended reaction, and see how likely it is to occur,” Duan says.
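The barrier-to-likelihood step at the end of that quote is textbook chemistry rather than part of the paper: an Arrhenius factor of exp(-Ea/RT) converts a barrier height into a relative rate. A small illustration, with an invented comparison:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def relative_rate(barrier_kj_mol: float, temp_k: float = 298.15) -> float:
    """Arrhenius factor exp(-Ea/RT): the fraction of encounters
    energetic enough to cross a barrier of the given height."""
    return math.exp(-barrier_kj_mol * 1000 / (R * temp_k))

# Raising a barrier by 10 kJ/mol slows a room-temperature
# reaction by roughly a factor of 56:
print(relative_rate(60) / relative_rate(70))  # ~56
```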

The research was funded by the U.S. Army Research Office, the U.S. Department of Defense Basic Research Office, the U.S. Air Force Office of Scientific Research, the National Science Foundation, and the U.S. Office of Naval Research.


Astronomers discover a planet that’s rapidly disintegrating, producing a comet-like tail

The small and rocky lava world sheds an amount of material equivalent to the mass of Mount Everest every 30.5 hours.


MIT astronomers have discovered a planet some 140 light-years from Earth that is rapidly crumbling to pieces.

The disintegrating world is about the mass of Mercury, although it circles about 20 times closer to its star than Mercury does to the sun, completing an orbit every 30.5 hours. At such close proximity to its star, the planet is likely covered in magma that is boiling off into space. As the roasting planet whizzes around its star, it is shedding an enormous amount of surface minerals and effectively evaporating away.

The astronomers spotted the planet using NASA’s Transiting Exoplanet Survey Satellite (TESS), an MIT-led mission that monitors the nearest stars for transits, or periodic dips in starlight that could be signs of orbiting exoplanets. The signal that tipped the astronomers off was a peculiar transit, with a dip that fluctuated in depth every orbit.

The scientists confirmed that the signal is of a tightly orbiting rocky planet that is trailing a long, comet-like tail of debris.

“The extent of the tail is gargantuan, stretching up to 9 million kilometers long, or roughly half of the planet’s entire orbit,” says Marc Hon, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research.

It appears that the planet is disintegrating at a dramatic rate, shedding an amount of material equivalent to one Mount Everest each time it orbits its star. At this pace, given its small mass, the researchers predict that the planet may completely disintegrate in about 1 million to 2 million years.
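That forecast is consistent with simple bookkeeping. Assuming a planet mass near the Moon’s (the low end of the mass range reported for this world) and a commonly quoted, admittedly rough, estimate of Mount Everest’s mass, the numbers land in the predicted window:

```python
# Back-of-envelope lifetime: planet mass / (one Everest lost per orbit).
everest_mass_kg = 1.6e14   # rough literature estimate (assumption)
planet_mass_kg = 7.3e22    # lunar mass, low end of the reported range
orbit_hours = 30.5

orbits_left = planet_mass_kg / everest_mass_kg
years_left = orbits_left * orbit_hours / (24 * 365.25)
print(f"{years_left:.1e} years")  # ~1.6e6, consistent with 1-2 million
```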

“We got lucky with catching it exactly when it’s really going away,” says Avi Shporer, a collaborator on the discovery who is also at the TESS Science Office. “It’s like on its last breath.”

Hon and Shporer, along with their colleagues, have published their results today in the Astrophysical Journal Letters. Their MIT co-authors include Saul Rappaport, Andrew Vanderburg, Jeroen Audenaert, William Fong, Jack Haviland, Katharine Hesse, Daniel Muthukrishna, Glen Petitpas, Ellie Schmelzer, Sara Seager, and George Ricker, along with collaborators from multiple other institutions.

Roasting away

The new planet, which scientists have tagged as BD+05 4868 Ab, was detected almost by happenstance.

“We weren’t looking for this kind of planet,” Hon says. “We were doing the typical planet vetting, and I happened to spot this signal that appeared very unusual.”

The typical signal of an orbiting exoplanet looks like a brief dip in a light curve, which repeats regularly, indicating that a compact body such as a planet is briefly passing in front of, and temporarily blocking, the light from its host star.

The signal from the host star BD+05 4868 A, located in the constellation Pegasus, didn’t fit this typical pattern. Though a transit appeared every 30.5 hours, the brightness took much longer to return to normal, indicating a long trailing structure still blocking starlight. Even more intriguing, the depth of the dip changed with each orbit, suggesting that whatever was passing in front of the star wasn’t always the same shape or blocking the same amount of light.

“The shape of the transit is typical of a comet with a long tail,” Hon explains. “Except that it’s unlikely that this tail contains volatile gases and ice as expected from a real comet — these would not survive long at such close proximity to the host star. Mineral grains evaporated from the planetary surface, however, can linger long enough to present such a distinctive tail.”

Given its proximity to its star, the team estimates that the planet is roasting at around 1,600 degrees Celsius, or close to 3,000 degrees Fahrenheit. As the star roasts the planet, any minerals on its surface are likely boiling away and escaping into space, where they cool into a long and dusty tail.

The dramatic demise of this planet is a consequence of its low mass, which is between that of Mercury and the moon. More massive terrestrial planets like the Earth have a stronger gravitational pull and therefore can hold onto their atmospheres. For BD+05 4868 Ab, the researchers suspect there is very little gravity to hold the planet together.

“This is a very tiny object, with very weak gravity, so it easily loses a lot of mass, which then further weakens its gravity, so it loses even more mass,” Shporer explains. “It’s a runaway process, and it’s only getting worse and worse for the planet.”

Mineral trail

Of the nearly 6,000 planets that astronomers have discovered to date, scientists know of only three other disintegrating planets beyond our solar system. Each of these crumbling worlds was spotted more than 10 years ago in data from NASA’s Kepler Space Telescope, and all three showed similar comet-like tails. BD+05 4868 Ab has the longest tail and the deepest transits of the four known disintegrating planets to date.

“That implies that its evaporation is the most catastrophic, and it will disappear much faster than the other planets,” Hon explains.

The planet’s host star is relatively close, and thus brighter than the stars hosting the other three disintegrating planets, making this system ideal for further observations using NASA’s James Webb Space Telescope (JWST), which can help determine the mineral makeup of the dust tail by identifying which colors of infrared light it absorbs.

This summer, Hon and graduate student Nicholas Tusay from Penn State University will lead observations of BD+05 4868 Ab using JWST. “This will be a unique opportunity to directly measure the interior composition of a rocky planet, which may tell us a lot about the diversity and potential habitability of terrestrial planets outside our solar system,” Hon says.

The researchers also will look through TESS data for signs of other disintegrating worlds.

“Sometimes with the food comes the appetite, and we are now trying to initiate the search for exactly these kinds of objects,” Shporer says. “These are weird objects, and the shape of the signal changes over time, which is something that’s difficult for us to find. But it’s something we’re actively working on.”

This work was supported, in part, by NASA.


MIT’s McGovern Institute is shaping brain science and improving human lives on a global scale

A quarter century after its founding, the McGovern Institute reflects on its discoveries in the areas of neuroscience, neurotechnology, artificial intelligence, brain-body connections, and therapeutics.


In 2000, Patrick J. McGovern ’59 and Lore Harp McGovern made an extraordinary gift to establish the McGovern Institute for Brain Research at MIT, driven by their deep curiosity about the human mind and their belief in the power of science to change lives. Their $350 million pledge began with a simple yet audacious vision: to understand the human brain in all its complexity, and to leverage that understanding for the betterment of humanity.
 
Twenty-five years later, the McGovern Institute stands as a testament to the power of interdisciplinary collaboration, continuing to shape our understanding of the brain and improve the quality of life for people worldwide.

In the beginning

“This is, by any measure, a truly historic moment for MIT,” said MIT’s 15th president, Charles M. Vest, during his opening remarks at an event in 2000 to celebrate the McGovern gift agreement. “The creation of the McGovern Institute will launch one of the most profound and important scientific ventures of this century in what surely will be a cornerstone of MIT scientific contributions from the decades ahead.”
 
Vest tapped Phillip A. Sharp, MIT Institute professor emeritus of biology and Nobel laureate, to lead the institute, and appointed six MIT professors — Emilio Bizzi, Martha Constantine-Paton, Ann Graybiel PhD ’71, H. Robert Horvitz ’68, Nancy Kanwisher ’80, PhD ’86, and Tomaso Poggio — to represent its founding faculty. Construction began in 2003 on Building 46, a 376,000-square-foot research complex at the northeastern edge of campus. MIT’s new “gateway from the north” would eventually house the McGovern Institute, the Picower Institute for Learning and Memory, and MIT’s Department of Brain and Cognitive Sciences.

Robert Desimone, the Doris and Don Berkey Professor of Neuroscience at MIT, succeeded Sharp as director of the McGovern Institute in 2005, and assembled a distinguished roster of 22 faculty members, including a Nobel laureate, a Breakthrough Prize winner, two National Medal of Science/Technology awardees, and 15 members of the American Academy of Arts and Sciences.
 
A quarter century of innovation

On April 11, 2025, the McGovern Institute celebrated its 25th anniversary with a half-day symposium featuring presentations by MIT Institute Professor Robert Langer, alumni speakers from various McGovern labs, and Desimone, who is in his 20th year as director of the institute.

Desimone highlighted the institute’s recent discoveries, including the development of the CRISPR genome-editing system, which has culminated in the world’s first CRISPR gene therapy approved for humans — a remarkable achievement that is ushering in a new era of transformative medicine. In other milestones, McGovern researchers developed the first prosthetic limb fully controlled by the body’s nervous system; a flexible probe that taps into gut-brain communication; an expansion microscopy technique that paves the way for biology labs around the world to perform nanoscale imaging; and advanced computational models that demonstrate how we see, hear, use language, and even think about what others are thinking. Equally transformative has been the McGovern Institute’s work in neuroimaging, uncovering the architecture of human thought and establishing markers that signal the early emergence of mental illness, before symptoms even appear.

Synergy and open science
 
“I am often asked what makes us different from other neuroscience institutes and programs around the world,” says Desimone. “My answer is simple. At the McGovern Institute, the whole is greater than the sum of its parts.”
 
Many discoveries at the McGovern Institute have depended on collaborations across multiple labs, ranging from biological engineering to human brain imaging and artificial intelligence. In modern brain research, significant advances often require the joint expertise of people working in neurophysiology, behavior, computational analysis, neuroanatomy, and molecular biology. More than a dozen different MIT departments are represented by McGovern faculty and graduate students, and this synergy has led to insights and innovations that are far greater than what any single discipline could achieve alone.
 
Also baked into the McGovern ethos is a spirit of open science, where newly developed technologies are shared with colleagues around the world. Through hospital partnerships for example, McGovern researchers are testing their tools and therapeutic interventions in clinical settings, accelerating their discoveries into real-world solutions.

The McGovern legacy  

Hundreds of scientific papers have emerged from McGovern labs over the past 25 years, but most faculty would argue that it’s the people — the young researchers — that truly define the McGovern Institute. Award-winning faculty often attract the brightest young minds, but many McGovern faculty also serve as mentors, creating a diverse and vibrant scientific community that is setting the global standard for brain research and its applications. Kanwisher, for example, has guided more than 70 doctoral students and postdocs who have gone on to become leading scientists around the world. Three of her former students, Evelina Fedorenko PhD ’07, Josh McDermott PhD ’06, and Rebecca Saxe PhD ’03, the John W. Jarve (1978) Professor of Brain and Cognitive Sciences, are now her colleagues at the McGovern Institute. Other McGovern alumni shared stories of mentorship, science, and real-world impact at the 25th anniversary symposium.

Looking to the future, the McGovern community is more committed than ever to unraveling the mysteries of the brain and making a meaningful difference in the lives of individuals on a global scale.
 
“By promoting team science, open communication, and cross-discipline partnerships,” says institute co-founder Lore Harp McGovern, “our culture demonstrates how individual expertise can be amplified through collective effort. I am honored to be the co-founder of this incredible institution — onward to the next 25 years!”


Making AI-generated code more accurate in any language

A new technique automatically guides an LLM toward outputs that adhere to the rules of whatever programming language or other format is being used.


Programmers can now use large language models (LLMs) to generate computer code more quickly. However, this only makes programmers’ lives easier if that code follows the rules of the programming language and doesn’t cause a computer to crash.

Some methods exist for ensuring LLMs conform to the rules of whatever language they are generating text in, but many of these methods either distort the model’s intended meaning or are too time-consuming to be feasible for complex tasks.

A new approach developed by researchers at MIT and elsewhere automatically guides an LLM to generate text that adheres to the rules of the relevant language, such as a particular programming language, and is also error-free. Their method allows an LLM to allocate effort toward outputs that are most likely to be valid and accurate, while discarding unpromising outputs early in the process. This probabilistic approach boosts computational efficiency.

Due to these efficiency gains, the researchers’ architecture enabled small LLMs to outperform much larger models in generating accurate, properly structured outputs for several real-world use cases, including molecular biology and robotics.

In the long run, this new architecture could help nonexperts control AI-generated content. For instance, it could allow businesspeople to write complex queries in SQL, a language for database manipulation, using only natural language prompts.

“This work has implications beyond research. It could improve programming assistants, AI-powered data analysis, and scientific discovery tools by ensuring that AI-generated outputs remain both useful and correct,” says João Loula, an MIT graduate student and co-lead author of a paper on this framework.

Loula is joined on the paper by co-lead authors Benjamin LeBrun, a research assistant at the Mila-Quebec Artificial Intelligence Institute, and Li Du, a graduate student at Johns Hopkins University; co-senior authors Vikash Mansinghka ’05, MEng ’09, PhD ’09, a principal research scientist and leader of the Probabilistic Computing Project in the MIT Department of Brain and Cognitive Sciences; Alexander K. Lew SM ’20, an assistant professor at Yale University; Tim Vieira, a postdoc at ETH Zurich; and Timothy J. O’Donnell, an associate professor at McGill University and a Canada CIFAR AI Chair at Mila, who led the international team; as well as several others. The research will be presented at the International Conference on Learning Representations.

Enforcing structure and meaning

One common approach for controlling the structured text generated by LLMs involves checking an entire output, like a block of computer code, to make sure it is valid and will run error-free. If not, the user must start again, racking up computational costs.

On the other hand, a programmer could stop to check the output along the way. While this can ensure the code adheres to the programming language and is structurally valid, incrementally correcting the code may cause it to drift from the meaning the user intended, hurting its accuracy in the long run.

“It is much easier to enforce structure than meaning. We can quickly check whether something is in the right programming language, but to check its meaning you have to execute the code. Our work is also about dealing with these different types of information,” Loula says.

The researchers’ approach involves engineering knowledge into the LLM to steer it toward the most promising outputs. These outputs are more likely to follow the structural constraints defined by a user, and to have the meaning the user intends.

“We are not trying to train an LLM to do this. Instead, we are engineering some knowledge that an expert would have and combining it with the LLM’s knowledge, which offers a very different approach to scaling than you see in deep learning,” Mansinghka adds.

They accomplish this using a technique called sequential Monte Carlo, which lets multiple outputs generated in parallel by an LLM compete with each other. The model dynamically allocates resources to different threads of parallel computation based on how promising their output appears.

Each output is given a weight that represents how likely it is to be structurally valid and semantically accurate. At each step in the computation, the model focuses on those with higher weights and throws out the rest.
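A toy version of that loop is sketched below. It is not the authors’ system (the real method scores candidates with a genuine language model and uses principled probabilistic weights rather than a pass/fail check), but it shows the sequential Monte Carlo skeleton: extend each candidate, weight it, and resample so computation concentrates on promising threads.

```python
import random

def smc_generate(propose, prefix_ok, n_particles=8, max_len=12):
    """Minimal sequential Monte Carlo steering sketch: keep a pool of
    partial outputs ("particles"), extend each by one token, weight by
    whether the partial output can still satisfy the constraint, then
    resample so promising threads get more of the compute budget."""
    particles = [[] for _ in range(n_particles)]
    for _ in range(max_len):
        extended = [p + [random.choice(propose(p))] for p in particles]
        weights = [1.0 if prefix_ok(p) else 0.0 for p in extended]
        if sum(weights) == 0:
            break  # every thread violated the constraint
        # Resample: valid partial outputs get duplicated, invalid
        # ones are discarded.
        particles = random.choices(extended, weights=weights, k=n_particles)
    return [p for p in particles if prefix_ok(p)]

# Toy constraint: a prefix is valid if it never closes more
# parentheses than it has opened.
def balanced_prefix(seq):
    depth = 0
    for tok in seq:
        depth += 1 if tok == "(" else -1
        if depth < 0:
            return False
    return True

samples = smc_generate(lambda seq: ["(", ")"], balanced_prefix)
print(["".join(s) for s in samples[:3]])
```

Here the constraint is deliberately trivial; in the real framework the checker would encode, say, the grammar of SQL or Python, and the weights would reflect semantic promise as well as structural validity.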

In a sense, it is like the LLM has an expert looking over its shoulder to ensure it makes the right choices at each step, while keeping it focused on the overall goal. The user specifies their desired structure and meaning, as well as how to check the output; the researchers’ architecture then guides the LLM to do the rest.

“We’ve worked out the hard math so that, for any kinds of constraints you’d like to incorporate, you are going to get the proper weights. In the end, you get the right answer,” Loula says.

Boosting small models

To test their approach, they applied the framework to LLMs tasked with generating four types of outputs: Python code, SQL database queries, molecular structures, and plans for a robot to follow.

When compared to existing approaches, the researchers’ method performed more accurately while requiring less computation.

In Python code generation, for instance, the researchers’ architecture enabled a small, open-source model to outperform a specialized, commercial closed-source model that is more than double its size.

“We are very excited that we can allow these small models to punch way above their weight,” Loula says.

Moving forward, the researchers want to use their technique to control larger chunks of generated text, rather than working one small piece at a time. They also want to combine their method with learning, so that as they control the outputs a model generates, it learns to be more accurate.

In the long run, this project could have broader applications for non-technical users. For instance, it could be combined with systems for automated data modeling and for querying generative models of databases.

The approach could also enable machine-assisted data analysis systems, where the user can converse with software that accurately models the meaning of the data and the questions asked by the user, adds Mansinghka.

“One of the fundamental questions of linguistics is how the meaning of words, phrases, and sentences can be grounded in models of the world, accounting for uncertainty and vagueness in meaning and reference. LLMs, predicting likely token sequences, don’t address this problem. Our paper shows that, in narrow symbolic domains, it is technically possible to map from words to distributions on grounded meanings. It’s a small step towards deeper questions in cognitive science, linguistics, and artificial intelligence needed to understand how machines can communicate about the world like we do,” says O’Donnell.

This research is funded and supported, in part, by the Canada CIFAR AI Chairs Program, the MIT Quest for Intelligence, and Convergent Research. 


New study reveals how cleft lip and cleft palate can arise

MIT biologists have found that defects in some transfer RNA molecules can lead to the formation of these common conditions.


Cleft lip and cleft palate are among the most common birth defects, occurring in about one in 1,050 births in the United States. These defects, which appear when the tissues that form the lip or the roof of the mouth do not join completely, are believed to be caused by a mix of genetic and environmental factors.

In a new study, MIT biologists have discovered how a genetic variant often found in people with these facial malformations leads to the development of cleft lip and cleft palate.

Their findings suggest that the variant diminishes cells’ supply of transfer RNA, a molecule that is critical for assembling proteins. When this happens, embryonic face cells are unable to fuse to form the lip and roof of the mouth.

“Until now, no one had made the connection that we made. This particular gene was known to be part of the complex involved in the splicing of transfer RNA, but it wasn’t clear that it played such a crucial role for this process and for facial development. Without the gene, known as DDX1, certain transfer RNA can no longer bring amino acids to the ribosome to make new proteins. If the cells can’t process these tRNAs properly, then the ribosomes can’t make protein anymore,” says Michaela Bartusel, an MIT research scientist and the lead author of the study.

Eliezer Calo, an associate professor of biology at MIT, is the senior author of the paper, which appears today in the American Journal of Human Genetics.

Genetic variants

Cleft lip and cleft palate, also known as orofacial clefts, can be caused by genetic mutations, but in many cases, there is no known genetic cause.

“The mechanism for the development of these orofacial clefts is unclear, mostly because they are known to be impacted by both genetic and environmental factors,” Calo says. “Trying to pinpoint what might be affected has been very challenging in this context.”

To discover genetic factors that influence a particular disease, scientists often perform genome-wide association studies (GWAS), which can reveal variants that are found more often in people who have a particular disease than in people who don’t.

For orofacial clefts, some of the genetic variants that have regularly turned up in GWAS appeared to be in a region of DNA that doesn’t code for proteins. In this study, the MIT team set out to figure out how variants in this region might influence the development of facial malformations.

Their studies revealed that these variants are located in an enhancer region called e2p24.2. Enhancers are segments of DNA that interact with protein-coding genes, helping to activate them by binding to transcription factors that turn on gene expression.

The researchers found that this region is in close proximity to three genes, suggesting that it may control the expression of those genes. One of those genes had already been ruled out as contributing to facial malformations, and another had already been shown to have a connection. In this study, the researchers focused on the third gene, which is known as DDX1.

DDX1, it turned out, is necessary for splicing transfer RNA (tRNA) molecules, which play a critical role in protein synthesis. Each transfer RNA molecule transports a specific amino acid to the ribosome — a cell structure that strings amino acids together to form proteins, based on the instructions carried by messenger RNA.

While there are about 400 different tRNAs found in the human genome, only a fraction of those tRNAs require splicing, and those are the tRNAs most affected by the loss of DDX1. These tRNAs transport four different amino acids, and the researchers hypothesize that these four amino acids may be particularly abundant in the proteins that embryonic face cells need in order to develop properly.

When a ribosome needs one of those four amino acids but none are available, it can stall, and the protein doesn’t get made.

The researchers are now exploring which proteins might be most affected by the loss of those amino acids. They also plan to investigate what happens inside cells when the ribosomes stall, in hopes of identifying a stress signal that could potentially be blocked and help cells survive.

Malfunctioning tRNA

While this is the first study to link tRNA to craniofacial malformations, previous studies have shown that mutations that impair ribosome formation can also lead to similar defects. Studies have also shown that disruptions of tRNA synthesis — caused by mutations in the enzymes that attach amino acids to tRNA, or in proteins involved in an earlier step in tRNA splicing — can lead to neurodevelopmental disorders.

“Defects in other components of the tRNA pathway have been shown to be associated with neurodevelopmental disease,” Calo says. “One interesting parallel between these two is that the cells that form the face are coming from the same place as the cells that form the neurons, so it seems that these particular cells are very susceptible to tRNA defects.”

The researchers now hope to explore whether environmental factors linked to orofacial birth defects also influence tRNA function. Some of their preliminary work has found that oxidative stress — a buildup of harmful free radicals — can lead to fragmentation of tRNA molecules. Oxidative stress can occur in embryonic cells upon exposure to ethanol, as in fetal alcohol syndrome, or if the mother develops gestational diabetes.

“I think it is worth looking for mutations that might be causing this on the genetic side of things, but then also in the future, we would expand this into which environmental factors have the same effects on tRNA function, and then see which precautions might be able to prevent any effects on tRNAs,” Bartusel says.

The research was funded by the National Science Foundation Graduate Research Program, the National Cancer Institute, the National Institute of General Medical Sciences, and the Pew Charitable Trusts.


A chemist who tinkers with molecules’ structures

By changing how atoms in a molecule are arranged relative to each other, Associate Professor Alison Wendlandt aims to create compounds with new chemical properties.


Many biological molecules exist as “diastereomers” — molecules that have the same chemical structure but different spatial arrangements of their atoms. In some cases, these slight structural differences can lead to significant changes in the molecules’ functions or chemical properties.

As one example, the cancer drug doxorubicin can have heart-damaging side effects in a small percentage of patients. However, a diastereomer of the drug, known as epirubicin, which has a single alcohol group that points in a different direction, is much less toxic to heart cells.

“There are a lot of examples like that in medicinal chemistry where something that seems small, such as the position of a single atom in space, may actually be really profound,” says Alison Wendlandt, an associate professor of chemistry at MIT.

Wendlandt’s lab is focused on designing new tools that can convert these molecules into different forms. Her group is also working on similar tools that can change a molecule into a different constitutional isomer — a molecule that has an atom or chemical group located in a different spot, even though it has the same chemical formula as the original.

“If you have a target molecule and you needed to make it without such a tool, you would have to go back to the beginning and make the whole molecule again to get to the final structure that you wanted,” Wendlandt says.

These tools can also lend themselves to creating entirely new molecules that might be difficult or even impossible to build using traditional chemical synthesis techniques.

“We’re focused on a broad suite of selective transformations, the goal being to make the biggest impact on how you might envision making a molecule,” she says. “If you are able to open up access to the interconversion of molecular structures, you can then think completely differently about how you would make a molecule.”

From math to chemistry

As the daughter of two geologists, Wendlandt found herself immersed in science from a young age. Both of her parents worked at the Colorado School of Mines, and family vacations often involved trips to interesting geological formations.

In high school, she found math more appealing than chemistry, and she headed to the University of Chicago with plans to major in mathematics. However, she soon had second thoughts, after encountering abstract math.

“I was good at calculus and the kind of math you need for engineering, but when I got to college and I encountered topology and N-dimensional geometry, I realized I don’t actually have the skills for abstract math. At that point I became a little bit more open-minded about what I wanted to study,” she says.

Though she didn’t think she liked chemistry, an organic chemistry course in her sophomore year changed her mind.

“I loved the problem-solving aspect of it. I have a very, very bad memory, and I couldn’t memorize my way through the class, so I had to just learn it, and that was just so fun,” she says.

As a chemistry major, she began working in a lab focused on “total synthesis,” a research area that involves developing strategies to synthesize a complex molecule, often a natural compound, from scratch.

Although she loved organic chemistry, a lab accident — an explosion that injured a student in her lab and led to temporary hearing loss for Wendlandt — made her hesitant to pursue it further. When she applied to graduate schools, she decided to go into a different branch of chemistry — chemical biology. She studied at Yale University for a couple of years, but she realized that she didn’t enjoy that type of chemistry and left after receiving a master’s degree.

She worked in a lab at the University of Kentucky for a few years, then applied to graduate school again, this time at the University of Wisconsin. There, she worked in an organic chemistry lab, studying oxidation reactions that could be used to generate pharmaceuticals or other useful compounds from petrochemicals.

After finishing her PhD in 2015, Wendlandt went to Harvard University for a postdoc, working with chemistry professor Eric Jacobsen. There, she became interested in selective chemical reactions that generate a particular isomer, and began studying catalysts that could perform glycosylation — the addition of sugar molecules to other molecules — at specific sites.

Editing molecules

Since joining the MIT faculty in 2018, Wendlandt has worked on developing catalysts that can convert a molecule into its mirror image or an isomer of the original.

In 2022, she and her students developed a tool called a stereo-editor, which can alter the arrangement of chemical groups around a central atom known as a stereocenter. This editor consists of two catalysts that work together to first add enough energy to remove an atom from a stereocenter, then replace it with an atom that has the opposite orientation. That energy input comes from a photocatalyst, which converts captured light into energy.

“If you have a molecule with an existing stereocenter, and you need the other enantiomer, typically you would have to start over and make the other enantiomer. But this new method tries to interconvert them directly, so it gives you a way of thinking about molecules as dynamic,” Wendlandt says. “You could generate any sort of three-dimensional structure of that molecule, and then in an independent step later, you could completely reorganize the 3D structure.”

She has also developed tools that can convert common sugars such as glucose into other isomers, including allose and other sugars that are difficult to isolate from natural sources, and tools that can create new isomers of steroids and alcohols. She is now working on ways to convert six-membered carbon rings into seven- or eight-membered rings, and to add, subtract, or replace some of the chemical groups attached to the rings.

“I’m interested in creating general tools that will allow us to interconvert static structures. So, that may be taking a certain functional group and moving it to another part of the molecule entirely, or taking large rings and making them small rings,” she says. “Instead of thinking of molecules that we assemble as static, we’re thinking about them now as potentially dynamic structures, which could change how we think about making organic molecules.”

This approach also opens up the possibility of creating brand new molecules that haven’t been seen before, Wendlandt says. This could be useful, for example, to create drug molecules that interact with a target enzyme in just the right way.

“There’s a huge amount of chemical space that’s still unknown, bizarre chemical space that just has not been made. That’s in part because maybe no one has been interested in it, or because it’s just too hard to make that specific thing,” she says. “These kinds of tools give you access to isomers that are maybe not easily made.”


Restoring healthy gene expression with programmable therapeutics

CAMP4 Therapeutics is targeting regulatory RNA, whose role in gene expression was first described by co-founder and MIT Professor Richard Young.


Many diseases are caused by dysfunctional gene expression that leads to too much or too little of a given protein. Efforts to cure those diseases include everything from editing genes to inserting new genetic snippets into cells to injecting the missing proteins directly into patients.

CAMP4 is taking a different approach. The company is targeting a lesser-known player in the regulation of gene expression known as regulatory RNA. CAMP4 co-founder and MIT Professor Richard Young has shown that by interacting with molecules called transcription factors, regulatory RNA plays an important role in controlling how genes are expressed. CAMP4’s therapeutics target regulatory RNA to increase the production of proteins and put patients’ levels back into healthy ranges.

The company’s approach holds promise for treating diseases caused by defects in gene expression, such as metabolic diseases, heart conditions, and neurological disorders. Targeting regulatory RNAs as opposed to genes could also offer more precise treatments than existing approaches.

“If I just want to fix a single gene’s defective protein output, I don’t want to introduce something that makes that protein at high, uncontrolled amounts,” says Young, who is also a core member of the Whitehead Institute. “That’s a huge advantage of our approach: It’s more like a correction than a sledgehammer.”

CAMP4’s lead drug candidate targets urea cycle disorders (UCDs), a class of chronic conditions caused by a genetic defect that limits the body’s ability to metabolize and excrete ammonia. A phase 1 clinical trial has shown CAMP4’s treatment is safe and tolerable for humans, and in preclinical studies the company has shown its approach can be used to target specific regulatory RNA in the cells of humans with UCDs to restore gene expression to healthy levels.

“This has the potential to treat very severe symptoms associated with UCDs,” says Young, who co-founded CAMP4 with cancer genetics expert Leonard Zon, a professor at Harvard Medical School. “These diseases can be very damaging to tissues and cause a lot of pain and distress. Even a small effect in gene expression could have a huge benefit to patients, who are generally young.”

Mapping out new therapeutics

Young, who has been a professor at MIT since 1984, has spent decades studying how genes are regulated. It’s long been known that molecules called transcription factors, which orchestrate gene expression, bind to DNA and proteins. Research published in Young’s lab uncovered a previously unknown way in which transcription factors can also bind to RNA. The finding indicated RNA plays an underappreciated role in controlling gene expression.

CAMP4 was founded in 2016 with the initial idea of mapping out the signaling pathways that govern the expression of genes linked to various diseases. But as Young’s lab discovered and then began to characterize the role of regulatory RNA in gene expression around 2020, the company pivoted to focus on targeting regulatory RNA using therapeutic molecules known as antisense oligonucleotides (ASOs), which have been used for years to target specific messenger RNA sequences.

CAMP4 began mapping the active regulatory RNAs associated with the expression of every protein-coding gene and built a database, which it calls its RAP Platform, that helps it quickly identify regulatory RNAs to target specific diseases and select ASOs that will most effectively bind to those RNAs.
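ASOs bind their RNA targets through Watson-Crick base pairing, so a candidate ASO is essentially the reverse complement of a stretch of the target sequence. The sketch below is a toy illustration of that pairing logic only: the target fragment and tiling length are made up, and real selection, such as whatever CAMP4’s platform does internally, would also weigh chemical modifications, target accessibility, and off-target matches.

```python
# Toy illustration: tile candidate antisense oligonucleotides (ASOs)
# along an RNA target by taking reverse complements of each window.
COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}  # RNA base -> DNA base

def candidate_asos(rna_target: str, length: int = 18):
    """Yield (start, ASO) pairs; each ASO is DNA complementary to one window."""
    rna = rna_target.upper()
    for start in range(len(rna) - length + 1):
        window = rna[start:start + length]
        aso = "".join(COMPLEMENT[base] for base in reversed(window))
        yield start, aso

# Hypothetical regulatory-RNA fragment, for illustration only.
for start, aso in candidate_asos("AUGGCUUACGGAUCCGAUUACGGCAUC"):
    print(start, aso)
```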

Today, CAMP4 is using its platform to develop therapeutic candidates it believes can restore healthy protein levels to patients.

“The company has always been focused on modulating gene expression,” says CAMP4 Chief Financial Officer Kelly Gold MBA ’09. “At the simplest level, the foundation of many diseases is too much or too little of something being produced by the body. That is what our approach aims to correct.”

Accelerating impact

CAMP4 is starting by going after diseases of the liver and the central nervous system, where the safety and efficacy of ASOs have already been proven. Young believes correcting genetic expression without modulating the genes themselves will be a powerful approach to treating a range of complex diseases.

“Genetics is a powerful indicator of where a deficiency lies and how you might reverse that problem,” Young says. “There are many syndromes where we don’t have a complete understanding of the underlying mechanism of disease. But when a mutation clearly affects the output of a gene, you can now make a drug that can treat the disease without that complete understanding.”

As the company continues mapping the regulatory RNAs associated with every gene, Gold hopes CAMP4 can eventually minimize its reliance on wet-lab work and lean more heavily on machine learning to leverage its growing database and quickly identify regRNA targets for every disease it wants to treat.

In addition to its trials in urea cycle disorders, the company plans to launch key preclinical safety studies this year for a candidate targeting seizure disorders with a genetic basis. And as the company continues exploring drug development efforts around the thousands of genetic diseases where increasing protein levels can have a meaningful impact, it’s also considering collaborating with others to accelerate its impact.

“I can conceive of companies using a platform like this to go after many targets, where partners fund the clinical trials and use CAMP4 as an engine to target any disease where there’s a suspicion that gene upregulation or downregulation is the way to go,” Young says.


A visual pathway in the brain may do more than recognize objects

New research using computational vision models suggests the brain’s “ventral stream” might be more versatile than previously thought.


When visual information enters the brain, it travels through two pathways that process different aspects of the input. For decades, scientists have hypothesized that one of these pathways, the ventral visual stream, is responsible for recognizing objects, and that it might have been optimized by evolution to do just that.

Consistent with this, in the past decade, MIT scientists have found that when computational models of the anatomy of the ventral stream are optimized to solve the task of object recognition, they are remarkably good predictors of the neural activities in the ventral stream.

However, in a new study, MIT researchers have shown that when they train these types of models on spatial tasks instead, the resulting models are also quite good predictors of the ventral stream’s neural activities. This suggests that the ventral stream may not be exclusively optimized for object recognition.

“This leaves wide open the question about what the ventral stream is being optimized for. I think the dominant perspective a lot of people in our field believe is that the ventral stream is optimized for object recognition, but this study provides a new perspective that the ventral stream could be optimized for spatial tasks as well,” says MIT graduate student Yudi Xie.

Xie is the lead author of the study, which will be presented at the International Conference on Learning Representations. Other authors of the paper include Weichen Huang, a visiting student through MIT’s Research Science Institute program; Esther Alter, a software engineer at the MIT Quest for Intelligence; Jeremy Schwartz, a sponsored research technical staff member; Joshua Tenenbaum, a professor of brain and cognitive sciences; and James DiCarlo, the Peter de Florez Professor of Brain and Cognitive Sciences, director of the Quest for Intelligence, and a member of the McGovern Institute for Brain Research at MIT.

Beyond object recognition

When we look at an object, our visual system can not only identify the object, but also determine other features such as its location, its distance from us, and its orientation in space. Since the early 1980s, neuroscientists have hypothesized that the primate visual system is divided into two pathways: the ventral stream, which performs object-recognition tasks, and the dorsal stream, which processes features related to spatial location.

Over the past decade, researchers have worked to model the ventral stream using a type of deep-learning model known as a convolutional neural network (CNN). Researchers can train these models to perform object-recognition tasks by feeding them datasets containing thousands of images along with category labels describing the images.

The state-of-the-art versions of these CNNs have high success rates at categorizing images. Additionally, researchers have found that the internal activations of the models are very similar to the activities of neurons that process visual information in the ventral stream. Furthermore, the more similar these models are to the ventral stream, the better they perform at object-recognition tasks. This has led many researchers to hypothesize that the dominant function of the ventral stream is recognizing objects.

However, experimental studies, especially a study from the DiCarlo lab in 2016, have found that the ventral stream appears to encode spatial features as well. These features include the object’s size, its orientation (how much it is rotated), and its location within the field of view. Based on these studies, the MIT team aimed to investigate whether the ventral stream might serve additional functions beyond object recognition.

“Our central question in this project was, is it possible that we can think about the ventral stream as being optimized for doing these spatial tasks instead of just categorization tasks?” Xie says.

To test this hypothesis, the researchers set out to train a CNN to identify one or more spatial features of an object, including rotation, location, and distance. To train the models, they created a new dataset of synthetic images. These images show objects such as tea kettles or calculators superimposed on different backgrounds, in locations and orientations that are labeled to help the model learn them.
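As a rough illustration of this setup, the sketch below builds a small convolutional network with a regression head that predicts spatial labels from an image. It is a minimal stand-in, not the authors’ architecture: the layer sizes, the four-value label (rotation, x, y, distance), and the random tensors standing in for the synthetic dataset are all assumptions.

```python
import torch
import torch.nn as nn

class SpatialCNN(nn.Module):
    """Tiny CNN that regresses spatial features instead of categories."""
    def __init__(self, n_targets: int = 4):  # rotation, x, y, distance
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_targets)

    def forward(self, x):
        return self.head(self.features(x))

model = SpatialCNN()
images = torch.randn(8, 3, 64, 64)   # stand-in for synthetic labeled images
labels = torch.randn(8, 4)           # stand-in spatial labels
loss = nn.functional.mse_loss(model(images), labels)
loss.backward()                      # one step of regression training
```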

The researchers found that CNNs that were trained on just one of these spatial tasks showed a high level of “neuro-alignment” with the ventral stream — very similar to the levels seen in CNN models trained on object recognition.

The researchers measure neuro-alignment using a technique that DiCarlo’s lab has developed, which involves asking the models, once trained, to predict the neural activity that a particular image would generate in the brain. The researchers found that the better the models performed on the spatial task they had been trained on, the more neuro-alignment they showed.
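One common way to operationalize that kind of prediction, sketched here under assumptions rather than as the lab’s exact pipeline, is to fit a linear map from a model layer’s activations to recorded neural responses and score it on held-out images; the ridge penalty and train/test split below are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def neuro_alignment(model_acts, neural_resp, alpha=1.0, seed=0):
    """Score how well one model layer linearly predicts neural activity.

    model_acts: (n_images, n_units) activations from a model layer
    neural_resp: (n_images, n_neurons) recorded responses to the same images
    Returns the median Pearson r across neurons on held-out images.
    """
    Xtr, Xte, Ytr, Yte = train_test_split(
        model_acts, neural_resp, test_size=0.25, random_state=seed)
    pred = Ridge(alpha=alpha).fit(Xtr, Ytr).predict(Xte)
    rs = [np.corrcoef(pred[:, i], Yte[:, i])[0, 1] for i in range(Yte.shape[1])]
    return float(np.median(rs))
```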

“I think we cannot assume that the ventral stream is just doing object categorization, because many of these other functions, such as spatial tasks, also can lead to this strong correlation between models’ neuro-alignment and their performance,” Xie says. “Our conclusion is that you can optimize either through categorization or doing these spatial tasks, and they both give you a ventral-stream-like model, based on our current metrics to evaluate neuro-alignment.”

Comparing models

The researchers then investigated why these two approaches — training for object recognition and training for spatial features — led to similar degrees of neuro-alignment. To do that, they performed an analysis known as centered kernel alignment (CKA), which allows them to measure the degree of similarity between representations in different CNNs. This analysis showed that in the early to middle layers of the models, the representations that the models learn are nearly indistinguishable.

“In these early layers, essentially you cannot tell these models apart by just looking at their representations,” Xie says. “It seems like they learn some very similar or unified representation in the early to middle layers, and in the later stages they diverge to support different tasks.”
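Linear CKA itself has a compact form: center each representation’s features, then compare the cross-similarity of the two matrices against each one’s self-similarity. A minimal version, assuming each representation is stored as an (n_examples, n_features) array:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear centered kernel alignment between two representations.

    X: (n, d1) and Y: (n, d2) activations for the same n inputs.
    Returns a similarity in [0, 1]; higher means more similar representations.
    """
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    return float(cross / (np.linalg.norm(X.T @ X, "fro")
                          * np.linalg.norm(Y.T @ Y, "fro")))
```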

The researchers hypothesize that even when models are trained to analyze just one feature, they also take into account “non-target” features — those that they are not trained on. When objects have greater variability in non-target features, the models tend to learn representations more similar to those learned by models trained on other tasks. This suggests that the models are using all of the information available to them, which may result in different models coming up with similar representations, the researchers say.

“More non-target variability actually helps the model learn a better representation, instead of learning a representation that’s ignorant of them,” Xie says. “It’s possible that the models, although they’re trained on one target, are simultaneously learning other things due to the variability of these non-target features.”

In future work, the researchers hope to develop new ways to compare different models, in hopes of learning more about how each one develops internal representations of objects based on differences in training tasks and training data.

“There could be still slight differences between these models, even though our current way of measuring how similar these models are to the brain tells us they’re on a very similar level. That suggests maybe there’s still some work to be done to improve upon how we can compare the model to the brain, so that we can better understand what exactly the ventral stream is optimized for,” Xie says.

The research was funded by the Semiconductor Research Corporation and the U.S. Defense Advanced Research Projects Agency.


Hundred-year storm tides will occur every few decades in Bangladesh, scientists report

With projected global warming, the frequency of extreme storms will ramp up by the end of the century, according to a new study.


Tropical cyclones are hurricanes that brew over the tropical ocean and can travel over land, inundating coastal regions. The most extreme cyclones can generate devastating storm tides — seawater that is heightened by the tides and swells onto land, causing catastrophic flood events in coastal regions. A new study by MIT scientists finds that, as the planet warms, the recurrence of destructive storm tides will increase tenfold for one of the hardest-hit regions of the world.

In a study appearing today in One Earth, the scientists report that, for the highly populated coastal country of Bangladesh, what was once a 100-year event could now strike every 10 years — or more often — by the end of the century. 

In a future where fossil fuels continue to burn as they do today, what was once considered a catastrophic, once-in-a-century storm tide will hit Bangladesh, on average, once per decade. And the kind of storm tides that have occurred every decade or so will likely batter the country’s coast more frequently, every few years.

Bangladesh is one of the most densely populated countries in the world, with more than 171 million people living in a region roughly the size of New York state. The country has been historically vulnerable to tropical cyclones, as it is a low-lying delta that is easily flooded by storms and experiences a seasonal monsoon. Some of the most destructive floods in the world have occurred in Bangladesh, where it’s been increasingly difficult for agricultural economies to recover.

The study also finds that Bangladesh will likely experience tropical cyclones that overlap with the months-long monsoon season. Until now, cyclones and the monsoon have occurred at separate times during the year. But as the planet warms, the scientists’ modeling shows that cyclones will push into the monsoon season, causing back-to-back flooding events across the country.

“Bangladesh is very active in preparing for climate hazards and risks, but the problem is, everything they’re doing is more or less based on what they’re seeing in the present climate,” says study co-author Sai Ravela, principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “We are now seeing an almost tenfold rise in the recurrence of destructive storm tides almost anywhere you look in Bangladesh. This cannot be ignored. So, we think this is timely, to say they have to pause and revisit how they protect against these storms.”

Ravela’s co-authors are Jiangchao Qiu, a postdoc in EAPS, and Kerry Emanuel, professor emeritus of atmospheric science at MIT.

Height of tides

In recent years, Bangladesh has invested significantly in storm preparedness, for instance in improving its early-warning system, fortifying village embankments, and increasing access to community shelters. But such preparations have generally been based on the current frequency of storms.

In this new study, the MIT team aimed to provide detailed projections of extreme storm tide hazards, which are flooding events where tidal effects amplify cyclone-induced storm surge, in Bangladesh under various climate-warming scenarios and sea-level rise projections.

“A lot of these events happen at night, so tides play a really strong role in how much additional water you might get, depending on what the tide is,” Ravela explains.

To evaluate the risk of storm tide, the team first applied a method of physics-based downscaling, which Emanuel’s group first developed over 20 years ago and has been using since to study hurricane activity in different parts of the world. The technique involves a low-resolution model of the global ocean and atmosphere that is embedded with a finer-resolution model that simulates weather patterns as detailed as a single hurricane. The researchers then scatter hurricane “seeds” in a region of interest and run the model forward to observe which seeds grow and make landfall over time.

The researchers then coupled the downscaled model to a hydrodynamical model, which simulates the height of a storm surge given the pattern and strength of winds at the time of a given storm. For any given simulated storm, the team also tracked the tides, as well as the effects of sea level rise, and incorporated this information into a numerical model that calculated the storm tide, or the height of the water with tidal effects included, as a storm makes landfall.

Extreme overlap

With this framework, the scientists simulated tens of thousands of potential tropical cyclones near Bangladesh, under several future climate scenarios, ranging from one that resembles the current day to one in which the world experiences further warming as a result of continued fossil fuel burning. For each simulation, they recorded the maximum storm tides along the coast of Bangladesh and noted the frequency of storm tides of various heights in a given climate scenario.

“We can look at the entire bucket of simulations and see, for this storm tide of, say, 3 meters, we saw this many storms, and from that you can figure out the relative frequency of that kind of storm,” Qiu says. “You can then invert that number to a return period.”

A return period is the average interval between storms of a particular intensity. A storm that is considered a “100-year event” is typically more powerful and destructive than a 10-year event, and in this case creates more extreme storm tides and therefore more catastrophic flooding.
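That inversion is simple counting, as the short sketch below illustrates: estimate the annual rate at which simulated storm tides exceed a given height, then take the reciprocal. The storm catalog, threshold, and simulated-year count are placeholder values, not the study’s numbers.

```python
import numpy as np

def return_period(storm_tides, threshold_m, years_simulated):
    """Return period (years) of storm tides exceeding a threshold height.

    storm_tides: maximum storm tide (meters) of each simulated storm
    years_simulated: total simulated years the storm catalog represents
    """
    exceedances = np.sum(np.asarray(storm_tides) >= threshold_m)
    if exceedances == 0:
        return float("inf")  # never seen in the simulated catalog
    annual_rate = exceedances / years_simulated
    return 1.0 / annual_rate  # e.g., a rate of 0.01/yr is a 100-year event

# Placeholder catalog: 10,000 simulated storms over 2,000 model years.
tides = np.random.default_rng(0).weibull(1.5, size=10_000) * 2.0
print(return_period(tides, threshold_m=3.0, years_simulated=2_000))
```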

From their modeling, Ravela and his colleagues found that under a scenario of increased global warming, the storms that previously were considered 100-year events, producing the highest storm tide values, can recur every decade or less by late-century. They also observed that, toward the end of this century, tropical cyclones in Bangladesh will occur across a broader seasonal window, potentially overlapping in certain years with the seasonal monsoon season.

“If the monsoon rain has come in and saturated the soil, a cyclone then comes in and it makes the problem much worse,” Ravela says. “People won’t have any reprieve between the extreme storm and the monsoon. There are so many compound and cascading effects between the two. And this only emerges because warming happens.”

Ravela and his colleagues are using their modeling to help experts in Bangladesh better evaluate and prepare for a future of increasing storm risk. And he says that the climate future for Bangladesh is in some ways not unique to this part of the world.

“This climate change story that is playing out in Bangladesh in a certain way will be playing out in a different way elsewhere,” Ravela notes. “Maybe where you are, the story is about heat stress, or amplifying droughts, or wildfires. The peril is different. But the underlying catastrophe story is not that different.”

This research is supported, in part, by the MIT Climate Resilience Early Warning Systems Climate Grand Challenges project, the Jameel Observatory JO-CREWSNet project, the MIT Weather and Climate Extremes Climate Grand Challenges project, and Schmidt Sciences, LLC.


Molecules that fight infection also act on the brain, inducing anxiety or sociability

New research on a cytokine called IL-17 adds to growing evidence that immune molecules can influence behavior during illness.


Immune molecules called cytokines play important roles in the body’s defense against infection, helping to control inflammation and coordinating the responses of other immune cells. A growing body of evidence suggests that some of these molecules also influence the brain, leading to behavioral changes during illness.

Two new studies from MIT and Harvard Medical School, focused on a cytokine called IL-17, now add to that evidence. The researchers found that IL-17 acts on two distinct brain regions — the amygdala and the somatosensory cortex — to exert two divergent effects. In the amygdala, IL-17 can elicit feelings of anxiety, while in the cortex it promotes sociable behavior.

These findings suggest that the immune and nervous systems are tightly interconnected, says Gloria Choi, an associate professor of brain and cognitive sciences, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the studies.

“If you’re sick, there’s so many more things that are happening to your internal states, your mood, and your behavioral states, and that’s not simply you being fatigued physically. It has something to do with the brain,” she says.

Jun Huh, an associate professor of immunology at Harvard Medical School, is also a senior author of both studies, which appear today in Cell. One of the papers was led by Picower Institute Research Scientist Byeongjun Lee and former Picower Institute research scientist Jeong-Tae Kwon, and the other was led by Harvard Medical School postdoc Yunjin Lee and Picower Institute postdoc Tomoe Ishikawa.

Behavioral effects

Choi and Huh became interested in IL-17 several years ago, when they found it was involved in a phenomenon known as the fever effect. Large-scale studies of autistic children have found that for many of them, their behavioral symptoms temporarily diminish when they have a fever.

In a 2019 study in mice, Choi and Huh showed that in some cases of infection, IL-17 is released and suppresses a small region of the brain’s cortex known as S1DZ. Overactivation of neurons in this region can lead to autism-like behavioral symptoms in mice, including repetitive behaviors and reduced sociability.

“This molecule became a link that connects immune system activation, manifested as a fever, to changes in brain function and changes in the animals’ behavior,” Choi says.

IL-17 comes in six different forms, and there are five different receptors that can bind to it. In their two new papers, the researchers set out to map which of these receptors are expressed in different parts of the brain. This mapping revealed that a pair of receptors known as IL-17RA and IL-17RB is found in the cortex, including in the S1DZ region that the researchers had previously identified. The receptors are located in a population of neurons that receive proprioceptive input and are involved in controlling behavior.

When a type of IL-17 known as IL-17E binds to these receptors, the neurons become less excitable, which leads to the behavioral effects seen in the 2019 study.

“IL-17E, which we’ve shown to be necessary for behavioral mitigation, actually does act almost exactly like a neuromodulator in that it will immediately reduce these neurons’ excitability,” Choi says. “So, there is an immune molecule that’s acting as a neuromodulator in the brain, and its main function is to regulate excitability of neurons.”

Choi hypothesizes that IL-17 may have originally evolved as a neuromodulator, and later on was appropriated by the immune system to play a role in promoting inflammation. That idea is consistent with previous work showing that in the worm C. elegans, IL-17 has no role in the immune system but instead acts on neurons. Among its effects in worms, IL-17 promotes aggregation, a form of social behavior. Additionally, in mammals, IL-17E is actually made by neurons in the cortex, including S1DZ.

“There’s a possibility that a couple of forms of IL-17 perhaps evolved first and foremost to act as a neuromodulator in the brain, and maybe later were hijacked by the immune system also to act as immune modulators,” Choi says.

Provoking anxiety

In the other Cell paper, the researchers explored another brain location where they found IL-17 receptors — the amygdala. This almond-shaped structure plays an important role in processing emotions, including fear and anxiety.

That study revealed that in a region known as the basolateral amygdala (BLA), the IL-17RA and IL-17RE receptors, which work as a pair, are expressed in a discrete population of neurons. When these receptors bind to IL-17A and IL-17C, the neurons become more excitable, leading to an increase in anxiety.

The researchers also found that, counterintuitively, if animals are treated with antibodies that block IL-17 receptors, it actually increases the amount of IL-17C circulating in the body. This finding may help to explain unexpected outcomes observed in a clinical trial of a drug targeting the IL-17RA receptor for psoriasis treatment, particularly regarding its potential adverse effects on mental health.

“We hypothesize that there’s a possibility that the IL-17 ligand that is upregulated in this patient cohort might act on the brain to induce suicide ideation, while in animals there is an anxiogenic phenotype,” Choi says.

During infections, this anxiety may be a beneficial response, keeping the sick individual away from others to whom the infection could spread, Choi hypothesizes.

“Other than its main function of fighting pathogens, one of the ways that the immune system works is to control the host behavior, to protect the host itself and also protect the community the host belongs to,” she says. “One of the ways the immune system is doing that is to use cytokines, secreted factors, to go to the brain as communication tools.”

The researchers found that the same BLA neurons that have receptors for IL-17 also have receptors for IL-10, a cytokine that suppresses inflammation. This molecule counteracts the excitability generated by IL-17, giving the body a way to shut off anxiety once it’s no longer useful.

Distinctive behaviors

Together, the two studies suggest that the immune system, and even a single family of cytokines, can exert a variety of effects in the brain.

“We have now different combinations of IL-17 receptors being expressed in different populations of neurons, in two different brain regions, that regulate very distinct behaviors. One is actually somewhat positive and enhances social behaviors, and another is somewhat negative and induces anxiogenic phenotypes,” Choi says.

Her lab is now working on additional mapping of IL-17 receptor locations, as well as the IL-17 molecules that bind to them, focusing on the S1DZ region. Eventually, a better understanding of these neuro-immune interactions may help researchers develop new treatments for neurological conditions such as autism or depression.

“The fact that these molecules are made by the immune system gives us a novel approach to influence brain function as a means of therapeutics,” Choi says. “Instead of thinking about directly going for the brain, can we think about doing something to the immune system?”

The research was funded, in part, by Jeongho Kim and the Brain Impact Foundation Neuro-Immune Fund, the Simons Foundation Autism Research Initiative, the Simons Center for the Social Brain, the Marcus Foundation, the N of One: Autism Research Foundation, the Burroughs Wellcome Fund, the Picower Institute Innovation Fund, the MIT John W. Jarve Seed Fund for Science Innovation, Young Soo Perry and Karen Ha, and the National Institutes of Health.