Volcanoes and wildfires can inject millions of tons of gases and aerosol particles into the air, affecting temperatures on a global scale. But picking out the specific impact of individual events against a background of many contributing factors is like listening for one person’s voice from across a crowded concourse.
MIT scientists now have a way to quiet the noise and identify the specific signal of wildfires and volcanic eruptions, including their effects on Earth’s global atmospheric temperatures.
In a study appearing this week in the Proceedings of the National Academy of Sciences, the researchers report that they detected statistically significant changes in global atmospheric temperatures in response to three major natural events: the eruption of Mount Pinatubo in 1991, the Australian wildfires in 2019-2020, and the eruption of the underwater volcano Hunga Tonga in the South Pacific in 2022.
While the specifics of each event differed, all three appeared to significantly affect temperatures in the stratosphere. The stratosphere lies above the troposphere, the lowest layer of the atmosphere, closest to the surface, where global warming has accelerated in recent years. In the new study, Pinatubo showed the classic pattern of stratospheric warming paired with tropospheric cooling. The Australian wildfires significantly warmed the stratosphere and the Hunga Tonga eruption significantly cooled it, but neither produced a robust, globally detectable tropospheric signal over the first two years following each event. This new understanding will help scientists further pin down the effect of human-related emissions on global temperature change.
“Understanding the climate responses to natural forcings is essential for us to interpret anthropogenic climate change,” says study author Yaowei Li, a former postdoc and currently a visiting scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Unlike the global tropospheric and surface cooling caused by Pinatubo, our results also indicate that the Australian wildfires and Hunga Tonga eruption may not have played a role in the acceleration of global surface warming in recent years. So, there must be some other factors.”
The study’s co-authors include Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry at MIT, along with Benjamin Santer of the University of East Anglia, David Thompson of the University of East Anglia and Colorado State University, and Qiang Fu of the University of Washington.
Extraordinary events
The past several years have set back-to-back records for global average surface temperatures. The World Meteorological Organization recently confirmed that the years 2023 to 2025 were the three warmest years on record, while the past 11 years have been the 11 warmest years ever recorded. The world is warming, due mainly to human activities that have emitted huge amounts of greenhouse gases into the atmosphere over centuries.
In addition to greenhouse gases, the atmosphere has been on the receiving end of other large-scale emissions, including sulfur gases and water vapor from volcanic eruptions and smoke particles from wildfires. Li and his colleagues have wondered whether such natural events could have any global impact on temperatures, and whether such an effect would be detectable.
“These events are extraordinary and very unique in terms of the different materials they inject into different altitudes,” Li says. “So we asked the question: Do these events actually perturb the global temperature to a degree that could be identifiable from natural, meteorological noise, and could they contribute to some of the exceptional global surface warming we’ve seen in the last few years?”
In particular, the team looked for signals of global temperature change in response to three large-scale natural events. The Pinatubo eruption injected around 20 million tons of volcanic aerosols into the stratosphere, the largest load ever recorded by modern satellite instruments. The Australian fires injected around 1 million tons of smoke particles into the upper troposphere and stratosphere. And the Hunga Tonga eruption produced the largest atmospheric explosion on satellite record, launching nearly 150 million tons of water vapor into the stratosphere.
If any natural event could measurably shift global temperatures, the team reasoned, it would likely be one of these three.
Natural signals
For their new study, the team took a signal-to-noise approach. They looked to minimize “noise” from other known influences on global temperatures in order to isolate the “signal,” such as a change in temperature associated specifically with one of the three natural events.
To do so, they looked first through satellite measurements taken by the Stratospheric Sounding Unit (SSU) and the Microwave and Advanced Microwave Sounding Units (MSU), which have been measuring global temperatures at different altitudes throughout the atmosphere since 1979. The team compiled SSU and MSU measurements from 1986 to the present day. From these measurements, the researchers could see long-term trends of steady tropospheric warming and stratospheric cooling. Those long-term trends are largely associated with anthropogenic greenhouse gases, which the team subtracted from the dataset.
What was left over was more of a level baseline, which still contained some confounding noise, in the form of natural variability. Global temperature changes can also be affected by phenomena such as El Niño and La Niña, which naturally warm and cool the Earth every few years. The sun also swings global temperatures on a roughly 11-year cycle. The team took this natural variability into account, and subtracted out the effects of these influences.
After minimizing such noise from their dataset, the team reasoned that whatever temperature changes remained could be more easily traced to the three large-scale natural events and quantified. And indeed, when they lined the events up against the temperature record at the times they occurred, they could plainly see how each event influenced temperatures around the world.
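The subtraction scheme described above can be sketched in a few lines: regress the temperature series on an intercept, a linear trend standing in for the long-term greenhouse-gas signal, and indices for known natural oscillations, then keep the residual. The synthetic data, the simple linear model, and the single ENSO-like predictor here are illustrative assumptions, not the study's actual regression.

```python
import numpy as np

def remove_known_influences(temps, predictors):
    """Regress out known influences (e.g., a linear warming trend and
    ENSO/solar indices) from a temperature series; return the residual.
    Illustrative sketch of the signal-to-noise idea, not the authors'
    actual statistical model.
    """
    n = len(temps)
    # Design matrix: intercept, linear trend, then each climate index.
    X = np.column_stack([np.ones(n), np.arange(n)] + list(predictors))
    coefs, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return temps - X @ coefs

# Synthetic demo: warming trend + ENSO-like oscillation + an abrupt
# eruption-like cooling step, all invented for illustration.
rng = np.random.default_rng(0)
months = np.arange(360)
enso = np.sin(2 * np.pi * months / 48)           # stand-in ENSO index
trend = 0.002 * months                           # stand-in warming trend
eruption = np.where((months > 120) & (months < 150), -0.7, 0.0)
temps = trend + 0.3 * enso + eruption + 0.05 * rng.standard_normal(360)

residual = remove_known_influences(temps, [enso])
# Once trend and ENSO are regressed out, the eruption's cooling step
# stands out clearly in the residual.
```

With the confounders removed, the residual during the eruption months sits near the injected -0.7 degree step, while the rest of the series hovers near zero.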
The team found that Pinatubo decreased global tropospheric temperatures by up to about 0.7 degree Celsius, for more than two years following the eruption. The volcanic sulfate aerosols essentially acted as many tiny reflectors, cooling the troposphere and surface by scattering sunlight back into space. At the same time, the aerosols, which remained in the stratosphere, also absorbed heat that was emitted from the surface, subsequently warming the stratosphere.
This finding agreed with many other studies of the event, confirming that the team’s approach is accurate. They then applied the same method to the 2019-2020 Australian wildfires and the 2022 underwater eruption, events whose influence on global temperatures is less clear.
For the Australian wildfires, they found that the smoke particles warmed the global stratosphere by up to about 0.77 degree Celsius, a warming that persisted for about five months; the fires did not produce a clear global tropospheric signal.
“In the end we found that the wildfire smoke caused a very strong warming in the stratosphere, because these materials are very different chemically from sulfate,” Li explains. “They are particles that are dark colored, meaning they are efficient at absorbing solar radiation. So, a relatively small amount of smoke particles can cause a dramatic warming.”
In the case of Hunga Tonga, the underwater eruption triggered a global cooling effect in the middle-to-upper stratosphere of up to about half a degree Celsius, lasting for several years.
“The Australian fires and the Hunga Tonga really packed a punch at stratospheric altitudes, and this study shows for the first time how to quantify how strong that punch was,” says Solomon. “I find their impact up high quite remarkable, but the ongoing issue is why the last several years have been so warm lower down, in the troposphere — ruling out those natural events points even more strongly at human influences.”
3 Questions: Exploring the mechanisms underlying changes during infection
Zuri Sullivan, a new assistant professor of biology and Whitehead Institute member, studies why we get sick, and whether aspects of illness, such as disrupted appetite, contribute to host defense.
With respiratory illness season in full swing, a bad night’s sleep, sore throat, and desire to cancel dinner plans could all be considered hallmark symptoms of the flu, Covid-19, or other illnesses. Although everyone has, at some point, experienced illness and these stereotypical symptoms, the mechanisms that generate them are not well understood.
Zuri Sullivan, a new assistant professor in the MIT Department of Biology and core member of the Whitehead Institute for Biomedical Research, works at the interface of neuroscience, microbiology, physiology, and immunology to study the biological workings underlying illness. In this interview, she describes her work on immunity thus far as well as research avenues — and professional collaborations — she’s excited to explore at MIT.
Q: What is immunity, and why do we get sick in the first place?
A: We can think of immunity in two ways: the antimicrobial programs that defend against a pathogen directly, and sickness, the altered organismal state that happens when we get an infection.
Sickness itself arises from brain-immune system interaction. The immune system is talking to the brain, and then the brain has a system-wide impact on host defense via its ability to have top-down control of physiologic systems and behavior. People might assume that sickness is an unintended consequence of infection, that it happens because your immune system is active, but we hypothesize that it’s likely an adaptive process that contributes to host defense.
If we consider sickness as immunity at the organismal scale, I think of my work as bridging the dynamic immunological processes that occur at the cellular scale, the tissue scale, and the organismal scale. I’m interested in the molecular and cellular mechanisms by which the immune system communicates with the brain to generate changes in behavior and physiology, such as fever, loss of appetite, and changes in social interaction.
Q: What sickness behaviors fascinate you?
A: During my thesis work at Yale University, I studied how the gut processes different nutrients and the role of the immune system in regulating gut homeostasis in response to different kinds of food. I’m especially interested in the interaction between food, the immune system, and the brain. One of the things I’m most excited about is the reduction in appetite, or changes in food choice, because we have what I would consider pretty strong evidence that these may be adaptive.
Sleep is another area we’re interested in exploring. From their own subjective experience, everyone knows that sleep is often altered during infection.
I also don’t just want to examine snapshots in time. I want to characterize changes over the course of an infection. There’s probably going to be individual variability, which I think may be in part because pathogens are also changing over the course of an illness — we’re studying two different biological systems interacting with each other.
Q: What sorts of expertise are you hoping to recruit to your lab, and what collaborations are you excited about pursuing?
A: I really want to bring together different areas of biology to think about organism-wide questions. The thing that’s most important to me is people who are creative — I’d rather trainees come in with an interesting idea than a perfectly formed question within the bounds of what we already believe to be true. I’m also interested in people who would complement my expertise; I’m fascinated by microbiology, but I don’t have any formal training.
The Whitehead Institute is really invested in interdisciplinary work, and there’s a natural synergy between my work and the other labs in this small community at the Whitehead Institute.
I’ve been collaborating with Sebastian Lourido’s lab for a few years, looking at how Toxoplasma gondii influences social behavior, and I’m excited to invest more time in that project. I’m also interested in molecular neuroscience, which is a focus of Siniša Hrvatin’s lab. That lab is interested in the hypothalamus, and trying to understand the mechanisms that generate torpor. My work also focuses on the hypothalamus because it regulates homeostatic behaviors that change during sickness, such as appetite, sleep, social behavior, and body temperature.
By studying different sickness states generated by different kinds of pathogens — parasites, viruses, bacteria — we can ask really interesting questions about how and why we get sick.
Fragile X study uncovers brain wave biomarker bridging humans and mice
Researchers find mice modeling the autism spectrum disorder fragile X syndrome exhibit the same pattern of differences in low-frequency waves as humans — a new marker for treatment studies.
Numerous potential treatments for neurological conditions, including autism spectrum disorders, have worked well in mice but then disappointed in humans. What would help is a non-invasive, objective readout of treatment efficacy that is shared in both species.
In a new open-access study in Nature Communications, a team of MIT researchers, backed by collaborators across the United States and in the United Kingdom, identifies such a biomarker in fragile X syndrome, the most common inherited form of autism.
Led by postdoc Sara Kornfeld-Sylla and Picower Professor Mark Bear, the team measured the brain waves of human boys and men, with or without fragile X syndrome, and comparably aged male mice, with or without the genetic alteration that models the disorder. The novel approach Kornfeld-Sylla used for analysis enabled her to uncover specific and robust patterns of differences in low-frequency brain waves between typical and fragile X brains shared between species at each age range. In further experiments, the researchers related the brain waves to specific inhibitory neural activity in the mice and showed that the biomarker was able to indicate the effects of even single doses of a candidate treatment for fragile X called arbaclofen, which enhances inhibition in the brain.
Both Kornfeld-Sylla and Bear praised and thanked colleagues at Boston Children’s Hospital, the Phelan-McDermid Syndrome Foundation, Cincinnati Children’s Hospital, the University of Oklahoma, and King’s College London for gathering and sharing data for the study.
“This research weaves together these different datasets and finds the connection between the brain wave activity that’s happening in fragile X humans that is different from typically developed humans, and in the fragile X mouse model that is different than the ‘wild-type’ mice,” says Kornfeld-Sylla, who earned her PhD in Bear’s lab in 2024 and continued the research as a FRAXA postdoc. “The cross-species connection and the collaboration really makes this paper exciting.”
Bear, a faculty member in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT, says having a way to directly compare brain waves can advance treatment studies.
“Because that is something we can measure in mice and humans minimally invasively, you can pose the question: If drug treatment X affects this signature in the mouse, at what dose does that same drug treatment change that same signature in a human?” Bear says. “Then you have a mapping of physiological effects onto measures of behavior. And the mapping can go both ways.”
Peaks and powers
In the study, the researchers measured EEG over the occipital lobe of humans and on the surface of the visual cortex of the mice. They measured power across the frequency spectrum, replicating previous reports of altered low-frequency brain waves in adult humans with fragile X and showing for the first time how these disruptions differ in children with fragile X.
To enable comparisons with mice, Kornfeld-Sylla subtracted out background activity to specifically isolate only “periodic” fluctuations in power (i.e., the brain waves) at each frequency. She also disregarded the typical way brain waves are grouped by frequency (into distinct bands with Greek letter designations delta, theta, alpha, beta, and gamma) so that she could simply juxtapose the periodic power spectra of the humans and mice without trying to match them band by band (e.g., trying to compare the mouse “alpha” band to the human one). This turned out to be crucial because the significant, similar patterns exhibited by the mice actually occurred in a different low-frequency band than in the humans (theta vs. alpha). Both species also had alterations in higher-frequency bands in fragile X, but Kornfeld-Sylla noted that the differences in the low-frequency brainwaves are easier to measure and more reliable in humans, making them a more promising biomarker.
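The background-subtraction step described above can be illustrated with a minimal sketch: fit a 1/f-like aperiodic background to the power spectrum in log-log space and keep the residual as the "periodic" power, whose largest peak marks the dominant brain wave. The linear log-log fit and the synthetic alpha-band spectrum below are illustrative assumptions; the study's actual parameterization may differ.

```python
import numpy as np

def periodic_power(freqs, power):
    """Separate a power spectrum into an aperiodic (1/f-like) background
    and residual periodic peaks, in the spirit of the analysis above.
    Fits log-power as a linear function of log-frequency (a simple 1/f
    model) and returns the residual at each frequency.
    """
    logf = np.log10(freqs)
    logp = np.log10(power)
    # Least-squares fit of the aperiodic background in log-log space.
    coefs = np.polyfit(logf, logp, 1)
    background = np.polyval(coefs, logf)
    return logp - background  # positive values mark oscillatory peaks

# Synthetic spectrum: 1/f background plus an alpha-band (10 Hz) peak,
# invented for the demo.
freqs = np.linspace(2, 40, 200)
spectrum = 1.0 / freqs + 0.3 * np.exp(-((freqs - 10) ** 2) / 4)
periodic = periodic_power(freqs, spectrum)
peak_freq = freqs[np.argmax(periodic)]
```

After the background is removed, the residual peaks near 10 Hz, recovering the injected alpha-band oscillation regardless of the 1/f slope underneath it.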
So what patterns constitute the biomarker? In adult men and mice alike, a peak in the power of low-frequency waves is shifted to a significantly slower frequency in fragile X cases than in neurotypical cases. Meanwhile, in fragile X boys and juvenile mice, while the peak is somewhat shifted to a slower frequency, what is really significant is reduced power in that same peak.
The researchers were also able to discern that the peak in question is actually made of two distinct subpeaks, and that the lower-frequency subpeak is the one that varies specifically with fragile X syndrome.
Curious about the neural activity underlying the measurements, the researchers engaged in experiments in which they turned off activity of two different kinds of inhibitory neurons that are known to help produce and shape brain wave patterns: somatostatin-expressing and parvalbumin-expressing interneurons. Manipulating the somatostatin neurons specifically affected the lower-frequency subpeak that contained the newly discovered biomarker in fragile X model mice.
Drug testing
Somatostatin interneurons exert their effects on the neurons they connect to via the neurotransmitter GABA, and evidence from prior studies suggests that GABA receptivity is reduced in fragile X syndrome. A therapeutic approach pioneered by Bear and others has been to give the drug arbaclofen, which enhances GABA activity. In the new study, the researchers treated both control and fragile X model mice with arbaclofen to see how it affected the low-frequency biomarker.
Even the lowest administered single dose made a significant difference in the neurotypical mice, which is consistent with those mice having normal GABA responsiveness. Fragile X mice needed a higher dose, but after one was administered, there was a notable increase in the power of the key subpeak, reducing the deficit exhibited by juvenile mice.
The arbaclofen experiments therefore demonstrated that the biomarker provides a significant readout of an underlying pathophysiology of fragile X: the reduced GABA responsiveness. Bear also noted that it helped to identify a dose at which arbaclofen exerted a corrective effect, even though the drug was only administered acutely, rather than chronically. An arbaclofen therapy would, of course, be given over a long time frame, not just once.
“This is a proof of concept that a drug treatment could move this phenotype acutely in a direction that makes it closer to wild-type,” Bear says. “This effort reveals that we have readouts that can be sensitive to drug treatments.”
Meanwhile, Kornfeld-Sylla notes, there is a broad spectrum of brain disorders in which human patients exhibit significant differences in low-frequency (alpha) brain waves compared to neurotypical peers.
“Disruptions akin to the biomarker we found in this fragile X study might prove to be evident in mouse models of those other disorders, too,” she says. “Identifying this biomarker could broadly impact future translational neuroscience research.”
The paper’s other authors are Cigdem Gelegen, Jordan Norris, Francesca Chaloner, Maia Lee, Michael Khela, Maxwell Heinrich, Peter Finnie, Lauren Ethridge, Craig Erickson, Lauren Schmitt, Sam Cooke, and Carol Wilkinson.
The National Institutes of Health, the National Science Foundation, the FRAXA Foundation, the Pierce Family Fragile X Foundation, the Autism Science Foundation, the Thrasher Research Fund, Harvard University, the Simons Foundation, Wellcome, the Biotechnology and Biological Sciences Research Council, and the Freedom Together Foundation provided support for the research.
MIT faculty, alumni named 2026 Sloan Research Fellows
Annual award honors early-career researchers for creativity, innovation, and research accomplishments.
Eight MIT faculty and 22 additional MIT alumni are among 126 early-career researchers honored with 2026 Sloan Research Fellowships by the Alfred P. Sloan Foundation.
The fellowships honor exceptional researchers at U.S. and Canadian educational institutions, whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders. Winners receive a two-year, $75,000 fellowship that can be used flexibly to advance the fellow’s research.
"The Sloan Research Fellows are among the most promising early-career researchers in the U.S. and Canada, already driving meaningful progress in their respective disciplines," says Stacie Bloom, president and chief executive officer of the Alfred P. Sloan Foundation. "We look forward to seeing how these exceptional scholars continue to unlock new scientific advancements, redefine their fields, and foster the well-being and knowledge of all."
Including this year’s recipients, a total of 341 MIT faculty have received Sloan Research Fellowships since the program’s inception in 1955. The MIT recipients are:
Jacopo Borga is interested in probability theory and its connections to combinatorics, and in mathematical physics. He studies various random combinatorial structures — mathematical objects such as graphs or permutations — and their patterns and behavior at a large scale. This research includes random permutons, meanders, multidimensional constrained Brownian motions, Schramm-Loewner evolutions, and Liouville quantum gravity. Borga earned bachelor’s and master’s degrees in mathematics from the Università degli Studi di Padova in Italy and a master’s degree in mathematics from Université Sorbonne Paris Cité in France, then completed a PhD in mathematics at the Institut für Mathematik at the Universität Zürich in Switzerland. Borga was an assistant professor at Stanford University before joining MIT as an assistant professor of mathematics in 2024.
Anna-Christina Eilers is an astrophysicist and assistant professor at MIT’s Department of Physics as well as a member of the MIT Kavli Institute for Astrophysics and Space Research. Her work explores how black holes form and evolve across cosmic time, studying their origins and the role they play in shaping our universe. She leverages multi-wavelength data from telescopes all around the world and in space to study how the first galaxies, black holes, and quasars emerged during an epoch known as the Cosmic Dawn of our universe. She grew up in Germany and completed her PhD at the Max Planck Institute for Astronomy in Heidelberg. Subsequently, she was awarded a NASA Hubble Fellowship and a Pappalardo Fellowship to continue her research at MIT, where she joined the faculty in 2023. Her work has been recognized with several honors, including the PhD Prize of the International Astronomical Union, the Otto Hahn Medal of the Max Planck Society, and the Ludwig Biermann Prize of the German Astronomical Society.
Linlin Fan is the Samuel A. Goldblith Career Development Assistant Professor of Applied Biology in the Department of Brain and Cognitive Sciences and the Picower Institute for Learning and Memory at MIT. Her lab focuses on the development and application of advanced all-optical physiological techniques to understand the plasticity mechanisms underlying learning and memory. She has developed and applied high-speed, cellular-precision all-optical physiological techniques for simultaneously mapping and controlling membrane potential in specific neurons in behaving mammals. Prior to joining MIT, Fan was a Helen Hay Whitney Postdoctoral Fellow in Karl Deisseroth’s laboratory at Stanford University. She obtained her PhD in chemical biology from Harvard University in 2019 with Adam Cohen. Her work has been recognized by several awards, including the Larry Katz Memorial Lecture Award from the Cold Spring Harbor Laboratory, Helen Hay Whitney Fellowship, Career Award at the Scientific Interface from the Burroughs Wellcome Fund, Klingenstein-Simons Fellowship Award, Searle Scholar Award, and NARSAD Young Investigator Award.
Yoon Kim is an associate professor in the Department of EECS and a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT-IBM Watson AI Lab, where he works on natural language processing and machine learning. Kim earned a PhD in computer science at Harvard University, an MS in data science from New York University, an MA in statistics from Columbia University, and a BA in math and economics from Cornell University. He joined EECS in 2021, after spending a year as a postdoc at the MIT-IBM Watson AI Lab.
Haihao Lu PhD ’19 is the Cecil and Ida Green Career Development Assistant Professor, and an assistant professor of operations research/statistics at the MIT Sloan School of Management. Lu’s research lies at the intersection of optimization, computation, and data science, with a focus on pushing the computational and mathematical frontiers of large-scale optimization. Much of his work is inspired by real-world challenges faced by leading technology companies and optimization software companies, such as first-order methods and scalable solvers and data-driven optimization for resource allocation. His research has had real-world impact, generating substantial revenue and advancing the state of practice in large-scale optimization, and has been recognized by several research awards. Before joining MIT Sloan, he was an assistant professor at the University of Chicago Booth School of Business and a faculty researcher at Google Research’s large-scale optimization team. He obtained his PhD in mathematics and operations research at MIT in 2019.
Brett McGuire is the Class of 1943 Career Development Associate Professor of Chemistry at MIT. He completed his undergraduate studies at the University of Illinois at Urbana-Champaign before earning an MS from Emory University and a PhD from Caltech, both in physical chemistry. After Jansky and Hubble postdoctoral fellowships at the National Radio Astronomy Observatory, he joined the MIT faculty in 2020 and was promoted to associate professor in 2025. The McGuire Group integrates physical chemistry, molecular spectroscopy, and observational astrophysics to explore how the chemical building blocks of life evolve alongside the formation of stars and planets.
Anand Natarajan PhD ’18 is an associate professor in EECS and a principal investigator in CSAIL and the MIT-IBM Watson AI Lab. His research is mainly in quantum complexity theory, with a focus on the power of interactive proofs and arguments in a quantum world. Essentially, his work attempts to assess the complexity of computational problems in a quantum setting, determining both the limits of quantum computers’ capability and the trustworthiness of their output. Natarajan earned his PhD in physics from MIT, and an MS in computer science and BS in physics from Stanford University. Prior to joining MIT in 2020, he spent time as a postdoc at the Institute for Quantum Information and Matter at Caltech.
Mengjia Yan is an associate professor in the Department of EECS and a principal investigator in CSAIL. She is a security computer architect whose research advances secure processor design by bridging computer architecture, systems security, and formal methods. Her work identifies critical blind spots in hardware threat models and improves the resilience of real-world systems against information leakage and exploitation. Several of her discoveries have influenced commercial processor designs and contributed to changes in how hardware security risks are evaluated in practice. In parallel, Yan develops architecture-driven techniques to improve the scalability of formal verification and introduces new design principles toward formally verifiable processors. She also designed the Secure Hardware Design (SHD) course, now widely adopted by universities worldwide to teach computer architecture security from both offensive and defensive perspectives.
The following MIT alumni also received fellowships:
Ashok Ajoy PhD ’16
Chibueze Amanchukwu PhD ’17
Annie M. Bauer PhD ’17
Kimberly K. Boddy ’07
danah boyd SM ’02
Yuan Cao SM ’16, PhD ’20
Aloni Cohen SM ’15, PhD ’19
Fei Dai PhD ’19
Madison M. Douglas ’16
Philip Engel ’10
Benjamin Eysenbach ’17
Tatsunori B. Hashimoto SM ’14, PhD ’16
Xin Jin ’10
Isaac Kim ’07
Christina Patterson PhD ’19
Katelin Schutz ’14
Karthik Shekhar PhD ’15
Shriya S. Srinivasan PhD ’20
Jerzy O. Szablowski ’09
Anna Wuttig PhD ’18
Zoe Yan PhD ’20
Lingfu Zhang ’18
By now, ChatGPT, Claude, and other large language models have accumulated so much human knowledge that they’re far from simple answer-generators; they can also express abstract concepts, such as certain tones, personalities, biases, and moods. However, it’s not obvious exactly how these models come to represent such abstract concepts from the knowledge they contain.
Now a team from MIT and the University of California San Diego has developed a way to test whether a large language model (LLM) contains hidden biases, personalities, moods, or other abstract concepts. Their method can zero in on connections within a model that encode for a concept of interest. What’s more, the method can then manipulate, or “steer,” these connections to strengthen or weaken the concept in any answer a model is prompted to give.
The team proved their method could quickly root out and steer more than 500 general concepts in some of the largest LLMs used today. For instance, the researchers could home in on a model’s representations for personalities such as “social influencer” and “conspiracy theorist,” and stances such as “fear of marriage” and “fan of Boston.” They could then tune these representations to enhance or minimize the concepts in any answers that a model generates.
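As a rough illustration of the identify-then-steer idea, the sketch below estimates a concept direction as the difference of mean activations between concept-positive and concept-negative examples, then shifts a hidden state along that direction. This difference-of-means probe is a much-simplified stand-in for the recursive feature machines the study actually uses, and the synthetic "activations" are invented for the demo.

```python
import numpy as np

def concept_direction(pos_acts, neg_acts):
    """Estimate a unit direction in activation space that encodes a
    concept, as the difference of mean activations on concept-positive
    vs. concept-negative prompts. A simplified stand-in for the paper's
    recursive feature machine, shown only to convey the idea.
    """
    d = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def steer(hidden, direction, strength):
    """Strengthen (positive strength) or weaken (negative strength) a
    concept by shifting a hidden state along the concept direction."""
    return hidden + strength * direction

# Toy demo with synthetic 8-D "activations" clustered around +/- a
# known ground-truth direction.
rng = np.random.default_rng(1)
true_dir = np.array([1.0, 0, 0, 0, 0, 0, 0, 0])
pos = rng.standard_normal((50, 8)) * 0.1 + true_dir
neg = rng.standard_normal((50, 8)) * 0.1 - true_dir
d = concept_direction(pos, neg)

h = np.zeros(8)                 # a stand-in hidden state
h_steered = steer(h, d, 2.0)    # push the "concept" up
```

In a real model the same shift would be applied to hidden states during generation, nudging every answer toward or away from the concept.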
In the case of the “conspiracy theorist” concept, the team successfully identified a representation of this concept within one of the largest vision language models available today. When they enhanced the representation, and then prompted the model to explain the origins of the famous “Blue Marble” image of Earth taken from Apollo 17, the model generated an answer with the tone and perspective of a conspiracy theorist.
The team acknowledges there are risks to extracting certain concepts, which they also illustrate (and caution against). Overall, however, they see the new approach as a way to illuminate hidden concepts and potential vulnerabilities in LLMs that could then be turned up or down to improve a model’s safety or enhance its performance.
“What this really says about LLMs is that they have these concepts in them, but they’re not all actively exposed,” says Adityanarayanan “Adit” Radhakrishnan, assistant professor of mathematics at MIT. “With our method, there’s ways to extract these different concepts and activate them in ways that prompting cannot give you answers to.”
The team published their findings today in a study appearing in the journal Science. The study’s co-authors include Radhakrishnan, Daniel Beaglehole and Mikhail Belkin of UC San Diego, and Enric Boix-Adserà of the University of Pennsylvania.
A fish in a black box
As use of OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and other artificial intelligence assistants has exploded, scientists are racing to understand how models represent certain abstract concepts such as “hallucination” and “deception.” In the context of an LLM, a hallucination is a response that is false or contains misleading information, which the model has “hallucinated,” or constructed erroneously as fact.
To find out whether a concept such as “hallucination” is encoded in an LLM, scientists have often taken an approach of “unsupervised learning” — a type of machine learning in which algorithms broadly trawl through unlabeled representations to find patterns that might relate to a concept such as “hallucination.” But to Radhakrishnan, such an approach can be too broad and computationally expensive.
“It’s like going fishing with a big net, trying to catch one species of fish. You’re gonna get a lot of fish that you have to look through to find the right one,” he says. “Instead, we’re going in with bait for the right species of fish.”
He and his colleagues had previously developed the beginnings of a more targeted approach with a type of predictive modeling algorithm known as a recursive feature machine (RFM). An RFM is designed to directly identify features or patterns within data by leveraging a mathematical mechanism that neural networks — a broad category of AI models that includes LLMs — implicitly use to learn features.
Since the algorithm was an effective, efficient approach for capturing features in general, the team wondered whether they could use it to root out representations of concepts in LLMs, which are by far the most widely used type of neural network and perhaps the least well understood.
“We wanted to apply our feature learning algorithms to LLMs to, in a targeted way, discover representations of concepts in these large and complex models,” Radhakrishnan says.
Converging on a concept
The team’s new approach identifies any concept of interest within an LLM and “steers” or guides a model’s response based on this concept. The researchers looked for 512 concepts within five classes: fears (such as of marriage, insects, and even buttons); experts (social influencer, medievalist); moods (boastful, detachedly amused); a preference for locations (Boston, Kuala Lumpur); and personas (Ada Lovelace, Neil deGrasse Tyson).
The researchers then searched for representations of each concept in several of today’s large language and vision models. They did so by training RFMs to recognize numerical patterns in an LLM that could represent a particular concept of interest.
A standard large language model is, broadly, a neural network that takes a natural language prompt, such as “Why is the sky blue?” and divides the prompt into individual words, each of which is encoded mathematically as a list, or vector, of numbers. The model passes these vectors through a series of computational layers, creating matrices of numbers that, at each layer, are used to identify the words most likely to be used in responding to the original prompt. Eventually, the layers converge on a set of numbers that is decoded back into text, in the form of a natural language response.
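The pipeline described above — prompt to word vectors, vectors through layers, numbers decoded back to text — can be caricatured in a few lines of code. Everything below (the tiny vocabulary, the dimensions, the random weights) is invented for illustration; a real LLM has billions of trained parameters and far more structure, such as attention.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and randomly initialized "model" weights (illustration only).
vocab = ["why", "is", "the", "sky", "blue", "scattering"]
d_model = 8                                        # size of each word's vector
embed = rng.normal(size=(len(vocab), d_model))     # one vector per word
layers = [rng.normal(size=(d_model, d_model)) for _ in range(3)]
unembed = rng.normal(size=(d_model, len(vocab)))   # map back to the vocabulary

def forward(prompt_words):
    # Encode each word as a vector, then pool them into one hidden state.
    ids = [vocab.index(w) for w in prompt_words]
    h = embed[ids].mean(axis=0)
    # Pass the hidden state through each computational layer.
    for W in layers:
        h = np.tanh(h @ W)
    # Decode: score every vocabulary word and pick the most likely one.
    logits = h @ unembed
    return vocab[int(np.argmax(logits))]

next_word = forward(["why", "is", "the", "sky", "blue"])
print(next_word)  # one of the six toy vocabulary words
```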
The team’s approach trains RFMs to recognize numerical patterns in an LLM that could be associated with a specific concept. As an example, to see whether an LLM contains any representation of a “conspiracy theorist,” the researchers would first train the algorithm to identify patterns among LLM representations of 100 prompts that are clearly related to conspiracies, and 100 other prompts that are not. In this way, the algorithm would learn patterns associated with the conspiracy theorist concept. Then, the researchers can mathematically modulate the activity of the conspiracy theorist concept by perturbing LLM representations with these identified patterns.
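The find-then-perturb recipe above can be sketched with a much simpler stand-in: here the paper's recursive feature machine is replaced by a difference-of-means linear probe, and the "hidden states" for concept-related and unrelated prompts are synthetic. This is a hedged illustration of the general activation-steering idea, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64  # hidden-state dimensionality (invented for this sketch)

# Synthetic hidden states: 100 prompts related to the concept and 100 not,
# with the concept-related ones shifted along a hidden "true" direction.
concept_dir_true = rng.normal(size=d)
concept_dir_true /= np.linalg.norm(concept_dir_true)
pos = rng.normal(size=(100, d)) + 2.0 * concept_dir_true  # concept prompts
neg = rng.normal(size=(100, d))                           # unrelated prompts

# Step 1: learn a direction that separates the two groups of representations.
direction = pos.mean(axis=0) - neg.mean(axis=0)
direction /= np.linalg.norm(direction)

# Step 2: "steer" by perturbing a hidden state along the learned direction.
def steer(hidden_state, strength):
    return hidden_state + strength * direction

h = rng.normal(size=d)
score_before = h @ direction
score_after = steer(h, strength=5.0) @ direction
print(score_before, score_after)  # steering raises the concept score
```

The strength parameter plays the role of turning the concept "up or down": positive values enhance it, negative values suppress it.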
The method can be applied to search for and manipulate any general concept in an LLM. Among many examples, the researchers identified representations and manipulated an LLM to give answers in the tone and perspective of a “conspiracy theorist.” They also identified and enhanced the concept of “anti-refusal,” and showed that whereas normally, a model would be programmed to refuse certain prompts, it instead answered, for instance giving instructions on how to rob a bank.
Radhakrishnan says the approach can be used to quickly search for and minimize vulnerabilities in LLMs. It can also be used to enhance certain traits, personalities, moods, or preferences, such as emphasizing the concept of “brevity” or “reasoning” in any response an LLM generates. The team has made the method’s underlying code publicly available.
“LLMs clearly have a lot of these abstract concepts stored within them, in some representation,” Radhakrishnan says. “There are ways where, if we understand these representations well enough, we can build highly specialized LLMs that are still safe to use but really effective at certain tasks.”
This work was supported, in part, by the National Science Foundation, the Simons Foundation, the TILOS institute, and the U.S. Office of Naval Research.
New study unveils the mechanism behind “boomerang” earthquakes
These ricocheting ruptures may be more common than previously thought.
An earthquake typically sets off ruptures that ripple out from its underground origins. But on rare occasions, seismologists have observed quakes that reverse course, further shaking up areas that they passed through only seconds before. These “boomerang” earthquakes often occur in regions with complex fault systems. But a new study by MIT researchers predicts that such ricochet ruptures can occur even along simple faults.
The study, which appears today in the journal AGU Advances, reports that boomerang earthquakes can happen along a simple fault under several conditions: if the quake propagates out in just one direction, over a large enough distance, and if friction along the rupturing fault builds and subsides rapidly during the quake. Under these conditions, even a simple straight fault, like some segments of the San Andreas fault in California, could experience a boomerang quake.
These newly identified conditions are relatively common, suggesting that many earthquakes that have occurred along simple faults may have experienced a boomerang effect, or what scientists term “back-propagating fronts.”
“Our work suggests that these boomerang quakes may have been undetected in a number of cases,” says study author Yudong Sun, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “We do think this behavior may be more common than we have seen so far in the seismic data.”
The new results could help scientists better assess future hazards in simple fault zones where boomerang quakes could potentially strike twice.
“In most cases, it would be impossible for a person to tell that an earthquake has propagated back just from the ground shaking, because ground motion is complex and affected by many factors,” says co-author Camilla Cattania, the Cecil and Ida Green Career Development Professor of Geophysics at MIT. “However, we know that shaking is amplified in the direction of rupture, and buildings would shake more in response. So there is a real effect in terms of the damage that results. That’s why understanding where these boomerang events could occur matters.”
Keep it simple
There have been a handful of instances where scientists have recorded seismic data suggesting that a quake reversed direction. In 2016, an earthquake in the middle of the Atlantic Ocean rippled eastward, and then seconds later ricocheted back west. Similar return rumblers may have occurred in 2011 during the magnitude 9 earthquake in Tohoku, Japan, and in 2023 during the destructive magnitude 7.8 quake in Turkey and Syria, among others.
These events took place in various fault regions, from complex zones of multiple intersecting fault lines to regions with just a single, straight fault. While seismologists have assumed that such complex quakes would be more likely to occur in multifault systems, the rare examples along simple faults got Sun and Cattania wondering: Could an earthquake reverse course along a simple fault? And if so, what could cause such a bounce-back in a seemingly simple system?
“When you see this boomerang-like behavior, it is tempting to explain this in terms of some complexity in the Earth,” Cattania says. “For instance, there may be many faults that interact, with earthquakes jumping between fault segments, or fault surfaces with prominent kinks and bends. In many cases, this could explain back-propagating behavior. But what we found was, you could have a very simple fault and still get this complex behavior.”

Faulty friction
In their new study, the team looked to simulate an earthquake along a simple fault system. In geology, a fault is a crack or fracture that runs through the Earth’s crust. An earthquake begins when the stress between rocks on either side of the fault is suddenly released and one side slides against the other, setting off seismic waves that rupture rocks all along the fault. This seismic activity, which initiates deep in the crust, can sometimes reach and shake up the surface.
Cattania and Sun used a computer model to represent the fundamental physics at play during an earthquake along a simple fault. In their model, they simulated the Earth’s crust as a simple elastic material, in which they embedded a single straight fault. They then simulated how the fault would exhibit an earthquake under different scenarios. For instance, the team varied the length of the fault and the location of the quake’s initiation point below the surface, as well as whether the quake traveled in one versus two directions.
Over multiple simulations, they observed that only the unilateral quakes — those that traveled in one direction — exhibited a boomerang effect. Specifically, these quakes seemed to include a type that seismologists term “back-propagating” events, in which the rumbler splits at some point along the fault, partly continuing in the same direction and partly reversing back the way it came.
“When you look at a simulation, sometimes you don’t fully understand what causes a given behavior,” Cattania says. “So we developed mathematical models to understand it. And we went back and forth, to ultimately develop a simple theory that tells you that you should only see this back-propagation under these certain conditions.”
Those conditions, as the team’s new theory lays out, have to do with the friction along the fault. In standard earthquake physics, it’s generally understood that an earthquake is triggered when the stress built up between rocks on either side of a fault is suddenly released. Rocks slide against each other in response, decreasing a fault’s friction. The reduction in fault friction creates a positive feedback that facilitates further sliding, sustaining the earthquake.
However, in their simulations, the team observed that when a quake travels along a fault in one direction, it can back-propagate when friction along the fault goes down, then up, and then down again.
“When the quake propagates in one direction, it produces a “braking” effect that reduces the sliding velocity, increases friction, and allows only a narrow section of the fault to slide at a time,” Cattania says. “The region behind the quake, which stops sliding, can then rupture again, because it has accumulated more stress to slide again.”
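The down-up-down friction evolution the theory requires can be caricatured with a simple slip-dependent friction curve. The functional form and every constant below are invented purely to show the shape; they are not the friction laws used in the team's actual simulations.

```python
import numpy as np

# Schematic friction coefficient versus slip: initial weakening as the fault
# starts to slide, restrengthening as the narrow sliding zone "brakes," then
# weakening again on re-rupture. Purely illustrative numbers.
def friction(slip):
    weakening = -0.25 * (1.0 - np.exp(-3.0 * slip))      # initial drop
    restrengthening = 0.15 * np.exp(-4.0 * (slip - 2.0) ** 2)  # braking bump
    return 0.6 + weakening + restrengthening

slip = np.linspace(0.0, 5.0, 501)
mu = friction(slip)

# Count sign changes of the slope: a down-up-down curve has exactly two
# turning points (one local minimum, then one local maximum).
dmu = np.diff(mu)
turning_points = int(np.sum(np.sign(dmu[1:]) != np.sign(dmu[:-1])))
print(turning_points)  # 2
```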
The team found that, in addition to traveling in one direction and along a fault with changing friction, a boomerang is likely to occur if a quake has traveled over a large enough distance.
“This implies that large earthquakes are not simply ‘scaled-up’ versions of small earthquakes, but instead they have their own unique rupture behavior,” Sun says.
The team suspects that back-propagating quakes may be more common than scientists have thought, and they may occur along simple, straight faults, which are typically older than more complex fault systems.
“You shouldn’t only expect this complex behavior on a young, complex fault system. You can also see it on mature, simple faults,” Cattania says. “The key open question now is how often rupture reversals, or ‘boomerang’ earthquakes, occur in nature. Many observational studies so far have used methods that can’t detect back-propagating fronts. Our work motivates actively looking for them, to further advance our understanding of earthquake physics and ultimately mitigate seismic risk.”
MIT community members elected to the National Academy of Engineering for 2026
Seven faculty members, along with 12 additional alumni, are honored for significant contributions to engineering research, practice, and education.
Seven MIT researchers are among the 130 new members and 28 international members recently elected to the National Academy of Engineering (NAE) for 2026. Twelve additional MIT alumni were also elected as new members.
One of the highest professional distinctions for engineers, membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education,” and to “the pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education.”
The seven MIT electees this year include:
Moungi Gabriel Bawendi, the Lester Wolfe Professor of Chemistry in the Department of Chemistry, was honored for the synthesis and characterization of semiconductor quantum dots and their applications in displays, photovoltaics, and biology.
Charles Harvey, a professor in the Department of Civil and Environmental Engineering, was honored for contributions to hydrogeology regarding groundwater arsenic contamination, transport, and consequences.
Piotr Indyk, the Thomas D. and Virginia W. Cabot Professor in the Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory, was honored for contributions to approximate nearest neighbor search, streaming, and sketching algorithms for massive data processing.
John Henry Lienhard, the Abdul Latif Jameel Professor of Water and Mechanical Engineering in the Department of Mechanical Engineering, was honored for advances and technological innovations in desalination.
Ram Sasisekharan, the Alfred H. Caspary Professor of Biological Physics and Physics in the Department of Biological Engineering, was honored for discovering the U.S. heparin contaminant in 2008 and creating clinical antibodies for Zika, dengue, SARS-CoV-2, and other diseases.
Frances Ross, the TDK Professor in the Department of Materials Science and Engineering, was honored for ultra-high vacuum and liquid-cell transmission electron microscopies and their worldwide adoptions for materials research and semiconductor technology development.
Zoltán Sandor Spakovszky SM ’99, PhD ’01, the T. Wilson (1953) Professor in Aeronautics in the Department of Aeronautics and Astronautics, was honored for contributions, through rigorous discoveries and advancements, in aeroengine aerodynamic and aerostructural stability and acoustics.
“Each of the MIT faculty and alumni elected to the National Academy of Engineering has made extraordinary contributions to their fields through research, education, and innovation,” says Paula T. Hammond, dean of the School of Engineering and Institute Professor in the Department of Chemical Engineering. “They represent the breadth of excellence we have here at MIT. This honor reflects the impact of their work, and I’m proud to celebrate their achievement and offer my warmest congratulations.”
Twelve additional alumni were elected to the National Academy of Engineering this year. They are: Anne Hammons Aunins PhD ’91; Lars James Blackmore PhD ’07; John-Paul Clarke ’91, SM ’92, SCD ’97; Michael Fardis SM ’77, SM ’78, PhD ’79; David Hays PhD ’98; Stephen Thomas Kent ’76, EE ’78, ENG ’78, PhD ’81; Randal D. Koster SM ’85, SCD ’88; Fred Mannering PhD ’83; Peyman Milanfar SM ’91, EE ’93, ENG ’93, PhD ’93; Amnon Shashua PhD ’93; Michael Paul Thien SCD ’88; and Terry A. Winograd PhD ’70.
AI algorithm enables tracking of vital white matter pathways
Opening a new window on the brainstem, a new tool reliably and finely resolves distinct nerve bundles in live diffusion MRI scans, revealing signs of injury or disease.
The signals that drive many of the brain and body’s most essential functions — consciousness, sleep, breathing, heart rate, and motion — course through bundles of “white matter” fibers in the brainstem, but imaging systems so far have been unable to finely resolve these crucial neural cables. That has left researchers and doctors with little capability to assess how they are affected by trauma or neurodegeneration.
In a new study, a team of MIT, Harvard University, and Massachusetts General Hospital researchers unveil AI-powered software capable of automatically segmenting eight distinct bundles in any diffusion MRI sequence.
In the open-access study, published Feb. 6 in the Proceedings of the National Academy of Sciences, the research team led by MIT graduate student Mark Olchanyi reports that their BrainStem Bundle Tool (BSBT), which they’ve made publicly available, revealed distinct patterns of structural changes in patients with Parkinson’s disease, multiple sclerosis, and traumatic brain injury, and shed light on Alzheimer’s disease as well. Moreover, the study shows, BSBT retrospectively enabled tracking of bundle healing in a coma patient that reflected the patient’s seven-month road to recovery.
“The brainstem is a region of the brain that is essentially not explored because it is tough to image,” says Olchanyi, a doctoral candidate in MIT’s Medical Engineering and Medical Physics Program. “People don't really understand its makeup from an imaging perspective. We need to understand what the organization of the white matter is in humans and how this organization breaks down in certain disorders.”
Adds Professor Emery N. Brown, Olchanyi’s thesis supervisor and co-senior author of the study: “The brainstem is one of the body’s most important control centers. Mark’s algorithms are a significant contribution to imaging research and to our ability to understand the regulation of fundamental physiology. By enhancing our capacity to image the brainstem, he offers us new access to vital physiological functions such as control of the respiratory and cardiovascular systems, temperature regulation, how we stay awake during the day, and how we sleep at night.”
Brown is the Edward Hood Taplin Professor of Computational Neuroscience and Medical Engineering in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences at MIT. He is also an anesthesiologist at MGH and a professor at Harvard Medical School.
Building the algorithm
Diffusion MRI helps trace the long branches, or “axons,” that neurons extend to communicate with each other. Axons are typically clad in a sheath of fat called myelin, and water diffuses along the axons within the myelin, which is also called the brain’s “white matter.” Diffusion MRI can highlight this very directed displacement of water. But segmenting the distinct bundles of axons in the brainstem has proved challenging, because they are small and masked by flows of brain fluids and the motions produced by breathing and heartbeats.
As part of his thesis work to better understand the neural mechanisms that underpin consciousness, Olchanyi wanted to develop an AI algorithm to overcome these obstacles. BSBT works by tracing fiber bundles that plunge into the brainstem from neighboring areas higher in the brain, such as the thalamus and the cerebellum, to produce a “probabilistic fiber map.” An artificial intelligence module called a “convolutional neural network” then combines the map with several channels of imaging information from within the brainstem to distinguish eight individual bundles.
To train the neural network to segment the bundles, Olchanyi “showed” it 30 live diffusion MRI scans from volunteers in the Human Connectome Project (HCP). The scans were manually annotated to teach the neural network how to identify the bundles. Then he validated BSBT by testing its output against “ground truth” dissections of post-mortem human brains where the bundles were well delineated via microscopic inspection or very slow but ultra-high-resolution imaging. After training, BSBT became proficient in automatically identifying the eight distinct fiber bundles in new scans.
In an experiment to test its consistency and reliability, Olchanyi tasked BSBT with finding the bundles in 40 volunteers who underwent separate scans two months apart. In each case, the tool was able to find the same bundles in the same patients in each of their two scans. Olchanyi also tested BSBT with multiple datasets (not just the HCP), and even inspected how each component of the neural network contributed to BSBT’s analysis by hobbling them one by one.
“We put the neural network through the wringer,” Olchanyi says. “We wanted to make sure that it’s actually doing these plausible segmentations and it is leveraging each of its individual components in a way that improves the accuracy.”
Potential novel biomarkers
Once the algorithm was properly trained and validated, the research team moved on to testing whether the ability to segment distinct fiber bundles in diffusion MRI scans could enable tracking of how each bundle’s volume and structure varied with disease or injury, creating a novel kind of biomarker. Although the brainstem has been difficult to examine in detail, many studies show that neurodegenerative diseases affect the brainstem, often early on in their progression.
Olchanyi, Brown, and their co-authors applied BSBT to scores of datasets of diffusion MRI scans from patients with Alzheimer’s, Parkinson’s, MS, and traumatic brain injury (TBI). Patients were compared to controls and sometimes to themselves over time. In the scans, the tool measured bundle volume and “fractional anisotropy” (FA), which tracks how much water is flowing along the myelinated axons versus how much is diffusing in other directions, a proxy for white matter structural integrity.
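Fractional anisotropy has a standard definition in diffusion MRI: it is computed from the three eigenvalues of the diffusion tensor at each voxel, giving 0 for perfectly isotropic diffusion and values approaching 1 when water moves almost entirely along one axis, as inside a coherent fiber bundle. The sketch below implements that textbook formula; it is not code from BSBT itself.

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    # l1, l2, l3: eigenvalues of the diffusion tensor at one voxel.
    lam = np.array([l1, l2, l3], dtype=float)
    mean = lam.mean()
    num = np.sqrt(((lam - mean) ** 2).sum())   # spread of the eigenvalues
    den = np.sqrt((lam ** 2).sum())            # overall diffusion magnitude
    return np.sqrt(1.5) * num / den

fa_iso = fractional_anisotropy(1.0, 1.0, 1.0)   # no preferred direction
fa_fiber = fractional_anisotropy(1.7, 0.2, 0.2) # strongly directional
print(fa_iso, fa_fiber)  # 0.0 and roughly 0.87
```

A drop in FA within a bundle, as seen in the Parkinson’s and MS patients, thus indicates that water is no longer channeled as tightly along the axons.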
In each condition, the tool found consistent patterns of changes in the bundles. While only one bundle showed significant decline in Alzheimer’s, in Parkinson’s the tool revealed a reduction in FA in three of the eight bundles. It also revealed volume loss in another bundle in patients between a baseline scan and a two-year follow-up. Patients with MS showed their greatest FA reductions in four bundles and volume loss in three. Meanwhile, TBI patients didn’t show significant volume loss in any bundles, but FA reductions were apparent in the majority of bundles.
Testing in the study showed that BSBT proved more accurate than other classifier methods in discriminating between patients with health conditions versus controls.
BSBT, therefore, can be “a key adjunct that aids current diagnostic imaging methods by providing a fine-grained assessment of brainstem white matter structure and, in some cases, longitudinal information,” the authors wrote.
Finally, in the case of a 29-year-old man who suffered a severe TBI, Olchanyi applied BSBT to scans taken during the man’s seven-month coma. The tool showed that the man’s brainstem bundles had been displaced, but not cut, and showed that over his coma, the lesions on the nerve bundles decreased by a factor of three in volume. As they healed, the bundles moved back into place as well.
The authors wrote that BSBT “has substantial prognostic potential by identifying preserved brainstem bundles that can facilitate coma recovery.”
The study’s other senior authors are Juan Eugenio Iglesias and Brian Edlow. Other co-authors are David Schreier, Jian Li, Chiara Maffei, Annabel Sorby-Adams, Hannah Kinney, Brian Healy, Holly Freeman, Jared Shless, Christophe Destrieux, and Hendry Tregidgo.
Funding for the study came from the National Institutes of Health, U.S. Department of Defense, James S. McDonnell Foundation, Rappaport Foundation, American SIDS Institute, American Brain Foundation, American Academy of Neurology, Center for Integration of Medicine and Innovative Technology, Blueprint for Neuroscience Research, and Massachusetts Life Sciences Center.
Some early life forms may have breathed oxygen well before it filled the atmosphere
A new study suggests aerobic respiration began hundreds of millions of years earlier than previously thought.
Oxygen is a vital and constant presence on Earth today. But that hasn’t always been the case. It wasn’t until around 2.3 billion years ago that oxygen became a permanent fixture in the atmosphere, during a pivotal period known as the Great Oxidation Event (GOE), which set the evolutionary course for oxygen-breathing life as we know it today.
A new study by MIT researchers suggests some early forms of life may have evolved the ability to use oxygen hundreds of millions of years before the GOE. The findings may represent some of the earliest evidence of aerobic respiration on Earth.
In a study appearing today in the journal Palaeogeography, Palaeoclimatology, Palaeoecology, MIT geobiologists traced the evolutionary origins of a key enzyme that enables organisms to use oxygen. The enzyme is found in the vast majority of aerobic, oxygen-breathing life forms today. The team discovered that this enzyme evolved during the Mesoarchean — a geological period that predates the Great Oxidation Event by hundreds of millions of years.
The team’s results may help to explain a longstanding puzzle in Earth’s history: Why did it take so long for oxygen to build up in the atmosphere?
The very first producers of oxygen on the planet were cyanobacteria — microbes that evolved the ability to use sunlight and water to photosynthesize, releasing oxygen as a byproduct. Scientists have determined that cyanobacteria emerged around 2.9 billion years ago. The microbes, then, were presumably churning out oxygen for hundreds of millions of years before the Great Oxidation Event. So, where did all of cyanobacteria’s early oxygen go?
Scientists suspect that rocks may have drawn down a large portion of oxygen early on, through various geochemical reactions. The MIT team’s new study now suggests that biology may have also played a role.
The researchers found that some organisms may have evolved the enzyme to use oxygen hundreds of millions of years before the Great Oxidation Event. This enzyme may have enabled the organisms living near cyanobacteria to gobble up any small amounts of oxygen that the microbes produced, in turn delaying oxygen’s accumulation in the atmosphere for hundreds of millions of years.
“This does dramatically change the story of aerobic respiration,” says study co-author Fatima Husain, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Our study adds to this very recently emerging story that life may have used oxygen much earlier than previously thought. It shows us how incredibly innovative life is at all periods in Earth’s history.”
The study’s other co-authors include Gregory Fournier, associate professor of geobiology at MIT, along with Haitao Shang and Stilianos Louca of the University of Oregon.
First respirers
The new study adds to a long line of work at MIT aiming to piece together oxygen’s history on Earth. This body of research has helped to pin down the timing of the Great Oxidation Event as well as the first evidence of oxygen-producing cyanobacteria. The overall understanding that has emerged is that oxygen was first produced by cyanobacteria around 2.9 billion years ago, while the Great Oxidation Event — when oxygen finally accumulated enough to persist in the atmosphere — took place much later, around 2.33 billion years ago.
For Husain and her colleagues, this apparent delay between oxygen’s first production and its eventual persistence inspired a question.
“We know that the microorganisms that produce oxygen were around well before the Great Oxidation Event,” Husain says. “So it was natural to ask, was there any life around at that time that could have been capable of using that oxygen for aerobic respiration?”
If there were in fact some life forms that were using oxygen, even in small amounts, they might have played a role in keeping oxygen from building up in the atmosphere, at least for a while.
To investigate this possibility, the MIT team looked to heme-copper oxygen reductases, which are a set of enzymes that are essential for aerobic respiration. The enzymes act to reduce oxygen to water, and they are found in the majority of aerobic, oxygen-breathing organisms today, from bacteria to humans.
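The reduction these enzymes carry out can be written as the familiar four-electron reaction of aerobic respiration (standard biochemistry, stated here for context rather than taken from the paper):

```latex
% Net reaction catalyzed at the heme-copper active site:
% molecular oxygen is reduced to water using electrons and protons.
\[
  \mathrm{O_2} \;+\; 4\,\mathrm{H^+} \;+\; 4\,e^- \;\longrightarrow\; 2\,\mathrm{H_2O}
\]
```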
“We targeted the core of this enzyme for our analyses because that’s where the reaction with oxygen is actually taking place,” Husain explains.
Tree dates
The team aimed to trace the enzyme’s evolution backward in time to see when the enzyme first emerged to enable organisms to use oxygen. They first identified the enzyme’s genetic sequence and then used an automated search tool to look for this same sequence in databases containing the genomes of millions of different species of organisms.
“The hardest part of this work was that we had too much data,” Fournier says. “This enzyme is just everywhere and is present in most modern living organisms. So we had to sample and filter the data down to a dataset that was representative of the diversity of modern life and also small enough to do computation with, which is not trivial.”
The team ultimately isolated the enzyme’s sequence from several thousand modern species and mapped these sequences onto an evolutionary tree of life, based on what scientists know about when each respective species likely evolved and branched off. They then looked through this tree for specific species that might offer related information about their origins.
If, for instance, there is a fossil record for a particular organism on the tree, that record would include an estimate of when that organism appeared on Earth. The team would use that fossil’s age to “pin” a date to that organism on the tree. In a similar way, they could place pins across the tree to effectively tighten their estimates for when in time the enzyme evolved from one species to the next.
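The pinning idea can be sketched with the simplest possible molecular clock: if branch lengths are measured in substitutions per site and one node is pinned to a fossil age, a single constant rate converts every other node's distance into an age. All the numbers below are toy values for illustration; the actual study relies on far more sophisticated models than a strict clock.

```python
# Toy strict-clock dating. Distances are cumulative branch length from the
# root, in substitutions per site; node names and values are invented.
dist_from_root = {"root": 0.00, "nodeA": 0.12, "nodeB": 0.30, "speciesX": 0.45}

# A fossil "pins" nodeB at 2.9 billion years ago (Ga); tips are the present.
pinned_node, pinned_age_ga = "nodeB", 2.9
tip_dist = dist_from_root["speciesX"]

# Under a strict clock, substitutions accumulate at one constant rate, so a
# node's age is proportional to the distance remaining down to the tips.
rate = (tip_dist - dist_from_root[pinned_node]) / pinned_age_ga

ages = {node: (tip_dist - d) / rate for node, d in dist_from_root.items()}
print(ages["root"])  # about 8.7 Ga with these toy numbers -- not a real estimate
```

With several pins spread across the tree, such estimates tighten on each other, which is how the team bracketed the enzyme's origin in the Mesoarchean.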
In the end, the researchers were able to trace the enzyme as far back as the Mesoarchean — a geological era that lasted from 3.2 to 2.8 billion years ago. It’s around this time that the team suspects the enzyme — and organisms’ ability to use oxygen — first emerged. This period predates the Great Oxidation Event by several hundred million years.
The new findings suggest that, shortly after cyanobacteria evolved the ability to produce oxygen, other living things evolved the enzyme to use that oxygen. Any such organism that happened to live near cyanobacteria would have been able to quickly take up the oxygen that the bacteria churned out. These early aerobic organisms may have then played some role in preventing oxygen from escaping to the atmosphere, delaying its accumulation for hundreds of millions of years.
“Considered all together, MIT research has filled in the gaps in our knowledge of how Earth’s oxygenation proceeded,” Husain says. “The puzzle pieces are fitting together and really underscore how life was able to diversify and live in this new, oxygenated world.”
This research was supported, in part, by the Research Corporation for Science Advancement Scialog program.
A satellite language network in the brain
Researchers find a component of the brain’s dedicated language network in the cerebellum, a region better known for coordinating movement.
The ability to use language to communicate is one of the things that makes us human. At MIT’s McGovern Institute for Brain Research, scientists led by Evelina Fedorenko have defined an entire network of areas within the brain dedicated to this ability, which work together when we speak, listen, read, write, or sign.
Much of the language network lies within the brain’s neocortex, where many of our most sophisticated cognitive functions are carried out. Now, Fedorenko’s lab, which is part of MIT's Department of Brain and Cognitive Sciences, has identified language-processing regions within the cerebellum, extending the language network to a part of the brain better known for helping to coordinate the body’s movements. Their findings are reported Jan. 21 in the journal Neuron.
“It’s like there’s this region in the cerebellum that we’ve been forgetting about for a long time,” says Colton Casto, a graduate student at Harvard and MIT who works in Fedorenko’s lab. “If you’re a language researcher, you should be paying attention to the cerebellum.”
Imaging the language network
There have been hints that the cerebellum makes important contributions to language. Some functional imaging studies detected activity in this area during language use, and people who suffer damage to the cerebellum sometimes experience language impairments. But no one had been able to pin down exactly which parts of the cerebellum were involved, or tease out their roles in language processing.
To get some answers, Fedorenko’s lab took a systematic approach, using methods they have used to map the language network in the neocortex. For 15 years, the lab has captured functional brain imaging data as volunteers carried out various tasks inside an MRI scanner. By monitoring brain activity as people engaged in different kinds of language tasks, like reading sentences or listening to spoken words, as well as non-linguistic tasks, like listening to noise or memorizing spatial patterns, the team has been able to identify parts of the brain that are exclusively dedicated to language processing.
Their work shows that everyone’s language network uses the same neocortical regions. The precise anatomical location of these regions varies, however, so to study the language network in any individual, Fedorenko and her team must map that person’s network inside an MRI scanner using their language-localizer tasks.
Satellite language network
While the Fedorenko lab has largely focused on how the neocortex contributes to language processing, their brain scans also capture activity in the cerebellum. So Casto revisited those scans, analyzing cerebellar activity from more than 800 people to look for regions involved in language processing. Fedorenko points out that teasing out the individual anatomy of the language network turned out to be particularly vital in the cerebellum, where neurons are densely packed and areas with different functional specializations sit very close to one another. Ultimately, Casto was able to identify four cerebellar areas that consistently became involved during language use.
Three of these regions were clearly involved in language use, but also reliably became engaged during certain kinds of non-linguistic tasks. Casto says this was a surprise, because all the core language areas in the neocortex are dedicated exclusively to language processing. The researchers speculate that the cerebellum may be integrating information from different parts of the cortex — a function that could be important for many cognitive tasks.
“We’ve found that language is distinct from many, many other things — but at some point, complex cognition requires everything to work together,” Fedorenko says. “How do these different kinds of information get connected? Maybe parts of the cerebellum serve that function.”
The researchers also found a spot in the right posterior cerebellum with activity patterns that more closely echoed those of the language network in the neocortex. This region stayed silent during non-linguistic tasks, but became active during language use. For all of the linguistic activities that Casto analyzed, this region exhibited patterns of activity that were very similar to what the lab has seen in neocortical components of the language network. “Its contribution to language seems pretty similar,” Casto says. The team describes this area as a “cerebellar satellite” of the language network.
Still, the researchers think it’s unlikely that neurons in the cerebellum, which are organized very differently than those in the neocortex, replicate the precise function of other parts of the language network. Fedorenko’s team plans to explore the function of this satellite region more deeply, investigating whether it may participate in different kinds of tasks.
The researchers are also exploring the possibility that the cerebellum is particularly important for language learning — playing an outsized role during development, or when people learn languages later in life.
Fedorenko says the discovery may also have implications for treating language impairments caused when an injury or disease damages the brain’s neocortical language network. “This area may provide a very interesting potential target to help recovery from aphasia,” Fedorenko says.
Currently, researchers are exploring the possibility that non-invasively stimulating language-associated parts of the brain might promote language recovery. “This right cerebellar region may be just the right thing to potentially stimulate to up-regulate some of that function that’s lost,” Fedorenko says.
Terahertz microscope reveals the motion of superconducting electrons

For the first time, the new scope allowed physicists to observe terahertz “jiggles” in a superconducting fluid.

You can tell a lot about a material based on the type of light you shine at it: Optical light illuminates a material’s surface, while X-rays reveal its internal structures and infrared captures a material’s radiating heat.
Now, MIT physicists have used terahertz light to reveal inherent, quantum vibrations in a superconducting material, which have not been observable until now.
Terahertz light is a form of energy that lies between microwaves and infrared radiation on the electromagnetic spectrum. It oscillates over a trillion times per second — just the right pace to match how atoms and electrons naturally vibrate inside materials. Ideally, this makes terahertz light the perfect tool to probe these motions.
But while the frequency is right, the wavelength — the distance over which the wave repeats in space — is not. Terahertz waves have wavelengths hundreds of microns long. Because the smallest spot that any kind of light can be focused into is limited by its wavelength, terahertz beams cannot be tightly confined. As a result, a focused terahertz beam is physically too large to interact effectively with microscopic samples, simply washing over these tiny structures without revealing fine detail.
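The scale mismatch is easy to quantify: a wave’s wavelength is the speed of light divided by its frequency, so even at 1 THz the diffraction-limited spot is hundreds of microns across. A minimal sketch (illustrative numbers only, not from the study):

```python
# Wavelength of a terahertz wave: lambda = c / f.
# The diffraction limit means light cannot be focused into a spot
# much smaller than roughly one wavelength.
c = 299_792_458               # speed of light, m/s
f = 1e12                      # 1 THz, oscillations per second
wavelength_um = c / f * 1e6   # convert meters to microns
print(f"{wavelength_um:.0f} microns")  # about 300 microns
```

A 10-micron sample, as in von Hoegen’s example below, is therefore roughly 30 times smaller than the tightest conventional terahertz focus.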
In a paper appearing today in the journal Nature, the scientists report that they have developed a new terahertz microscope that compresses terahertz light down to microscopic dimensions. This pinpoint of terahertz light can resolve quantum details in materials that were previously inaccessible.
The team used the new microscope to send terahertz light into a sample of bismuth strontium calcium copper oxide, or BSCCO (pronounced “BIS-co”) — a material that superconducts at relatively high temperatures. With the terahertz scope, the team observed a frictionless “superfluid” of superconducting electrons that were collectively jiggling back and forth at terahertz frequencies within the BSCCO material.
“This new microscope now allows us to see a new mode of superconducting electrons that nobody has ever seen before,” says Nuh Gedik, the Donner Professor of Physics at MIT.
By using terahertz light to probe BSCCO and other superconductors, scientists can gain a better understanding of properties that could lead to long-coveted room-temperature superconductors. The new microscope can also help to identify materials that emit and receive terahertz radiation. Such materials could be the foundation of future wireless, terahertz-based communications that could potentially transmit more data at faster rates than today’s microwave-based communications.
“There’s a huge push to take Wi-Fi or telecommunications to the next level, to terahertz frequencies,” says Alexander von Hoegen, a postdoc in MIT’s Materials Research Laboratory and lead author of the study. “If you have a terahertz microscope, you could study how terahertz light interacts with microscopically small devices that could serve as future antennas or receivers.”
In addition to Gedik and von Hoegen, the study’s MIT co-authors include Tommy Tai, Clifford Allington, Matthew Yeung, Jacob Pettine, Alexander Kossak, Byunghun Lee, and Geoffrey Beach, along with collaborators at Harvard University, the Max Planck Institute for the Structure and Dynamics of Matter, the Max Planck Institute for the Physics of Complex Systems and the Brookhaven National Lab.
Hitting a limit
Terahertz light is a promising yet largely untapped imaging tool. It occupies a unique spectral “sweet spot”: Like microwaves, radio waves, and visible light, terahertz radiation is nonionizing and therefore does not carry enough energy to cause harmful radiation effects, making it safe for use in humans and biological tissues. At the same time, much like X-rays, terahertz waves can penetrate a wide range of materials, including fabric, wood, cardboard, plastic, ceramics, and even thin brick walls.
Owing to these distinctive properties, terahertz light is being actively explored for applications in security screening, medical imaging, and wireless communications. In contrast, far less effort has been devoted to applying terahertz radiation to microscopy and the illumination of microscopic phenomena. The primary reason is a fundamental limitation shared by all forms of light: the diffraction limit, which restricts spatial resolution to roughly the wavelength of the radiation used.
With wavelengths on the order of hundreds of microns, terahertz radiation is far larger than atoms, molecules, and many other microscopic structures. As a result, its ability to directly resolve microscale features is fundamentally constrained.
“Our main motivation is this problem that, you might have a 10-micron sample, but your terahertz light has a 100-micron wavelength, so what you would mostly be measuring is air, or the vacuum around your sample,” von Hoegen explains. “You would be missing all these quantum phases that have characteristic fingerprints in the terahertz regime.”
Zooming in
The team found a way around the terahertz diffraction limit by using spintronic emitters — a recent technology that produces sharp pulses of terahertz light. Spintronic emitters are made from multiple ultrathin metallic layers. When a laser illuminates the multilayered structure, the light triggers a cascade of effects in the electrons within each layer, such that the structure ultimately emits a pulse of energy at terahertz frequencies.
By holding a sample close to the emitter, the team trapped the terahertz light before it had a chance to spread, essentially squeezing it into a space much smaller than its wavelength. In this regime, the light can bypass the diffraction limit to resolve features that were previously too small to see.
The MIT team adapted this technology to observe microscopic, quantum-scale phenomena. For their new study, the team developed a terahertz microscope using spintronic emitters interfaced with a Bragg mirror. This multilayered structure of reflective films successively filters out certain, undesired wavelengths of light while letting through others, protecting the sample from the “harmful” laser which triggers the terahertz emission.
As a demonstration, the team used the new microscope to image a small, atomically thin sample of BSCCO. They placed the sample very close to the terahertz source and imaged it at temperatures close to absolute zero — cold enough for the material to become a superconductor. To create the image, they scanned the laser beam, sending terahertz light through the sample and looking for the specific signatures left by the superconducting electrons.
“We see the terahertz field gets dramatically distorted, with little oscillations following the main pulse,” von Hoegen says. “That tells us that something in the sample is emitting terahertz light, after it got kicked by our initial terahertz pulse.”
With further analysis, the team concluded that the terahertz microscope was observing the natural, collective terahertz oscillations of superconducting electrons within the material.
“It’s this superconducting gel that we’re sort of seeing jiggle,” von Hoegen says.
This jiggling superfluid was expected, but never directly visualized until now. The team is now applying the microscope to other two-dimensional materials, where they hope to capture more terahertz phenomena.
“There are a lot of the fundamental excitations, like lattice vibrations and magnetic processes, and all these collective modes that happen at terahertz frequencies,” von Hoegen says. “We can now resonantly zoom in on these interesting physics with our terahertz microscope.”
This research was supported, in part, by the MIT Research Laboratory of Electronics, the U.S. Department of Energy, and the Gordon and Betty Moore Foundation. Fabrication was carried out with the use of MIT.nano.
Katie Spivakovsky wins 2026 Churchill Scholarship

The MIT senior will pursue a master’s degree at Cambridge University in the U.K. this fall.

MIT senior Katie Spivakovsky has been selected as a 2026-27 Churchill Scholar and will undertake an MPhil in biological sciences at the Wellcome Sanger Institute at Cambridge University in the U.K. this fall.
Spivakovsky, who is double-majoring in biological engineering and artificial intelligence, with minors in mathematics and biology, aims to integrate computation and bioengineering in an academic research career focused on developing robust, scalable solutions that promote equitable health outcomes.
At MIT’s Bathe BioNanoLab, Spivakovsky investigates therapeutic applications of DNA origami (DNA-scaffolded nanoparticles for gene and mRNA delivery) and has co-authored a manuscript in press at Science. She leads the development of an immune therapy for cancer cachexia with a team supported by MIT’s BioMakerSpace; this work earned a silver medal at the international synthetic biology competition iGEM and was published in the MIT Undergraduate Research Journal. Previously, she worked on Merck’s Modeling & Informatics team, characterizing a cancer-associated protein mutation, and at the New York Structural Biology Center, where she improved cryogenic electron microscopy particle detection models.
On campus, Spivakovsky serves as director of the Undergraduate Initiative in the MIT Biotech Group. She is deeply committed to teaching and mentoring, and has served as a lecturer and co-director for class 6.S095 (Probability Problem Solving), a teaching assistant for classes 20.309 (Bioinstrumentation) and 20.A06 (Hands-on Making in Biological Engineering), a lab assistant for 6.300 (Signal Processing), and as an associate advisor.
“Katie is a brilliant researcher who has a keen intellectual curiosity that will make her a leader in biological engineering in the future. We are proud that she will be representing MIT at Cambridge University,” says Kim Benard, associate dean of distinguished fellowships.
The Churchill Scholarship is a highly competitive fellowship that annually offers 16 American students the opportunity to pursue a funded graduate degree in science, mathematics, or engineering at Churchill College within Cambridge University. The scholarship, established in 1963, honors former British Prime Minister Winston Churchill’s vision for U.S.-U.K. scientific exchange. Since 2017, two Kanders Churchill Scholarships have also been awarded each year for studies in science policy.
MIT students interested in learning more about the Churchill Scholarship should contact Kim Benard in MIT Career Advising and Professional Development.
How a unique class of neurons may set the table for brain development

Somatostatin-expressing neurons follow a unique trajectory when forming connections in the visual cortex that may help establish the conditions needed for sensory experience to refine circuits.

The way the brain develops can shape us throughout our lives, so neuroscientists are intensely curious about how it happens. A new study by researchers in The Picower Institute for Learning and Memory at MIT that focused on visual cortex development in mice reveals that an important class of neurons follows a set of rules that, while surprising, might just create the right conditions for circuit optimization.
During early brain development, multiple types of neurons emerge in the visual cortex (where the brain processes vision). Many are “excitatory,” driving the activity of brain circuits, and others are “inhibitory,” meaning they control that activity. Just like a car needs not only an engine and a gas pedal, but also a steering wheel and brakes, a healthy balance between excitation and inhibition is required for proper brain function. During a “critical period” of development in the visual cortex, soon after the eyes first open, excitatory and inhibitory neurons forge and edit millions of connections, or synapses, to adapt nascent circuits to the incoming flood of visual experience. Over many days, in other words, the brain optimizes its attunement to the world.
In the new study in The Journal of Neuroscience, a team led by MIT research scientist Josiah Boivin and Professor Elly Nedivi visually tracked somatostatin (SST)-expressing inhibitory neurons forging synapses with excitatory cells along their sprawling dendrite branches, illustrating the action before, during, and after the critical period with unprecedented resolution. Several of the rules the SST cells appeared to follow were unexpected — for instance, unlike other cell types, their activity did not depend on visual input — but now that the scientists know these neurons’ unique trajectory, they have a new idea about how it may enable sensory activity to influence development: SST cells might help usher in the critical period by establishing the baseline level of inhibition needed to ensure that only certain types of sensory input will trigger circuit refinement.
“Why would you need part of the circuit that’s not really sensitive to experience? It could be that it’s setting things up for the experience-dependent components to do their thing,” says Nedivi, the William R. and Linda R. Young Professor in the Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences.
Boivin adds: “We don’t yet know whether SST neurons play a causal role in the opening of the critical period, but they are certainly in the right place at the right time to sculpt cortical circuitry at a crucial developmental stage.”
A unique trajectory
To visualize SST-to-excitatory synapse development, Nedivi and Boivin’s team used a genetic technique that pairs expression of synaptic proteins with fluorescent molecules to resolve the appearance of the “boutons” SST cells use to reach out to excitatory neurons. They then performed a technique called eMAP, developed by Kwanghun Chung’s lab in the Picower Institute, that expands and clears brain tissue to increase magnification, allowing super-resolution visualization of the actual synapses those boutons ultimately formed with excitatory cells along their dendrites. Co-author and postdoc Bettina Schmerl helped lead the eMAP work.
These new techniques revealed that SST bouton appearance and then synapse formation surged dramatically when the eyes opened, and then as the critical period got underway. But while excitatory neurons during this time frame are still maturing, first in the deepest layers of the cortex and later in its more superficial layers, the SST boutons blanketed all layers simultaneously, meaning that, perhaps counterintuitively, they sought to establish their inhibitory influence regardless of the maturation stage of their intended partners.
Many studies have shown that eye opening and the onset of visual experience set in motion the development and elaboration of excitatory cells and another major inhibitory neuron type (parvalbumin-expressing cells). Raising mice in the dark for different lengths of time, for instance, can distinctly alter what happens with these cells. Not so for the SST neurons. The new study showed that varying lengths of darkness had no effect on the trajectory of SST bouton and synapse appearance; it remained invariant, suggesting it is preordained by a genetic program or an age-related molecular signal, rather than by experience.
Moreover, after the initial frenzy of synapse formation during development, many synapses are then edited, or pruned away, so that only the ones needed for appropriate sensory responses endure. Again, the SST boutons and synapses proved to be exempt from these redactions. Although the pace of new SST synapse formation slowed at the peak of the critical period, the net number of synapses never declined, and even continued increasing into adulthood.
“While a lot of people think that the only difference between inhibition and excitation is their valence, this demonstrates that inhibition works by a totally different set of rules,” Nedivi says.
In all, while other cell types were tailoring their synaptic populations to incoming experience, the SST neurons appeared to provide an early but steady inhibitory influence across all layers of the cortex. After excitatory synapses have been pruned back by the time of adulthood, the continued upward trickle of SST inhibition may contribute to the increase in the inhibition to excitation ratio that still allows the adult brain to learn, but not as dramatically or as flexibly as during early childhood.
A platform for future studies
In addition to shedding light on typical brain development, Nedivi says, the study’s techniques can enable side-by-side comparisons in mouse models of neurodevelopmental disorders such as autism or epilepsy, where aberrations of excitation and inhibition balance are implicated.
Future studies using the techniques can also look at how different cell types connect with each other in brain regions other than the visual cortex, she adds.
Boivin, who will soon open his own lab as a faculty member at Amherst College, says he is eager to apply the work in new ways.
“I’m excited to continue investigating inhibitory synapse formation on genetically defined cell types in my future lab,” Boivin says. “I plan to focus on the development of limbic brain regions that regulate behaviors relevant to adolescent mental health.”
In addition to Nedivi, Boivin and Schmerl, the paper’s other authors are Kendyll Martin and Chia-Fang Lee.
Funding for the study came from the National Institutes of Health, the Office of Naval Research, and the Freedom Together Foundation.
Q&A: A simpler way to understand syntax

A new book by Professor Ted Gibson brings together his years of teaching and research to detail the rules of how words combine.

For decades, MIT Professor Ted Gibson has taught the meaning of language to first-year graduate students in the Department of Brain and Cognitive Sciences (BCS). A new book, Gibson’s first, brings together his years of teaching and research to detail the rules of how words combine.
“Syntax: A Cognitive Approach,” released by MIT Press on Dec. 16, lays out the grammar of a language from the perspective of a cognitive scientist, outlining the components of language structure and the model of syntax that Gibson advocates: dependency grammar.
It was his research collaborator and wife, associate professor of BCS and McGovern Institute for Brain Research investigator Ev Fedorenko, who encouraged him to put pen to paper. Here, Gibson takes some time to discuss the book.
Q: Where did the process for “Syntax” begin?
A: I think it started with my teaching. Course 9.012 (Cognitive Science), which I teach with Josh Tenenbaum and Pawan Sinha, divides language into three components: sound, structure, and meaning. I work on the structure and meaning parts of language: words and how they get put together. That’s called syntax.
I’ve spent a lot of time over the last 30 years trying to understand the compositional rules of syntax, and even though there are many grammar rules in any language, I actually don’t think the form for grammar rules is that complicated. I’ve taught it in a very simple way for many years, but I’ve never written it all down in one place. My wife, Ev, is a longtime collaborator, and she suggested I write a paper. It turned into a book.
Q: How do you like to explain syntax?
A: For any sentence, for any utterance in any human language, there’s always going to be a word that serves as the head of that sentence, and every other word will somehow depend on that headword, maybe as an immediate dependent, or further away, through some other dependent words. This is called dependency grammar; it means there’s a root word in each sentence, and dependents of that root, on down, for all the words in the sentence, form a simple tree structure. I have cognitive reasons to suggest that this model is correct, but it isn’t my model; it was first proposed in the 1950s. I adopted it because it aligns with human cognitive phenomena.
That very simple framework gives you the following observation: that longer-distance connections between words are harder to produce and understand than shorter-distance ones. This is because of limitations in human memory. The closer the words are together, the easier it is for me to produce them in a sentence, and the easier it is for you to understand them. If they’re far apart, then it’s a complicated memory problem to produce and understand them.
This gives rise to a cool observation: Languages optimize their rules in order to keep the words close together. We can have very different orders of the same elements across languages, such as the difference in word orders for English versus Japanese, where the order of the words in the English sentence “Mary eats an apple” is “Mary apple eats” in Japanese. But then the ordering rules in English and Japanese are aligned within themselves in order to minimize dependency lengths on average for the language.
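The “keep words close together” idea can be made concrete by summing the distance between each word and its head. The toy sketch below (my own illustration, not from the book) computes that total for the English sentence “Mary eats an apple,” where “eats” is the root, “Mary” and “apple” depend on “eats,” and “an” depends on “apple”:

```python
def total_dependency_length(heads):
    # heads[i] is the index of word i's head, or None for the root;
    # each dependency costs the distance between the two words.
    return sum(abs(i - h) for i, h in enumerate(heads) if h is not None)

# Word order: Mary(0) eats(1) an(2) apple(3)
english = [1, None, 3, 1]
print(total_dependency_length(english))  # 1 + 1 + 2 = 4
```

On this measure, word orders that place dependents near their heads score lower, which is the sense in which languages are hypothesized to minimize dependency lengths on average.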
Q: How does the book challenge some longstanding ideas in the field of linguistics?
A: In 1957, a book called “Syntactic Structures” by Noam Chomsky was published. It is a wonderful book that provides mathematical approaches to describe what human language is. It is very influential in the field of linguistics, and for good reason.
One of the key components of the theory that Chomsky proposed was the “transformation,” such that words and phrases can move from a deep structure to the structure that we produce. He thought it was self-evident from examples in English that transformations must be part of a human language. But then this concept of transformations eventually led him to conclude that grammar is unlearnable, that it has to be built into the human mind.
In my view of grammar, there are no transformations. Instead, there are just two different versions of some words, or they can be underspecified for their grammar usage. The different usages may be related in meaning, and they can point to a similar meaning, but they have different dependency structures.
I think the advent of large language models suggests that language is learnable and that syntax isn’t as complicated as we used to think it was, because LLMs are successful at producing language. A large language model is almost the same as an adult speaker of a language in what it can produce. There are subtle ways in which they differ, but on the surface, they look the same in many ways, which suggests that these models do very well with learning language, even with human-like quantities of data.
I get pushback from some people who say, well, researchers can still use transformations to account for some phenomena. My reaction is: Unless you can show me that transformations are necessary, then I don’t think we need them.
Q: This book is open access. Why did you decide to publish it that way?
A: I am all for free knowledge for everyone. I am one of the editors of “Open Mind,” a journal established several years ago that is completely free and open access. I felt my book should be the same way, and MIT Press is a fantastic university press that is nonprofit and supportive of open-access publishing. It means I make less money, but it also means it can reach more people. For me, it is really about trying to get the information out there. I want more people to read it, to learn things. I think that’s how science is supposed to be.
MIT researchers have designed silicon structures that can perform calculations in an electronic device using excess heat instead of electricity. These tiny structures could someday enable more energy-efficient computation.
In this computing method, input data are encoded as a set of temperatures using the waste heat already present in a device. The flow and distribution of heat through a specially designed material forms the basis of the calculation. The output is then represented by the power collected at the other end, which is held at a fixed temperature.
The researchers used these structures to perform matrix-vector multiplication with more than 99 percent accuracy. Matrix-vector multiplication is the fundamental mathematical operation that machine-learning models like LLMs use to process information and make predictions.
While the researchers still have to overcome many challenges to scale up this computing method for modern deep-learning models, the technique could be applied to detect heat sources and measure temperature changes in electronics without consuming extra energy. This would also eliminate the need for multiple temperature sensors that take up space on a chip.
“Most of the time, when you are performing computations in an electronic device, heat is the waste product. You often want to get rid of as much heat as you can. But here, we’ve taken the opposite approach by using heat as a form of information itself and showing that computing with heat is possible,” says Caio Silva, an undergraduate student in the Department of Physics and lead author of a paper on the new computing paradigm.
Silva is joined on the paper by senior author Giuseppe Romano, a research scientist at MIT’s Institute for Soldier Nanotechnologies. The research appears today in Physical Review Applied.
Turning up the heat
This work was enabled by a software system the researchers previously developed that allows them to automatically design a material that can conduct heat in a specific manner.
Using a technique called inverse design, this system flips the traditional engineering approach on its head. The researchers define the functionality they want first, then the system uses powerful algorithms to iteratively design the best geometry for the task.
They used this system to design complex silicon structures, each roughly the same size as a dust particle, that can perform computations using heat conduction. This is a form of analog computing, in which data are encoded and signals are processed using continuous values, rather than digital bits that are either 0s or 1s.
The researchers feed their software system the specifications of a matrix of numbers that represents a particular calculation. Using a grid, the system designs a set of rectangular silicon structures filled with tiny pores. The system continually adjusts each pixel in the grid until it arrives at the desired mathematical function.
Heat diffuses through the silicon in a way that performs the matrix multiplication, with the geometry of the structure encoding the coefficients.
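Because steady-state heat conduction obeys a linear equation, the heat arriving at an output terminal is automatically a linear function of the input temperatures, which is exactly what a matrix-vector product is. A minimal one-dimensional sketch of this principle (my own toy model, not the MIT geometry):

```python
import numpy as np

# Toy model: a 1-D chain of unit thermal conductances with a hot
# input boundary and a cold output boundary held at temperature 0.
n = 5
# Discrete Laplacian governing the chain's interior nodes.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def heat_into_output(t_in):
    b = np.zeros(n)
    b[0] = t_in                 # input data encoded as a temperature
    T = np.linalg.solve(A, b)   # steady-state temperature profile
    return T[-1]                # heat flowing into the cold terminal

# Linearity: doubling the input temperature doubles the collected
# power, so a network with many terminals realizes a matrix-vector
# product whose coefficients are set by the geometry.
print(heat_into_output(6.0))
```

In the actual devices, the pore pattern of each silicon structure plays the role that the chain’s conductances play here, setting the coefficients of the linear map.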

“These structures are far too complicated for us to come up with just through our own intuition. We need to teach a computer to design them for us. That is what makes inverse design a very powerful technique,” Romano says.
But the researchers ran into a problem: the laws of heat conduction dictate that heat flows from hot to cold regions, so these structures can only encode positive coefficients.
They overcame this problem by splitting the target matrix into its positive and negative components and representing them with separately optimized silicon structures that encode positive entries. Subtracting the outputs at a later stage allows them to compute negative matrix values.
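The decomposition described above can be checked with a few lines of linear algebra. This is a minimal NumPy sketch of the idea, not the thermal solver itself: any matrix splits into nonnegative parts, each realizable by a structure with only positive coefficients, and subtracting the two outputs recovers the signed product.

```python
import numpy as np

def split_nonnegative(M):
    """Split M into nonnegative parts with M = M_pos - M_neg."""
    M_pos = np.maximum(M, 0.0)    # structure 1: the positive entries
    M_neg = np.maximum(-M, 0.0)   # structure 2: magnitudes of the negative entries
    return M_pos, M_neg

M = np.array([[1.0, -2.0],
              [-0.5, 3.0]])
x = np.array([0.4, 0.6])

M_pos, M_neg = split_nonnegative(M)
# Each physical structure computes a product with nonnegative coefficients;
# subtracting the two outputs yields the full signed result.
y = M_pos @ x - M_neg @ x
# y equals M @ x, i.e. [-0.8, 1.6]
```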
They can also tune the thickness of the structures, which allows them to realize a greater variety of matrices. Thicker structures have greater heat conduction.
“Finding the right topology for a given matrix is challenging. We beat this problem by developing an optimization algorithm that ensures the topology being developed is as close as possible to the desired matrix without having any weird parts,” Silva explains.
Microelectronic applications
The researchers used simulations to test the structures on simple matrices with two or three columns. While small, these matrices are relevant for important applications, such as sensor fusion and diagnostics in microelectronics.
The structures performed computations with more than 99 percent accuracy in many cases.
However, there is still a long way to go before this technique could be used for large-scale applications such as deep learning, since millions of structures would need to be tiled together. As the matrices become more complicated, the structures become less accurate, especially when there is a large distance between the input and output terminals. In addition, the devices have limited bandwidth, which would need to be greatly expanded if they were to be used for deep learning.
But because the structures rely on excess heat, they could be directly applied for tasks like thermal management, as well as heat source or temperature gradient detection in microelectronics.
“This information is critical. Temperature gradients can cause thermal expansion and damage a circuit or even cause an entire device to fail. If we have a localized heat source where we don’t want a heat source, it means we have a problem. We could directly detect such heat sources with these structures, and we can just plug them in without needing any digital components,” Romano says.
Building on this proof-of-concept, the researchers want to design structures that can perform sequential operations, where the output of one structure becomes an input for the next. This is how machine-learning models perform computations. They also plan to develop programmable structures, enabling them to encode different matrices without starting from scratch with a new structure each time.
Study: The infant universe’s “primordial soup” was actually soupy

MIT physicists observed the first clear evidence that quarks create a wake as they speed through quark-gluon plasma, confirming the plasma behaves like a liquid.

In its first moments, the infant universe was a trillion-degree-hot soup of quarks and gluons. These elementary particles zinged around at light speed, creating a “quark-gluon plasma” that lasted for only a few millionths of a second. The primordial goo then quickly cooled, and its individual quarks and gluons fused to form the protons, neutrons, and other fundamental particles that exist today.
Physicists at CERN’s Large Hadron Collider in Switzerland are recreating quark-gluon plasma (QGP) to better understand the universe’s starting ingredients. By smashing together heavy ions at close to light speeds, scientists can briefly dislodge quarks and gluons to create and study the same material that existed during the first microseconds of the early universe.
Now, a team at CERN led by MIT physicists has observed clear signs that quarks create wakes as they speed through the plasma, similar to a duck trailing ripples through water. The findings are the first direct evidence that quark-gluon plasma reacts to speeding particles as a single fluid, sloshing and splashing in response, rather than scattering randomly like individual particles.
“It has been a long debate in our field, on whether the plasma should respond to a quark,” says Yen-Jie Lee, professor of physics at MIT. “Now we see the plasma is incredibly dense, such that it is able to slow down a quark, and produces splashes and swirls like a liquid. So quark-gluon plasma really is a primordial soup.”
To see a quark’s wake effects, Lee and his colleagues developed a new technique that they report in the study. They plan to apply the approach to more particle-collision data to zero in on other quark wakes. Measuring the size, speed, and extent of these wakes, and how long it takes for them to ebb and dissipate, can give scientists an idea of the properties of the plasma itself, and how quark-gluon plasma might have behaved in the universe’s first microseconds.
“Studying how quark wakes bounce back and forth will give us new insights on the quark-gluon plasma’s properties,” Lee says. “With this experiment, we are taking a snapshot of this primordial quark soup.”
The study’s co-authors are members of the CMS Collaboration — a team of particle physicists from around the world who work together to carry out and analyze data from the Compact Muon Solenoid (CMS) experiment, which is one of the general-purpose particle detectors at CERN’s Large Hadron Collider. The CMS experiment was used to detect signs of quark wake effects for this study. The open-access study appears in the journal Physics Letters B.
Quark shadows
Quark-gluon plasma is the first liquid to have ever existed in the universe. It is also the hottest liquid ever, as scientists estimate that during its brief existence, the QGP was around a few trillion degrees Celsius. This boiling stew is also thought to have been a near-“perfect” liquid, meaning that the individual quarks and gluons in the plasma flowed together as a smooth, frictionless fluid.
This picture of the QGP is based on many independent experiments and theoretical models. One such model, derived by Krishna Rajagopal, the William A. M. Burden Professor of Physics at MIT, and his collaborators, predicts that the quark-gluon plasma should respond like a fluid to any particles speeding through it. His theory, known as the hybrid model, suggests that when a jet of quarks is zinging through the QGP, it should produce a wake behind it, inducing the plasma to ripple and splash in response.
Physicists have looked for such wake effects in experiments at the Large Hadron Collider and other high-energy particle accelerators. These experiments accelerate heavy ions such as lead to close to the speed of light, at which point they collide and produce a short-lived droplet of primordial soup, typically lasting for less than a quadrillionth of a second. Scientists essentially take a snapshot of the moment to try to identify characteristics of the QGP.
To identify quark wakes, physicists have looked for pairs of quarks and “antiquarks” — particles that are identical to their quark counterparts, except that certain properties are equal in magnitude but opposite in sign. For instance, when a quark is speeding through plasma, there is likely an antiquark that is traveling at exactly the same speed, but in the opposite direction.
For this reason, physicists have looked for quark/antiquark pairs in the QGP produced in heavy-ion collisions, assuming that the particles might produce identical, detectable wakes through the plasma.
“When you have two quarks produced, the problem is that, when the two quarks go in opposite directions, the one quark overshadows the wake of the second quark,” Lee says.
He and his colleagues realized that looking for the wake of the first quark would be easier if there were no second quark obscuring its effects.
“We have figured out a new technique that allows us to see the effects of a single quark in the QGP, through a different pair of particles,” Lee says.
A wake tag
Rather than search for pairs of quarks and antiquarks in the aftermath of lead ion collisions, Lee’s team instead looked for events with only one quark moving through the plasma, essentially back-to-back with a “Z boson.” A Z boson is an electrically neutral elementary particle that mediates the weak force and has virtually no effect on the surrounding environment. However, because Z bosons exist at a very specific energy, they are relatively straightforward to detect.
“In this soup of quark-gluon plasma, there are numerous quarks and gluons passing by and colliding with each other,” Lee explains. “Sometimes when we are lucky, one of these collisions creates a Z boson and a quark, with high momentum.”
In such a collision, the two particles should hit each other and fly off in exact opposite directions. While the quark could leave a wake, the Z boson should have no effect on the surrounding plasma. Whatever ripples are observed in the droplet of primordial soup would have been made entirely by the single quark zipping through it.
The team, in collaboration with Professor Yi Chen’s group at Vanderbilt University, reasoned that they could use Z bosons as a “tag” to locate and trace the wake effects of single quarks. For their new study, the researchers looked through data from the Large Hadron Collider’s heavy-ion collision experiments, and from 13 billion collisions they identified about 2,000 events that produced a Z boson. For each of these events, they mapped the energies throughout the short-lived quark-gluon plasma and consistently observed a fluid-like pattern of splashes and swirls (a wake effect) in the opposite direction of the Z boson, which the team could directly attribute to a single quark zooming through the plasma.
What’s more, the physicists found that the wake effects they observed in the data were consistent with what Rajagopal’s hybrid model predicts. In other words, quark-gluon plasma does in fact flow and ripple like a fluid when particles speed through it.
“This is something that many of us have argued must be there for a good many years, and that many experiments have looked for,” says Rajagopal, who was not directly involved with the new study.
“What Yen-Jie and CMS have done is to devise and execute a measurement that has brought them and us the first clean, clear, unambiguous evidence for this foundational phenomenon,” says Daniel Pablos, professor of physics at Oviedo University in Spain and a collaborator of Rajagopal’s who was not involved in the current study.
“We’ve gained the first direct evidence that the quark indeed drags more plasma with it as it travels,” Lee adds. “This will enable us to study the properties and behavior of this exotic fluid in unprecedented detail.”
This work was supported, in part, by the U.S. Department of Energy.
Cancer’s secret safety net

Researchers uncover a hidden mechanism that allows cancer to develop aggressive mutations.

Researchers in Class of 1942 Professor of Chemistry Matthew D. Shoulders’ lab have uncovered a sinister hidden mechanism that can allow cancer cells to survive (and, in some cases, thrive) even when hit with powerful drugs. The secret lies in a cellular “safety net” that gives cancer the freedom to develop aggressive mutations.
This fascinating intersection between molecular biology and evolutionary dynamics, published Jan. 22 on the cover of Molecular Cell, focuses on the most famous anti-cancer gene in the human body, TP53 (tumor protein 53, known as p53), and suggests that cancer cells don’t just mutate by accident — they create a specialized environment that makes dangerous mutations possible.
The guardian under attack
Tasked with stopping damaged cells from dividing, the p53 protein has been known for decades as the “guardian of the genome,” and its gene is the most frequently mutated in cancer. Some of the most perilous of these mutations are known as “dominant-negative” variants: not only do the mutant proteins stop working, they actually prevent any healthy p53 in the cell from doing its job, essentially disarming the body’s primary defense system.
To function, p53 and most other proteins must fold into specific 3D shapes, much like precise cellular origami. Typically, if a mutation occurs that ruins this shape, the protein becomes a tangled mess, and the cell destroys it.
A specialized network of proteins called cellular chaperones helps other proteins fold into their correct shapes; this machinery is collectively known as the proteostasis network.
“Many chaperone networks are known to be upregulated in cancer cells, for reasons that are not totally clear,” says Stephanie Halim, a graduate student in the Shoulders Group and co-first author of the study, along with Rebecca Sebastian PhD ’22. “We hypothesized that increasing the activities of these helpful protein folding networks can allow cancer cells to tolerate more mutations than a regular cell.”
A master regulator called Heat Shock Factor 1 (HSF1) controls the composition of the proteostasis network, upregulating it to create a supportive protein-folding environment in response to stress. In healthy cells, HSF1 stays dormant until heat or toxins appear. In cancer, HSF1 is often permanently in action mode.
To see how this works in real-time, the team created a specialized cancer cell line that let them chemically “turn up” the activity of HSF1 on demand. They then used a cutting-edge technique to express every possible singly mutated version of a p53 protein — testing thousands of different genetic “typos” at once.
The results were clear: When HSF1 was amplified, the cancer cells became much better at handling “bad” mutations. Normally, these specific mutations are so physically disruptive that they would cause the protein to collapse and fail. However, with HSF1 providing extra folding help, these unstable, cancer-driving proteins were able to stay intact and keep the cancer growing.
“These findings show that chaperone networks can reshape the fundamental mutational tolerance of the most mutated gene in cancer, linking proteostasis network activity directly to cancer development,” said Halim. “This work also puts us one step closer to understanding how tinkering with cellular protein folding pathways can help with cancer treatment.”
Unravelling cancer’s safety net
The study revealed that HSF1 activity specifically protects normally disruptive amino acid substitutions located deep inside the protein’s core — the most sensitive areas. Without this extra folding help, these substitutions would likely cause degradation of these proteins. With it, the cancer cell can keep these broken proteins around to help it grow.
This discovery helps explain why cancer is so resilient, and why previous attempts to treat cancer by blocking chaperone proteins (like HSP90, an abundant cellular chaperone) have been so complex. By understanding how cancer “buffers” its own bad mutations, doctors may one day be able to break that safety net, forcing the cancer’s own mutations to become its downfall.
The research was conducted in collaboration with the labs of professors Yu-Shan Lin of Tufts University; Francisco J. Sánchez-Rivera of the MIT Department of Biology; William C. Hahn, institute member of the Broad Institute of MIT and Harvard and professor of medicine in the Department of Medical Oncology at the Dana-Farber Cancer Institute and Harvard Medical School; and Marc L. Mendillo of Northwestern University.
Richard Hynes, a pioneer in the biology of cellular adhesion, dies at 81

Professor, mentor, and leader at MIT for more than 50 years shaped fundamental understandings of cell adhesion, the extracellular matrix, and molecular mechanisms of metastasis.

MIT Professor Emeritus Richard O. Hynes PhD ’71, a cancer biologist whose discoveries reshaped modern understandings of how cells interact with each other and their environment, passed away on Jan. 6. He was 81.
Hynes is best known for his discovery of integrins, a family of cell-surface receptors essential to cell–cell and cell–matrix adhesion. He played a critical role in establishing the field of cell adhesion biology, and his continuing research revealed mechanisms central to embryonic development, tissue integrity, and diseases including cancer, fibrosis, thrombosis, and immune disorders.
Hynes was the Daniel K. Ludwig Professor for Cancer Research, Emeritus, an emeritus professor of biology, and a member of the Koch Institute for Integrative Cancer Research at MIT and the Broad Institute of MIT and Harvard. During his more than 50 years on the faculty at MIT, he was deeply respected for his academic leadership at the Institute and internationally, as well as for his intellectual rigor and contributions as an educator and mentor.
“Richard had an enormous impact in his career. He was a visionary leader of the MIT Cancer Center, what is now the Koch Institute, during a time when the progress in understanding cancer was just starting to be translated into new therapies,” reflects Matthew Vander Heiden, director of the Koch Institute and the Lester Wolfe (1919) Professor of Molecular Biology. “The research from his laboratory launched an entirely new field by defining the molecules that mediate interactions between cells and between cells and their environment. This laid the groundwork for better understanding the immune system and metastasis.”
Pond skipper
Born in Kenya, Hynes grew up during the 1950s in Liverpool, in the United Kingdom. While he sometimes recounted stories of being schoolmates with two of the Beatles, and in the same Boy Scouts troop as Paul McCartney, his academic interests were quite different, and he specialized in the sciences at a young age. Both of his parents were scientists: His father was a freshwater ecologist, and his mother a physics teacher. Hynes and all three of his siblings followed their parents into scientific fields.
"We talked science at home, and if we asked questions, we got questions back, not answers. So that conditioned me into being a scientist, for sure," Hynes said of his youth.
He described his time as an undergraduate and master’s student at Cambridge University during the 1960s as “just fantastic,” noting that it was shortly after two 1962 Nobel Prizes were awarded to Cambridge researchers — one to Francis Crick and James Watson for the structure of DNA, the other to John Kendrew and Max Perutz for the structures of proteins — and Cambridge was “the place to be” to study biology.
Newly married, Hynes and his wife traded Cambridge, U.K. for Cambridge, Massachusetts, so that he could conduct doctoral work at MIT under the direction of Paul Gross. He tried (and by his own assessment, failed) to differentiate maternal messages among the three germ layers of sea urchin embryos. However, he did make early successful attempts to isolate the globular protein tubulin, a building block for essential cellular structures, from sea urchins.
Inspired by a course he had taken with Watson in the United States, Hynes began work during his postdoc at the Institute of Cancer Research in the U.K. on the early steps of oncogenic transformation and the role of cell migration and adhesion; it was here that he made his earliest discovery and characterizations of the fibronectin protein.
Recruited back to MIT by Salvador Luria, founding director of the MIT Center for Cancer Research, whom he had met during a summer at Woods Hole Oceanographic Institution on Cape Cod, Hynes returned to the Institute in 1975 as a founding faculty member of the center and an assistant professor in the Department of Biology.
Big questions about tiny cells
To his own research, Hynes brought the same spirit of inquiry that had characterized his upbringing, asking fundamental questions: How do cells interact with each other? How do they stick together to form tissues?
His research focused on proteins that allow cells to adhere to each other and to the extracellular matrix — a mesh-like network that surrounds cells, providing structural support, as well as biochemical and mechanical cues from the local microenvironment. These proteins include integrins, a type of cell surface receptor, and fibronectins, a family of extracellular adhesive proteins. Integrins are the major adhesion receptors connecting the extracellular matrix to the intracellular cytoskeleton, or main architectural support within the cell.
Hynes began his career as a developmental biologist, studying how cells move to the correct locations during embryonic development, a stage at which proper modulation of cell adhesion is critical.
Hynes’ work also revealed that dysregulation of cell-to-matrix contact plays an important role in cancer cells’ ability to detach from a tumor and spread to other parts of the body, key steps in metastasis.
As a postdoc, Hynes had begun studying the differences in the surface landscapes of healthy cells and tumor cells. It was this work that led to the discovery of fibronectin, which is often lost when cells become cancerous.
He and others found that fibronectin is an important part of the extracellular matrix. When fibronectin is lost, cancer cells can more easily free themselves from their original location and metastasize to other sites in the body. By studying how fibronectin normally interacts with cells, Hynes and others discovered a family of cell surface receptors known as integrins, which function as important physical links with the extracellular matrix. In humans, 24 integrin proteins have been identified. These proteins help give tissues their structure, enable blood to clot, and are essential for embryonic development.
“Richard’s discoveries, along with others’, of cell surface integrins led to the development of a number of life-altering treatments. Among these are treatment of autoimmune diseases such as multiple sclerosis,” notes longtime colleague Phillip Sharp, MIT Institute professor emeritus.
As research technologies advanced, including proteomic and extracellular matrix isolation methods developed directly in Hynes’ laboratory, he and his group were able to uncover increasingly detailed information about specific cell adhesion proteins, the biological mechanisms by which they operate, and the roles they play in normal biology and disease.
In cancer, their work helped to uncover how cell adhesion (and the loss thereof) and the extracellular matrix contribute not only to fundamental early steps in the metastatic process, but also tumor progression, therapeutic response, and patient prognosis. This included studies that mapped matrix protein signatures associated with cancer and non-cancer cells and tissues, followed by investigations into how differentially expressed matrix proteins can promote or suppress cancer progression.
Hynes and his colleagues also demonstrated how extracellular matrix composition can influence immunotherapy, such as the importance of a family of cell adhesion proteins called selectins for recruiting natural killer cells to tumors. Further, Hynes revealed links between fibronectin, integrins, and other matrix proteins with tumor angiogenesis, or blood vessel development, and also showed how interaction with platelets can stimulate tumor cells to remodel the extracellular matrix to support invasion and metastasis. In pursuing these insights into the oncogenic mechanisms of matrix proteins, Hynes and members of his laboratory have identified useful diagnostic and prognostic biomarkers, as well as therapeutic targets.
Along the way, Hynes shaped not only the research field, but also the careers of generations of trainees.
“There was much to emulate in Richard’s gentle, patient, and generous approach to mentorship. He centered the goals and interests of his trainees, fostered an inclusive and intellectually rigorous environment, and cared deeply about the well-being of his lab members. Richard was a role model for integrity in both personal and professional interactions and set high expectations for intellectual excellence,” recalls Noor Jailkhani, a former Hynes Lab postdoc.
Jailkhani is CEO and co-founder, with Hynes, of Matrisome Bio, a biotech company developing first-in-class targeted therapies for cancer and fibrosis by leveraging the extracellular matrix. “The impact of his long and distinguished scientific career was magnified through the generations of trainees he mentored, whose influence spans academia and the biotechnology industry worldwide. I believe that his dedication to mentorship stands among his most far-reaching and enduring contributions,” she says.
A guiding light
Widely sought for his guidance, Hynes served in a number of key roles at MIT and in the broader scientific community. As head of MIT’s Department of Biology from 1989 to 1991, and then for a decade as director of the MIT Center for Cancer Research, he helped shape the Institute’s programs in both areas.
“Words can’t capture what a fabulous human being Richard was. I left every interaction with him with new insights and the warm glow that comes from a good conversation,” says Amy Keating, the Jay A. Stein (1968) Professor, professor of biology and biological engineering, and head of the Department of Biology. “Richard was happy to share stories, perspectives, and advice, always with a twinkle in his eye that conveyed his infinite interest in and delight with science, scientists, and life itself. The calm support that he offered me, during my years as department head, meant a lot and helped me do my job with confidence.”
Hynes served as director of the MIT Center for Cancer Research from 1991 until 2001, positioning the center’s distinguished cancer biology program for expansion into its current, interdisciplinary research model as MIT’s Koch Institute for Integrative Cancer Research. “He recruited and strongly supported Tyler Jacks to the faculty, who subsequently became director and headed efforts to establish the Koch Institute,” recalls Sharp.
Jacks, a David H. Koch (1962) Professor of Biology and founding director of the Koch Institute, remembers Hynes as a thoughtful, caring, and highly effective leader in the Center for Cancer Research, or CCR, and in the Department of Biology. “I was fortunate to be able to lean on him when I took over as CCR director. He encouraged me to drop in — unannounced — with questions and concerns, which I did regularly. I learned a great deal from Richard, at every level,” he says.
Hynes’ leadership and recognition extended well beyond MIT to national and international contexts, helping to shape policy and strengthen connections between MIT researchers and the wider field. He served as a scientific governor of the Wellcome Trust, a global health research and advocacy foundation based in the United Kingdom, and co-chaired U.S. National Academy committees establishing guidelines for stem cell and genome editing research.
“Richard was an esteemed scientist, a stimulating colleague, a beloved mentor, a role model, and to me a partner in many endeavors both within and beyond MIT,” notes H. Robert Horvitz, a David H. Koch (1962) Professor of Biology. “He was a wonderful human being, and a good friend. I am sad beyond words at his passing.”
Named a Howard Hughes Medical Institute investigator in 1988, Hynes was subsequently recognized with a number of other notable honors for his research and leadership. Most recently, he received the 2022 Albert Lasker Basic Medical Research Award, which he shared with Erkki Ruoslahti of Sanford Burnham Prebys and Timothy Springer of Harvard University, for his discovery of integrins and pioneering work in cell adhesion.
His other awards include the Canada Gairdner International Award, the Distinguished Investigator Award from the International Society for Matrix Biology, the Robert and Claire Pasarow Medical Research Award, the E.B. Wilson Medal from the American Society for Cell Biology, the David Rall Medal from the National Academy of Medicine, and the Paget-Ewing Award from the Metastasis Research Society. Hynes was a member of the National Academy of Sciences, the National Academy of Medicine, the Royal Society of London, the American Association for the Advancement of Science, and the American Academy of Arts and Sciences.
Easily recognized by a commanding stature that belied his soft-spoken nature, Hynes was known around MIT’s campus not only for his acuity, integrity, and wise counsel, but also for his community spirit and service. From serving food at community socials to moderating events and meetings or recognizing the success of colleagues and trainees, his willingness to help spanned roles of every size.
“Richard was a phenomenal friend and colleague. He approached complex problems with a thoughtfulness and clarity that few can achieve,” notes Vander Heiden. “He was also so generous in his willingness to provide help and advice, and did so with a genuine kindness that was appreciated by everyone.”
Hynes is survived by his wife Fleur, their sons Hugh and Colin and their partners, and four grandchildren.
Biology-based brain model matches animals in learning, enables new discovery

New “biomimetic” model of brain circuits and function at multiple scales produced naturalistic dynamics and learning, and even identified curious behavior by some neurons.

A new computational model of the brain based closely on its biology and physiology not only learned a simple visual category learning task exactly as well as lab animals, but even enabled the discovery of counterintuitive activity by a group of neurons that researchers working with animals on the same task had not previously noticed in their data, says a team of scientists at Dartmouth College, MIT, and the State University of New York at Stony Brook.
Notably, the model produced these achievements without ever being trained on any data from animal experiments. Instead, it was built from scratch to faithfully represent how neurons connect into circuits and then communicate electrically and chemically across broader brain regions to produce cognition and behavior. Then, when the research team asked the model to perform the same task that they had previously performed with the animals (looking at patterns of dots and deciding which of two broader categories they fit), it produced highly similar neural activity and behavioral results, acquiring the skill with almost exactly the same erratic progress.
“It’s just producing new simulated plots of brain activity that then only afterward are being compared to the lab animals. The fact that they match up as strikingly as they do is kind of shocking,” says Richard Granger, a professor of psychological and brain sciences at Dartmouth and senior author of a new study in Nature Communications that describes the model.
A goal in making the model, and newer iterations developed since the paper was written, is not only to offer insight into how the brain works, but also how it might work differently in disease and what interventions could correct those aberrations, adds co-author Earl K. Miller, Picower Professor in The Picower Institute for Learning and Memory at MIT. Miller, Granger, and other members of the research team have founded the company Neuroblox.ai to develop the models’ biotech applications. Co-author Lilianne R. Mujica-Parodi, a biomedical engineering professor at Stony Brook who is lead principal investigator for the Neuroblox Project, is CEO of the company.
“The idea is to make a platform for biomimetic modeling of the brain so you can have a more efficient way of discovering, developing, and improving neurotherapeutics. Drug development and efficacy testing, for example, can happen earlier in the process, on our platform, before the risk and expense of clinical trials,” says Miller, who is also a faculty member of MIT’s Department of Brain and Cognitive Sciences.
Making a biomimetic model
Dartmouth postdoc Anand Pathak created the model, which differs from many others in that it incorporates both small details, such as how individual pairs of neurons connect with each other, and large-scale architecture, including how information processing across regions is affected by neuromodulatory chemicals such as acetylcholine. Pathak and the team iterated their designs to ensure they obeyed various constraints observed in real brains, such as how neurons become synchronized by broader rhythms. Many other models focus only on the small or big scales, but not both, he says.
“We didn’t want to lose the tree, and we didn’t want to lose the forest,” Pathak says.
The metaphorical “trees,” called “primitives” in the study, are small circuits of a few neurons each that connect based on electrical and chemical principles of real cells to perform fundamental computational functions. For example, within the model’s version of the brain’s cortex, one primitive design has excitatory neurons that receive input from the visual system via synapse connections affected by the neurotransmitter glutamate. Those excitatory neurons then connect densely with inhibitory neurons, competing to recruit them to shut down rival excitatory neurons — a “winner-take-all” architecture found in real brains that regulates information processing.
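The competitive dynamic of such a circuit can be sketched in a few lines. The following is a generic rate-based winner-take-all with pooled inhibition, a minimal illustration of the architecture rather than the actual primitives used in the study; all parameter values are arbitrary.

```python
import numpy as np

def winner_take_all(inputs, steps=200, dt=0.1, inhibition=2.0):
    """Rate-based winner-take-all: excitatory units drive pooled
    inhibition that suppresses every unit, and the unit with the
    strongest input ends up dominating while the rest are silenced."""
    rates = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        pooled = rates.sum()
        # each unit feels inhibition from the *other* units' activity
        drive = inputs - inhibition * (pooled - rates)
        rates += dt * (np.maximum(drive, 0.0) - rates)
    return rates

rates = winner_take_all(np.array([1.0, 0.8, 0.3]))
print(rates)  # the unit with the strongest input suppresses the rest
```

With these settings the first unit converges to a high rate while the others are pushed toward zero; weakening the `inhibition` parameter would let multiple units stay active at once.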
At a larger scale, the model encompasses four brain regions needed for basic learning and memory tasks: a cortex, a brainstem, a striatum, and a “tonically active neuron” (TAN) structure that can inject a little “noise” into the system via bursts of acetylcholine. For instance, as the model engaged in the task of categorizing the presented patterns of dots, the TAN at first ensured some variability in how the model acted on the visual input so that the model could learn by exploring varied actions and their outcomes. As the model continued to learn, cortex and striatum circuits strengthened connections that suppressed the TAN, enabling the model to act on what it was learning with increasing consistency.
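The explore-then-exploit dynamic the TAN provides can be caricatured with a two-choice learner whose action noise decays as learning proceeds. This is a generic reinforcement-learning toy of our own, not the study's model; every parameter here is illustrative.

```python
import random

def learn_category(trials=500, seed=0):
    """Two-choice learner: early on, a noise term (standing in for
    TAN-driven variability) forces exploration; as learned values
    firm up and the noise is suppressed, choices become consistent."""
    rng = random.Random(seed)
    q = [0.0, 0.0]        # learned value of choosing each category
    noise = 1.0           # exploration noise, suppressed over learning
    correct = 0
    for _ in range(trials):
        choice = 0 if q[0] - q[1] + rng.gauss(0, noise) > 0 else 1
        reward = 1.0 if choice == 0 else 0.0   # category 0 is "correct"
        q[choice] += 0.2 * (reward - q[choice])
        noise = max(0.05, noise * 0.99)        # learning quiets the noise source
        correct += int(choice == 0)
    return correct / trials

print(learn_category())  # accuracy well above chance by the end of the run
```

Without the noise term the learner could lock into its initial bias; with it, early trials sample both actions, and consistency emerges only as the learned values diverge.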
As the model engaged in the learning task, real-world properties emerged, including a dynamic that Miller has commonly observed in his research with animals. As learning progressed, the cortex and striatum became more synchronized in the “beta” frequency band of brain rhythms, and this increased synchrony correlated with times when the model (and the animals) made the correct category judgement about what they were seeing.
Revealing “incongruent” neurons
But the model also presented the researchers with a group of neurons — about 20 percent — whose activity appeared highly predictive of error. When these so-called “incongruent” neurons influenced circuits, the model would make the wrong category judgement. At first, Granger says, the team figured it was a quirk of the model. But then they looked at the real-brain data Miller’s lab accumulated when animals performed the same task.
“Only then did we go back to the data we already had, sure that this couldn’t be in there because somebody would have said something about it, but it was in there, and it just had never been noticed or analyzed,” he says.
Miller says these counterintuitive cells might serve a purpose: it’s all well and good to learn the rules of a task, but what if the rules change? Trying out alternatives from time to time can enable a brain to stumble upon a newly emerging set of conditions. Indeed, a separate Picower Institute lab recently published evidence that humans and other animals do this sometimes.
While the model described in the new paper performed beyond the team’s expectations, Granger says, the team has been expanding it to make it sophisticated enough to handle a greater variety of tasks and circumstances. For instance, they have added more regions and new neuromodulatory chemicals. They’ve also begun to test how interventions such as drugs affect its dynamics.
In addition to Granger, Miller, Pathak and Mujica-Parodi, the paper’s other authors are Scott Brincat, Haris Organtzidis, Helmut Strey, Sageanne Senneff, and Evan Antzoulatos.
The Baszucki Brain Research Fund, the Office of Naval Research, and the Freedom Together Foundation provided support for the research.
Our thoughts are specified by our knowledge and plans, yet our cognition can also be fast and flexible in handling new information. How does the well-controlled and yet highly nimble nature of cognition emerge from the brain’s anatomy of billions of neurons and circuits?
A study by researchers in The Picower Institute for Learning and Memory at MIT provides new evidence from tests in animals that the answer might be found within a theory called “spatial computing.”
First proposed in 2023 by Picower Professor Earl K. Miller and colleagues Mikael Lundqvist and Pawel Herman, spatial computing theory explains how neurons in the prefrontal cortex can be organized on the fly into a functional group capable of carrying out the information processing required by a cognitive task. Moreover, it allows for neurons to participate in multiple such groups, as years of experiments have shown that many prefrontal neurons can indeed participate in multiple tasks at once.
The basic idea of the theory is that the brain recruits and organizes ad hoc “task forces” of neurons by using “alpha” and “beta” frequency brain waves (about 10-30 Hz) to apply control signals to physical patches of the prefrontal cortex. Rather than having to rewire themselves into new physical circuits every time a new task must be done, the neurons in the patch instead process information by following the patterns of excitation and inhibition imposed by the waves.
Think of the alpha and beta frequency waves as stencils that shape when and where in the prefrontal cortex groups of neurons can take in or express information from the senses, Miller says. In that way, the waves represent the rules of the task and can organize how the neurons electrically “spike” to process the information content needed for the task.
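The stencil analogy maps naturally onto a gating computation. Below is our own schematic illustration, not code from the study: a spatial map of control-signal power decides where sensory drive is allowed to produce spikes.

```python
import numpy as np

def gated_spikes(sensory_drive, control_power, threshold=0.5, seed=0):
    """Spiking is permitted only where control (alpha/beta) power is
    low; patches under high power are inhibited, so the control
    pattern acts as a stencil over the sensory content."""
    rng = np.random.default_rng(seed)
    stencil = control_power < threshold        # open where power is low
    p_spike = sensory_drive * stencil          # drive passes only through openings
    return rng.random(sensory_drive.shape) < p_spike

drive = np.full((4, 4), 0.9)                   # uniform sensory input
control = np.zeros((4, 4))
control[:, 2:] = 1.0                           # high alpha/beta power on the right half
spikes = gated_spikes(drive, control)
print(spikes.astype(int))
```

With this input, spikes appear only in the left half of the patch, where control power is below threshold, even though the sensory drive is uniform everywhere.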
“Cognition is all about large-scale neural self-organization,” says Miller, senior author of the paper in Current Biology and a faculty member in MIT’s Department of Brain and Cognitive Sciences. “Spatial computing explains how the brain does that.”
Testing five predictions
A theory is just an idea. In the study, lead author Zhen Chen and other current and former members of Miller’s lab put spatial computing to the test by examining whether five of its predictions about neural activity and brain wave patterns were borne out in measurements made in the prefrontal cortex of animals as they engaged in two working memory tasks and one categorization task. Across the tasks there were distinct pieces of sensory information to process (e.g., “A blue square appeared on the screen followed by a green triangle”) and rules to follow (e.g., “When new shapes appear on the screen, do they match the shapes I saw before and appear in the same order?”)
The first two predictions were that alpha and beta waves should represent task controls and rules, while the spiking activity of neurons should represent the sensory inputs. When the researchers analyzed the brain wave and spiking readings gathered by four electrode arrays implanted in the cortex, they found that both predictions held. Neural spikes, but not the alpha/beta waves, carried sensory information. While both spikes and the alpha/beta waves carried task information, it was strongest in the waves, and it peaked at times relevant to when rules were needed to carry out the tasks.
Notably, in the categorization task, the researchers purposely varied the level of abstraction to make categorization more or less cognitively difficult. The researchers saw that the greater the difficulty, the stronger the alpha/beta wave power was, further showing that it carries task rules.
The next two predictions were that alpha/beta would be spatially organized, and that when and where it was strong, the sensory information represented by spiking would be suppressed, but where and when it was weak, spiking would increase. These predictions also held true in the data. Under the electrodes, Chen, Miller, and the team could see distinct spatial patterns of higher or lower wave power, and where power was high, the sensory information in spiking was low, and vice versa.
Finally, if spatial computing is valid, the researchers predicted, then trial by trial, alpha/beta power and timing should accurately correlate with the animals’ performance. Sure enough, there were significant differences in the signals on trials where the animals performed the tasks correctly versus when they made mistakes. In particular, the measurements predicted mistakes due to messing up task rules versus sensory information. For instance, alpha/beta discrepancies pertained to the order in which stimuli appeared (first square then triangle) rather than the identity of the individual stimuli (square or triangle).
Compatible with findings in humans
By conducting this study with animals, the researchers were able to make direct measurements of individual neural spikes as well as brain waves, and in the paper, they note that other studies in humans report some similar findings. For instance, studies using noninvasive EEG and MEG brain wave readings show that humans use alpha oscillations to inhibit activity in task-irrelevant areas under top-down control, and that alpha oscillations appear to govern task-related activity in the prefrontal cortex.
While Miller says he finds the results of the new study, and their intersection with human studies, encouraging, he acknowledges that more evidence is still needed. For instance, his lab has shown that brain waves typically do not stand still but travel across areas of the brain, rotating much like a jump rope. Spatial computing should account for that, he says.
In addition to Chen and Miller, the paper’s other authors are Scott Brincat, Mikael Lundqvist, Roman Loonis, and Melissa Warden.
The U.S. Office of Naval Research, The Freedom Together Foundation, and The Picower Institute for Learning and Memory funded the study.
Over the years, passing spacecraft have observed mystifying weather patterns at the poles of Jupiter and Saturn. The two planets host very different types of polar vortices, which are huge atmospheric whirlpools that rotate over a planet’s polar region. On Saturn, a single massive polar vortex appears to cap the north pole in a curiously hexagonal shape, while on Jupiter, a central polar vortex is surrounded by eight smaller vortices, like a pan of swirling cinnamon rolls.
Given that both planets are similar in many ways — they are roughly the same size and made from the same gaseous elements — the stark difference in their polar weather patterns has been a longstanding mystery.
Now, MIT scientists have identified a possible explanation for how the two different systems may have evolved. Their findings could help scientists understand not only the planets’ surface weather patterns, but also what might lie beneath the clouds, deep within their interiors.
In a study appearing this week in the Proceedings of the National Academy of Sciences, the team simulates various ways in which well-organized vortex patterns may form out of random perturbations on a gas giant: a large planet, such as Jupiter or Saturn, made mostly of gaseous elements. Among a wide range of plausible planetary configurations, the team found that, in some cases, the currents coalesced into a single large vortex, similar to Saturn’s pattern, whereas other simulations produced multiple large circulations, akin to Jupiter’s vortices.
After comparing simulations, the team found that vortex patterns, and whether a planet develops one or multiple polar vortices, comes down to one main property: the “softness” of a vortex’s base, which is related to the interior composition. The scientists liken an individual vortex to a whirling cylinder spinning through a planet’s many atmospheric layers. When the base of this swirling cylinder is made of softer, lighter materials, any vortex that evolves can only grow so large. The final pattern can then allow for multiple smaller vortices, similar to those on Jupiter. In contrast, if a vortex’s base is made of harder, denser stuff, it can grow much larger and subsequently engulf other vortices to form one single, massive vortex, akin to the monster cyclone on Saturn.
“Our study shows that, depending on the interior properties and the softness of the bottom of the vortex, this will influence the kind of fluid pattern you observe at the surface,” says study author Wanying Kang, assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “I don’t think anyone’s made this connection between the surface fluid pattern and the interior properties of these planets. One possible scenario could be that Saturn has a harder bottom than Jupiter.”
The study’s first author is MIT graduate student Jiaru Shi.
Spinning up
Kang and Shi’s new work was inspired by images of Jupiter and Saturn that have been taken by the Juno and Cassini missions. NASA’s Juno spacecraft has been orbiting around Jupiter since 2016, and has captured stunning images of the planet’s north pole and its multiple swirling vortices. From these images, scientists have estimated that each of Jupiter’s vortices is immense, spanning about 3,000 miles across — almost half as wide as the Earth itself.
The Cassini spacecraft, prior to intentionally burning up in Saturn’s atmosphere in 2017, orbited the ringed planet for 13 years. Its observations of Saturn’s north pole recorded a single, hexagonal-shaped polar vortex, about 18,000 miles wide.
“People have spent a lot of time deciphering the differences between Jupiter and Saturn,” Shi says. “The planets are about the same size and are both made mostly of hydrogen and helium. It’s unclear why their polar vortices are so different.”
Shi and Kang set out to identify a physical mechanism that would explain why one planet might evolve a single vortex, while the other hosts multiple vortices. To do so, they worked with a two-dimensional model of surface fluid dynamics. While a polar vortex is three-dimensional in nature, the team reasoned that they could accurately represent vortex evolution in two dimensions, as the fast rotation of Jupiter and Saturn enforces uniform motion along the rotating axis.
“In a fast-rotating system, fluid motion tends to be uniform along the rotating axis,” Kang explains. “So, we were motivated by this idea that we can reduce a 3D dynamical problem to a 2D problem because the fluid pattern does not change in 3D. This makes the problem hundreds of times faster and cheaper to simulate and study.”
Getting to the bottom
Following this reasoning, the team developed a two-dimensional model of vortex evolution on a gas giant, based on an existing equation that describes how swirling fluid evolves over time.
“This equation has been used in many contexts, including to model midlatitude cyclones on Earth,” Kang says. “We adapted the equation to the polar regions of Jupiter and Saturn.”
The team applied their two-dimensional model to simulate how fluid would evolve over time on a gas giant under different scenarios. In each scenario, the team varied the planet’s size, its rate of rotation, its internal heating, and the softness or hardness of the rotating fluid, among other parameters. They then set a random “noise” condition, in which fluid initially flowed in random patterns across the planet’s surface. Finally, they observed how the fluid evolved over time given the scenario’s specific conditions.
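As a concrete illustration of this kind of experiment, here is a minimal pseudo-spectral solver for the two-dimensional vorticity equation, seeded with random noise. This is our own toy setup with arbitrary resolution, hyperviscosity, and time step, not the team's model; it shows only the generic tendency of 2D flows to move energy toward larger scales as small swirls merge.

```python
import numpy as np

def simulate_vorticity(n=64, steps=300, dt=0.01, nu=1e-4, seed=0):
    """Evolve dzeta/dt + u.grad(zeta) = -nu*laplacian^2(zeta) on a
    doubly periodic grid, starting from random noise, and track the
    energy-weighted mean wavenumber (smaller = larger structures)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=1.0 / n)             # integer wavenumbers
    kx, ky = k[:, None], k[None, :]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid division by zero
    damp = np.exp(-nu * k2**2 * dt)              # hyperviscous integrating factor
    zh = np.fft.fft2(rng.standard_normal((n, n)))
    zh[0, 0] = 0.0                               # zero-mean vorticity

    def centroid(zh):
        energy = np.abs(zh) ** 2 / k2            # E ~ |zeta_hat|^2 / k^2
        return (np.sqrt(k2) * energy).sum() / energy.sum()

    c0 = centroid(zh)
    for _ in range(steps):
        psih = -zh / k2                          # streamfunction from vorticity
        u = np.real(np.fft.ifft2(-1j * ky * psih))
        v = np.real(np.fft.ifft2(1j * kx * psih))
        zx = np.real(np.fft.ifft2(1j * kx * zh))
        zy = np.real(np.fft.ifft2(1j * ky * zh))
        zh = (zh - dt * np.fft.fft2(u * zx + v * zy)) * damp
    return c0, centroid(zh)

c0, c1 = simulate_vorticity()
print(f"energy centroid wavenumber: {c0:.1f} -> {c1:.1f}")
```

Over this short run the centroid wavenumber falls, meaning the random small-scale swirls coarsen into fewer, larger structures, which is the qualitative behavior the team's scenarios explore under varied planetary parameters.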
Over many different simulations, they observed that some scenarios evolved to form a single large polar vortex, like Saturn’s, whereas others formed multiple smaller vortices, like Jupiter’s. After analyzing the combinations of parameters and variables in each scenario and how they related to the final outcome, they landed on a single mechanism to explain whether one or multiple vortices evolve: As random fluid motions start to coalesce into individual vortices, the size to which a vortex can grow is limited by how soft its bottom is. The softer and lighter the gas rotating at the bottom of a vortex, the smaller the vortex ultimately remains, allowing multiple smaller-scale vortices to coexist at a planet’s pole, similar to those on Jupiter.

Conversely, the harder or denser a vortex bottom is, the larger the system can grow, to a size where eventually it can follow the planet’s curvature as a single, planetary-scale vortex, like the one on Saturn.
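One standard way to make this size limit quantitative (our own gloss in shallow-water language, not the study's actual formulation) is the Rossby deformation radius, L_d = sqrt(g' H) / f, where the reduced gravity g' measures how strongly a vortex's base resists deformation: a soft, light base means small g' and therefore a smaller maximum vortex. All numbers below are purely illustrative.

```python
import math

def deformation_radius(g_reduced, depth, f):
    """Rossby deformation radius L_d = sqrt(g' * H) / f, the classic
    shallow-water length scale that caps how large a coherent vortex
    can grow before its base gives way."""
    return math.sqrt(g_reduced * depth) / f

# Illustrative inputs: f roughly 2*Omega at a giant planet's pole, an
# assumed 100-km active layer, and a 100x contrast in reduced gravity.
f = 2 * 1.76e-4          # rad/s
H = 1.0e5                # m
soft = deformation_radius(0.005, H, f)   # soft, light base: small g'
hard = deformation_radius(0.5, H, f)     # hard, dense base: large g'
print(f"soft base: L_d ~ {soft/1e3:.0f} km; hard base: L_d ~ {hard/1e3:.0f} km")
```

Because L_d scales as the square root of g', the hundredfold contrast in base stiffness translates into a tenfold contrast in maximum vortex scale, qualitatively matching one planet hosting many small vortices and the other a single large one.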
If this mechanism is indeed what is at play on both gas giants, it would suggest that Jupiter could be made of softer, lighter material, while Saturn may harbor heavier stuff in its interior.
“What we see from the surface, the fluid pattern on Jupiter and Saturn, may tell us something about the interior, like how soft the bottom is,” Shi says. “And that is important because maybe beneath Saturn’s surface, the interior is more metal-enriched and has more condensable material which allows it to provide stronger stratification than Jupiter.”
“Because Jupiter and Saturn are otherwise so similar, their different polar weather has been a puzzle,” says Yohai Kaspi, a professor of geophysical fluid dynamics at the Weizmann Institute of Science and a member of the Juno mission’s science team, who was not involved in the new study. “The work by Shi and Kang reveals a surprising link between these differences and the planets’ deep interior ‘softness,’ offering a new way to map the key internal properties that shape their atmospheres.”
This research was supported, in part, by a Mathworks Fellowship and endowed funding from MIT’s Department of Earth, Atmospheric and Planetary Sciences.
Demystifying college for enlisted veterans and service members

For nearly a decade, the MIT Warrior-Scholar Project STEM boot camp has helped enlisted members of the military prepare for higher education.

“I went into the military right after high school, mostly because I didn’t really see the value of academics,” says Air Force veteran and MIT sophomore Justin Cole.
His perspective on education shifted, however, after he experienced several natural disasters during his nine years of service. As a satellite systems operator in Colorado, Cole volunteered in the aftermath of the 2013 Black Forest fire, the state’s most destructive fire at the time. And in 2018, while he was leading a team in Okinawa conducting signal-monitoring work on communications satellites, two Category 5 typhoons barreled through the area within 26 days.
“I realized, this climate stuff is really a prerequisite to national security objectives in almost every sense, so I knew that school was going to be the thing that would help prepare me to make a difference,” he says. In 2023, after leaving the Air Force to work for climate-focused nonprofits and take engineering courses, Cole participated in an intense, weeklong STEM boot camp at MIT. “It definitely reaffirmed that I wanted to continue down the path of at least getting a bachelor’s, and it also inspired me to apply to MIT,” he says. He transferred in 2024 and is majoring in climate system science and engineering.
“It’s a lot like the MIT experience”
MIT runs the boot camp every summer as part of the nonprofit Warrior-Scholar Project (WSP), which started at Yale University in 2012. WSP offers a range of programming designed to help enlisted veterans and service members transition from the military to higher education. The academic boot camp program, which aims to simulate a week of undergraduate life, is offered at 19 schools nationwide in three areas: business, college readiness, and STEM.
MIT joined WSP in 2017 as one of the first three campuses to offer the STEM boot camp. “It was definitely rigorous,” Cole recalls, “not getting tons of sleep, grinding psets at night with friends … it’s a lot like the MIT experience.” In addition to problem sets, every day at MIT-WSP is packed with faculty lectures on math and physics, recitations, working on research projects, and tours of MIT campus labs. Scholars also attend daily college success workshops on topics such as note taking, time management, and applying to college. The schedule is meticulously mapped out — including travel times — from 0845 to 2200, Sunday through Friday.
Michael McDonald, an associate professor of physics at the Kavli Institute for Astrophysics and Space Research, and Navy veteran Nelson Olivier MBA ’17 have run the MIT-WSP program since its inception. At the time, WSP wanted to expand its STEM boot camps to other universities, so a Yale astrophysicist colleague recruited McDonald. Meanwhile, Olivier’s former Navy SEAL Team THREE teammate — who happened to be the WSP CEO — convinced Olivier to help launch the program while he was at the MIT Sloan School of Management, along with classmate Bill Kindred MBA ’17.
Now in its 10th year, MIT-WSP has hosted over 120 scholars, 93 percent of whom have gone on to attend schools like Stanford University, Georgetown University, University of Notre Dame, Harvard University, and the University of California at Berkeley. MIT-WSP alumni who have graduated now work at employers such as Meta, PricewaterhouseCoopers, Boeing, and BAE Systems.
Translating helicopter repairs to Newton’s laws
McDonald has a lot of fun teaching WSP scholars every summer. “When I pose a question to my first-year physics class in September, no one wants to meet my eyes or raise their hand for fear of embarrassing themselves,” he says. “But I ask a question to this group of, say, 12 vets, and 12 hands shoot up, they are all answering over each other, and then asking questions to follow up on the question. They are just curious and hungry, and they couldn’t care less about how they come off. … As a professor, it’s like your dream class.”
Every year, McDonald witnesses a predictable transformation among the scholars. They start off eager enough. “By Tuesday, they are miserable, they’re pretty beaten down. But by the end of the week, they’re like, ‘I could do another week,’” he says.
Their confidence grows as they recognize that, while they may not have taken college courses, their military experience is invaluable. “It’s just a matter of convincing these guys that what they are already doing is what we are looking for. We have guys that say, ‘I don’t know if I can succeed in an engineering program,’ but then in the field, they are repairing helicopters. And I’m like, ‘Oh no, you can do this stuff!’ They just need to understand the background of why that helicopter that they are building works.”
Olivier agrees. “The enlisted veteran has a leg up because they’ve already done this before. They are just translating it from either fixing a radio or messing around with the components of a bomb to understanding Newton’s laws. That’s a thing of beauty, when you see that.”
Fostering a virtuous cycle
While just seeing themselves succeed at MIT-WSP helps instill confidence among scholars, meeting veterans who have made the leap into academia has a multiplier effect. To that end, the WSP organization provides each academic boot camp with alumni, called fellows, to teach college success workshops, provide support, and share their experiences in higher education.
“When I was at boot camp, we had two WSP fellows who were at Columbia, one at Princeton, and one who just got accepted to Harvard,” Cole recalls. “Just seeing people existing at these institutions made me realize, this is a thing that is doable.” The following summer, he became a fellow as well.
Former Marine Corps communications operator Aaron Kahler, who attended MIT-WSP in 2024, particularly recalls meeting a veteran PhD student while the group toured the neuroscience facility. “It was really cool seeing instances of successful vets doing their thing at MIT,” he says. “There were a lot more than we thought.”
Over the years, McDonald has made an effort to recruit more MIT veterans to staff the program. One of them is Andrea Henshall, a retired major in the Air Force and a PhD student in the Department of Aeronautics and Astronautics. After joining the Ask Me Anything panel a few years ago, she’s become increasingly involved, presenting lectures, mentoring participants, offering tours of the motion capture lab where she conducts experiments, and informally mentoring scholars.
“It’s so inspiring to hear so many students at the end of the week say, ‘I never considered a place like MIT until the boot camp, or until somebody told me, hey, you can be here, too.’ Or they see examples of enlisted veterans, like Justin, who’ve transitioned to a place like MIT and shown that it’s possible,” says Henshall.
At the conclusion of MIT-WSP, scholars receive a tangible reminder of what’s possible: a challenge coin designed by Olivier and McDonald. “In the military, the challenge coin usually has the emblem of the unit and symbolizes the ethos of the unit,” Olivier explains. On one side of the MIT-WSP coin are Newton’s laws of motion, superimposed over the WSP logo. MIT’s “mens et manus” (“mind and hand”) motto appears on the other side, beneath an image of the Great Dome inscribed with the scholar’s name.
“As you go into Killian Court you see all the names of Pasteur, Newton, et cetera, but Building 10 doesn’t have a name on it,” he says. “So we say, ‘earn your space there on these buildings. Do something significant that will impact the human experience.’ And that’s what we think each one of these guys and gals can do.”
Kahler keeps the coin displayed on his desk at MIT, where he’s now a first-year student, for inspiration. “I don’t think I would be here if it weren’t for the Warrior-Scholar Project,” he says.
At MIT, a continued commitment to understanding intelligence

With support from the Siegel Family Endowment, the newly renamed MIT Siegel Family Quest for Intelligence investigates how brains produce intelligence and how it can be replicated to solve problems.

The MIT Siegel Family Quest for Intelligence (SQI), a research unit in the MIT Schwarzman College of Computing, brings together researchers from across MIT who combine their diverse expertise to understand intelligence through tightly coupled scientific inquiry and rigorous engineering. These researchers engage in collaborative efforts spanning science, engineering, the humanities, and more.
SQI seeks to comprehend how brains produce intelligence and how it can be replicated in artificial systems to address real-world problems that exceed the capabilities of current artificial intelligence technologies.
“In SQI, we are studying intelligence scientifically and generically, in the hope that by studying neuroscience and behavior in humans and animals, and also studying what we can build as intelligent engineering artifacts, we'll be able to understand the fundamental underlying principles of intelligence,” says Leslie Pack Kaelbling, SQI director of research and the Panasonic Professor in the MIT Department of Electrical Engineering and Computer Science.
“We in SQI believe that understanding human intelligence is one of the greatest open questions in science — right up there with the origin of the universe and our place in it, and the origin of life. The question of human intelligence has two parts: how it works, and where it comes from. If we understand those, we will see payoffs well beyond our current imaginings," says Jim DiCarlo, SQI director and the Peter de Florez Professor of Neuroscience in the MIT Department of Brain and Cognitive Sciences.
Exploring the great mysteries of the mind
The MIT Siegel Family Quest for Intelligence was recently renamed in recognition of a major gift from the Siegel Family Endowment that is enabling further growth in SQI’s research and activities.
SQI’s efforts are organized around missions — long-term, collaborative projects rooted in foundational questions about intelligence — and supported by platforms: systems and software that enable new research and create benchmarking and testing interfaces.
“Ours is the only unit at MIT dedicated to building a scientific understanding of intelligence while working with researchers across the entire Institute,” DiCarlo says. “There has been remarkable progress in AI over the past decade, but I believe the next decade will bring even greater advances in our understanding of human intelligence — advances that will reshape what we call AI. By supporting us, David Siegel, the Siegel Family Endowment, and our other donors are demonstrating their confidence in our approach."
A legacy of interdisciplinary support
In 2011, David Siegel SM ’86, PhD ’91 founded the Siegel Family Endowment (SFE) to support organizations working at the intersections of learning, workforce, and infrastructure. SFE funds organizations addressing society’s most critical challenges while supporting innovative civic and community leaders, social entrepreneurs, researchers, and others driving this work forward. Siegel is a computer scientist, entrepreneur, and philanthropist. While in graduate school at MIT’s Artificial Intelligence Lab, he worked on robotics in the group of Tomás Lozano-Pérez — currently the School of Engineering Professor of Teaching Excellence — focusing on sensing and grasping. Later, he co-founded Two Sigma with the belief that innovative technology, AI, and data science could help uncover value in the world’s data. Today, Two Sigma drives transformation across the financial services industry in investment management, venture capital, private equity, and real estate.
Siegel explains, “The human brain may very well be the most complex physical system in the universe, yet most people haven't shown much interest in how it works. People take the mind for granted, yet wonder so much about other scientific mysteries, such as the origin of the universe. My fascination with the brain and its intersection with artificial intelligence stems from this. I don’t care whether there are commercial applications for this quest; instead, we should pursue research like that done at the MIT Siegel Family Quest for Intelligence to advance our understanding of ourselves. As we uncover more about human intelligence, I am hopeful that we will lay the groundwork not only for advancing artificial intelligence but also for extending our own thinking.”
As a long-time champion of the Center for Brains, Minds, and Machines (CBMM), a National Science Foundation-funded collaborative interdisciplinary research thrust, and one of the first donors to the MIT Quest for Intelligence, David Siegel helped lay the foundation for the research underway today. In early 2024, he founded Open Athena, a nonprofit that bridges the gap between academic research and the cutting edge of AI. Open Athena equips universities with elite AI and data engineering talent to accelerate breakthrough discoveries at scale. Siegel serves on the MIT Corporation Executive Committee, is vice-chair of the Scratch Foundation, and is a member of the Cornell Tech Council. He also sits on the boards of Re:Build Manufacturing, Khan Academy, NYC FIRST, and Carnegie Hall.
A catalyst for global collaboration
MIT President Sally Kornbluth says, “Of all the donors and supporters whose generosity fueled the Quest for Intelligence, no one has been more important from the beginning than David Siegel. Without his longstanding commitment to CBMM and his support for the Quest, this community might never have formed. There’s every reason to think that David’s recent gift, which renames the Quest for Intelligence and also supports the Schwarzman College of Computing, will be even more powerful in shaping the future of this initiative and of the field itself.” She continues, “Fueled by generous donors — particularly David Siegel’s transformative gift — SQI is poised to take on an even more important role.”
SQI scientists and engineers are presenting their work broadly, publishing papers, and developing new tools and technologies that are used in research institutions worldwide, as they engage with colleagues in disciplines across the Institute and in universities and institutions around the globe. DiCarlo explains, “We're part of the Schwarzman College of Computing, at the nexus between the people interested in biology and various forms of intelligence and the people interested in AI. We're working with partners at other universities, in nonprofits, and in industry — we can't do it alone.”
“Fundamentally, we're not an AI effort. We're a human intelligence effort using the tools of engineering,” DiCarlo says. “That gives us, among other things, very useful insights for human learning and health, but also very useful tools for AI — including AI that will just work a lot better in a human world.”
The entire SQI community of faculty, students, and staff is excited to take on new challenges in the effort to understand the fundamentals of intelligence.
New missions and next horizons
SQI research is broadening: Mission principal investigators are integrating their efforts across areas of interest, increasing their impact on the field. In the coming months, the organization plans to launch a new Social Intelligence Mission.
"We need to focus on problems that mirror natural and artificial intelligence — making sure that we are evaluating new models on tasks that mirror what humans and other natural intelligence can do,” says Nick Roy, SQI director of systems engineering and professor of aeronautics and astronautics at MIT. He predicts that SQI’s future research will rely on asking the right questions: “[While] we are good at picking tasks that test our computational models, and we're extremely good at picking tasks that kind of align with what our models can already do, we need to get better at choosing tasks and benchmarks that also elicit something about natural intelligence,” he says.
On November 24, 2025, faculty, staff, students, and supporters gathered at an event titled “The Next Horizon: Quest’s Future” to celebrate SQI’s next chapter. The event consisted of an afternoon of research updates, a panel discussion, and a poster session on new and evolving research, and was attended by David Siegel, representatives from the Siegel Family Endowment, and various members of the MIT Corporation. Recordings of the presentations from the event are available on SQI’s YouTube channel.
Chemists determine the structure of the fuzzy coat that surrounds Tau proteins
Learning more about this structure could help scientists find ways to block Tau from forming tangles in the brains of Alzheimer’s patients.
One of the hallmarks of Alzheimer’s disease is the clumping of proteins called Tau, which form tangled fibrils in the brain. The more severe the clumping, the more advanced the disease is.
The Tau protein, which has also been linked to many other neurodegenerative diseases, is unstructured in its normal state, but in the pathological state it consists of a well-ordered rigid core surrounded by floppy segments. These disordered segments form a “fuzzy coat” that helps determine how Tau interacts with other molecules.
MIT chemists have now shown, for the first time, that they can use nuclear magnetic resonance (NMR) spectroscopy to decipher the structure of this fuzzy coat. They hope their findings will aid efforts to develop drugs that interfere with Tau buildup in the brain.
“If you want to disaggregate these Tau fibrils with small-molecule drugs, then these drugs have to penetrate this fuzzy coat,” says Mei Hong, an MIT professor of chemistry and the senior author of the new study. “That would be an important future endeavor.”
MIT graduate student Jia Yi Zhang is the lead author of the paper, which appears today in the Journal of the American Chemical Society. Former MIT postdoc Aurelio Dregni is also an author of the paper.
Analyzing the fuzzy coat
In a healthy brain, Tau proteins help to stabilize microtubules, which give cells their structure. However, when Tau proteins become misfolded or otherwise altered, they form clumps that contribute to neurodegenerative diseases such as Alzheimer’s and frontotemporal dementia.
Determining the structure of the Tau tangles has been difficult because so much of the protein — about 80 percent — is found in the fuzzy coat, which tends to be highly disordered.
This fuzzy coat surrounds a rigid inner core that is made from folded protein strands known as beta sheets. Hong and her colleagues have previously analyzed the structure of the core in a particular Tau fibril using NMR, which can reveal the structures of molecules by measuring the magnetic properties of atomic nuclei within the molecules.
Until now, most researchers had overlooked Tau’s fuzzy coat and focused on the rigid core of the fibrils, because those disordered segments change their structures so often that standard structure-characterization techniques such as cryo-electron microscopy and X-ray crystallography can’t capture them.
However, in the new study, the researchers developed NMR techniques that allowed them to study the entire Tau protein. In one experiment, they were able to magnetize protons within the most rigid amino acids, then measure how long it took for the magnetization to be transferred to the mobile amino acids. This allowed them to track the magnetization as it traveled from rigid regions to floppy segments, and vice versa.
Using this approach, the researchers could estimate the proximity between the rigid and mobile segments. They complemented this experiment by measuring the different degrees of movement of the amino acids in the fuzzy coat.
“We have now developed an NMR-based technology to examine the fuzzy coat of a full-length Tau fibril, allowing us to capture both the dynamic regions and the rigid core,” Hong says.
Protein dynamics
For this particular fibril, the researchers showed that the overall structure of the Tau protein, which contains about 10 different domains, somewhat resembles a burrito, with several layers of the fuzzy coat wrapped around the rigid core.
Based on their measurements of protein dynamics, the researchers found that these segments fell into three categories. The rigid core of the fibril was surrounded by protein regions with intermediate mobility, whereas the most dynamic segments were found in the outermost layer.
The most dynamic segments of the fuzzy coat are rich in the amino acid proline. In the protein sequence, these prolines are near the amino acids that form the rigid core, and were previously thought to be partially immobilized. Instead, they are highly mobile, indicating that these positively charged proline-rich regions are repelled by the positive charges of the amino acids that form the rigid core.
This structural model gives insight into how Tau proteins form tangles in the brain, Hong says. Similar to how prions trigger healthy proteins to misfold in the brain, it is believed that misfolded Tau proteins latch onto normal Tau proteins and act as a template that induces them to adopt the abnormal structure.
In principle, these normal Tau proteins could add to the ends of existing short filaments or pile onto the sides. The fact that the fuzzy coat wraps around the rigid core indicates that normal Tau proteins more likely add onto the ends of the filaments to generate longer fibrils.
The researchers now plan to explore whether they can stimulate normal Tau proteins to assemble into the type of fibrils seen in Alzheimer’s disease, using misfolded Tau proteins from Alzheimer’s patients as a template.
The research was funded by the National Institutes of Health.
A protein found in the GI tract can neutralize many bacteria
The protein, known as intelectin-2, also helps to strengthen the mucus barrier lining the digestive tract.
The mucosal surfaces that line the body are embedded with defensive molecules that help keep microbes from causing inflammation and infections. Among these molecules are lectins — proteins that recognize microbes and other cells by binding to sugars found on cell surfaces.
One of these lectins, MIT researchers have found, has broad-spectrum antimicrobial activity against bacteria found in the GI tract. This lectin, known as intelectin-2, binds to sugar molecules found on bacterial membranes, trapping the bacteria and hindering their growth. Additionally, it can crosslink molecules that make up mucus, helping to strengthen the mucus barrier.
“What’s remarkable is that intelectin-2 operates in two complementary ways. It helps stabilize the mucus layer, and if that barrier is compromised, it can directly neutralize or restrain bacteria that begin to escape,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and the senior author of the study.
This kind of broad-spectrum antimicrobial activity could make intelectin-2 useful as a potential therapeutic, the researchers say. It could also be harnessed to help strengthen the mucus barrier in patients with disorders such as inflammatory bowel disease.
Amanda Dugan, a former MIT research scientist, and Deepsing Syangtan PhD ’24 are the lead authors of the paper, which appears today in Nature Communications.
A multifunctional protein
Current evidence suggests that the human genome encodes more than 200 lectins — carbohydrate-binding proteins that play a variety of roles in the immune system and in communication between cells. Kiessling’s lab, which has been exploring lectin-carbohydrate interactions, recently became interested in a family of lectins called intelectins. In humans, this family includes two lectins, intelectin-1 and intelectin-2.
Those two proteins have very similar structures, but intelectin-1 is distinctive in that it binds only to carbohydrates found in bacteria and other microbes. About 10 years ago, Kiessling and her colleagues determined intelectin-1’s structure, but its functions are still not fully understood.
At that time, scientists hypothesized that intelectin-2 might play a role in immune defense, but there hadn’t been many studies to support that idea. Dugan, then a postdoc in Kiessling’s lab, set out to learn more about intelectin-2.
In humans, intelectin-2 is produced at steady levels by Paneth cells in the small intestine, but in mice, its expression from mucus-producing goblet cells appears to be triggered by inflammation and certain types of parasitic infection.
In the new study, the researchers found that both human and mouse intelectin-2 bind to a sugar molecule called galactose. This sugar is commonly found in molecules called mucins that make up mucus. When intelectin-2 binds to these mucins, it helps to strengthen the mucus barrier, the researchers found.
Galactose is also found in carbohydrates displayed on the surfaces of some bacterial cells. The researchers showed that intelectin-2 can bind to microbes that display these sugars, including many pathogens that cause GI infections.
The researchers also found that over time, these trapped microbes eventually disintegrate, suggesting that the protein is able to kill them by disrupting their cell membranes. This antimicrobial activity appears to affect a wide range of bacteria, including some that are resistant to traditional antibiotics.
These dual functions help to protect the lining of the GI tract from infection, the researchers believe.
“Intelectin-2 first reinforces the mucus barrier itself, and then if that barrier is breached, it can control the bacteria and restrict their growth,” Kiessling says.
Fighting off infection
In patients with inflammatory bowel disease, intelectin-2 levels can become abnormally high or low. Low levels could contribute to degradation of the mucus barrier, while high levels could kill off too many beneficial bacteria that normally live in the gut. Finding ways to restore the correct levels of intelectin-2 could be beneficial for those patients, the researchers say.
“Our findings show just how critical it is to stabilize the mucus barrier. Looking ahead, we can imagine exploiting lectin properties to design proteins that actively reinforce that protective layer,” Kiessling says.
Because intelectin-2 can neutralize or eliminate pathogens such as Staphylococcus aureus and Klebsiella pneumoniae, which are often difficult to treat with antibiotics, it could potentially be adapted as an antimicrobial agent.
“Harnessing human lectins as tools to combat antimicrobial resistance opens up a fundamentally new strategy that draws on our own innate immune defenses,” Kiessling says. “Taking advantage of proteins that the body already uses to protect itself against pathogens is compelling and a direction that we are pursuing.”
The research was funded by the National Institutes of Health Glycoscience Common Fund, the National Institute of Allergy and Infectious Disease, the National Institute of General Medical Sciences, and the National Science Foundation.
Other authors who contributed to the study include Charles Bevins, a professor of medical microbiology and immunology at the University of California at Davis School of Medicine; Ramnik Xavier, a professor of medicine at Harvard Medical School and the Broad Institute of MIT and Harvard; and Katharina Ribbeck, the Andrew and Erna Viterbi Professor of Biological Engineering at MIT.
Eighteen MIT faculty honored as “Committed to Caring” for 2025-27
The program recognizes outstanding mentorship of graduate students.
At MIT, a strong spirit of mentorship shapes how students learn, collaborate, and imagine the future. In a time of accelerating change — from breakthroughs in artificial intelligence to the evolving realities of global research and work — guidance for technical challenges and personal growth is more important than ever.
The Committed to Caring (C2C) program recognizes the outstanding professors who extend this dedication beyond the classroom, nurturing resilience, curiosity, and compassion in a new generation of innovators. The latest cohort of C2C honorees exemplify these values, demonstrating the lasting impact that faculty can have on students’ academic and personal journeys.
The Committed to Caring program is a student-driven initiative that has celebrated exceptional mentorship since 2014. In this cycle, 18 MIT professors have been selected as recipients of the C2C award for 2025-27, joining the ranks of nearly 100 previous honorees.
The following faculty members comprise the 2025-27 Committed to Caring cohort:
Since its launch, the C2C program has placed students at the heart of its nomination process. Graduate students across all departments are invited to share letters recognizing faculty whose mentorship has made a lasting impact on their academic and personal journeys. A selection committee, consisting of both graduate students and staff, reviews nominations to identify those who have meaningfully strengthened the graduate community at MIT.
The selection committee this year included: Zoë Wright (Office of Graduate Education, or OGE), Ryan Rideau, Elizabeth Guttenberg (OGE), Beth Marois (OGE), Sharikka Finley-Moise (OGE), Indrani Saha (History, Theory, and Criticism of Art and Architecture, OGE), Chen Liang (graduate student, MIT Sloan School of Management), Jasmine Aloor (graduate student, Department of Aeronautics and Astronautics), Leila Hudson (graduate student, Department of Electrical Engineering and Computer Science), and chair Suraiya Baluch (OGE).
“I wanted to be part of this committee after nominating my own professor in the last cycle, and the experience has been incredibly meaningful,” says Aloor. “I was continually amazed by the ways that so many professors show deep care for their students behind the scenes … What stood out to me most was the breadth of ways these faculty members support their students, check in on them, provide mentorship, and cultivate lifelong bonds, despite being successful and pressed for time as leaders at the top Institute in the world.”
Guttenberg agrees, saying, “Even when these gestures appear simple, they leave a profound and lasting impact on students’ lives and help cultivate the thriving academic community we value.”
Nomination letters illustrate how the efforts of these MIT faculty reflect a deep and enduring commitment to their students’ growth, well-being, and sense of purpose. Their advisees praise these educators for their consistent impact beyond lectures and labs, and for fostering inclusion, support, and genuine connection. Their care and guidance cultivate spaces where students are encouraged not only to excel academically, but also to develop confidence, balance, and a clearer vision of their goals.
Liang underlined that the selection experience “has shown me how many faculty at MIT … help students grow into thoughtful, independent researchers and, just as importantly, into fuller versions of themselves in the world.”
In the months ahead, a series of articles will showcase the honorees in pairs, with a reception this April to recognize their lasting impact. By highlighting these faculty, the Committed to Caring program continues to celebrate and strengthen MIT’s culture of mentorship, respect, and collaboration.
Celebrating worm science
Time and again, an unassuming roundworm has illuminated aspects of biology with major consequences for human health.
For decades, scientists with big questions about biology have found answers in a tiny worm. That worm — a millimeter-long creature called Caenorhabditis elegans — has helped researchers uncover fundamental features of how cells and organisms work. The impact of that work is enormous: Discoveries made using C. elegans have been recognized with four Nobel Prizes and have led to the development of new treatments for human disease.
In a perspective piece published in the November 2025 issue of the journal PNAS, 11 biologists including Robert Horvitz, the David H. Koch (1962) Professor of Biology at MIT, celebrate Nobel Prize-winning advances made through research in C. elegans. The authors discuss how that work has led to advances for human health, and highlight how a uniquely collaborative community among worm researchers has fueled the field.
MIT scientists are well represented in that community: The prominent worm biologists who coauthored the PNAS paper include former MIT graduate students Andrew Fire PhD ’83 and Paul Sternberg PhD ’84, now at Stanford University and Caltech, respectively; and two past members of Horvitz’s lab, Victor Ambros ’75, PhD ’79, who is now at the University of Massachusetts Medical School, and former postdoc Gary Ruvkun of Massachusetts General Hospital. Ann Rougvie at the University of Minnesota is the paper’s corresponding author.
“This tiny worm is beautiful — elegant both in its appearance and in its many contributions to our understanding of the biological universe in which we live,” says Horvitz, who in 2002 was awarded the Nobel Prize in Physiology or Medicine, along with colleagues Sydney Brenner and John Sulston, for discoveries that helped explain how genes regulate programmed cell death and organ development.
Early worm discoveries
Those discoveries were among the early successes in C. elegans research, made by pioneering scientists who recognized the power of the microscopic roundworm. C. elegans offers many advantages for researchers: The worms are easy to grow and maintain in labs; their transparent bodies make cells and internal processes readily visible under a microscope; they are cellularly very simple (e.g., they have only 302 nerve cells, compared with about 100 billion in a human); and their genomes can be readily manipulated to study gene function.
Most importantly, many of the molecules and processes that operate in C. elegans have been retained throughout evolution, meaning discoveries made using the worm can have direct relevance to other organisms, including humans.
“Many aspects of biology are ancient and evolutionarily conserved,” says Horvitz, who is also a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research, as well as an investigator at the Howard Hughes Medical Institute. “Such shared mechanisms can be most readily revealed by analyzing organisms that are highly tractable in the laboratory.”
In the 1960s, Brenner, a molecular biologist who was curious about how animals’ nervous systems develop and function, recognized that C. elegans offered unique opportunities to study these processes. Once he began developing the worm into a model for laboratory studies, it did not take long for other biologists to join him to take advantage of the new system.
In the 1970s, the unique features of the worm allowed Sulston to track the transformation of a fertilized egg into an adult animal, tracing the origins of each of the adult worm’s 959 cells. His studies revealed that in every developing worm, cells divide and mature in predictable ways. He also learned that some of the cells created during development do not survive into adulthood, and are instead eliminated by a process termed programmed cell death.
By seeking mutations that perturbed the process of programmed cell death, Horvitz and his colleagues identified key regulators of that process, which is sometimes referred to as apoptosis. These regulators, which both promote and oppose apoptosis, turned out to be vital for programmed cell death across the animal kingdom.
In humans, apoptosis shapes developing organs, refines brain circuits, and optimizes other tissue structures. It also modulates our immune systems and eliminates cells that are in danger of becoming cancerous. The human version of CED-9, the anti-apoptotic regulator that Horvitz’s team discovered in worms, is BCL-2. Researchers have shown that activating apoptotic cell death by blocking BCL-2 is an effective treatment for certain blood cancers. Today, researchers are also exploring new ways of treating immune disorders and neurodegenerative disease by manipulating apoptosis pathways.
Collaborative worm community
Horvitz and his colleagues’ discoveries about apoptosis helped demonstrate that understanding C. elegans biology has direct relevance to human biology and disease. Since then, a vibrant and closely connected community of worm biologists — including many who trained in Horvitz’s lab — has continued to carry out impactful work. In their PNAS article, Horvitz and his coauthors highlight that early work, as well as the Nobel Prize-winning work of:
Horvitz and his coauthors stress that while the worm itself made these discoveries possible, so too did a host of resources that facilitate collaboration within the worm community and enable its scientists to build upon the work of others. Scientists who study C. elegans have embraced this open, collaborative spirit since the field’s earliest days, Horvitz says, citing the Worm Breeder’s Gazette, an early newsletter where scientists shared their observations, methods, and ideas.
Today, scientists who study C. elegans — whether the organism is the centerpiece of their lab or they are looking to supplement studies of other systems — contribute to and rely on online resources like WormAtlas and WormBase, as well as the Caenorhabditis Genetics Center, to share data and genetic tools. Horvitz says these resources have been crucial to his own lab’s work; his team uses them every day.
Just as molecules and processes discovered in C. elegans have pointed researchers toward important pathways in human cells, the worm has also been a vital proving ground for developing methods and approaches later deployed to study more complex organisms. For example, C. elegans, with its 302 neurons, was the first animal for which neuroscientists successfully mapped all of the connections of the nervous system. The resulting wiring diagram, or connectome, has guided countless experiments exploring how neurons work together to process information and control behavior. Informed by both the power and limitations of the C. elegans connectome, scientists are now mapping more complex circuitry, such as the 139,000-neuron brain of the fruit fly, whose connectome was completed in 2024.
C. elegans remains a mainstay of biological research, including in neuroscience. Scientists worldwide are using the worm to explore new questions about neural circuits, neurodegeneration, development, and disease. Horvitz’s lab continues to turn to C. elegans to investigate the genes that control animal development and behavior. His team is now using the worm to explore how animals develop a sense of time and transmit that information to their offspring.
Also at MIT, Steven Flavell’s team in the Department of Brain and Cognitive Sciences and The Picower Institute for Learning and Memory is using the worm to investigate how neural connectivity, activity, and modulation integrate internal states, such as hunger, with sensory information, such as the smell of food, to produce sometimes long-lasting behaviors. (Flavell is Horvitz’s academic grandson, having trained with one of Horvitz’s postdoctoral trainees.)
As new technologies accelerate the pace of scientific discovery, Horvitz and his colleagues are confident that the humble worm will bring more unexpected insights.
New research may help scientists predict when a humid heat wave will break
As these events become more common at midlatitudes, a phenomenon called an atmospheric inversion will determine how long they last.
A long stretch of humid heat followed by intense thunderstorms is a weather pattern historically seen mostly in and around the tropics. But climate change is making humid heat waves and extreme storms more common in traditionally temperate midlatitude regions such as the midwestern U.S., which has seen episodes of unusually high heat and humidity in recent summers.
Now, MIT scientists have identified a key condition in the atmosphere that determines how hot and humid a midlatitude region can get, and how intense related storms can become. The results may help climate scientists gauge a region’s risk for humid heat waves and extreme storms as the world continues to warm.
In a study appearing this week in the journal Science Advances, the MIT team reports that a region’s maximum humid heat and storm intensity are limited by the strength of an “atmospheric inversion” — a weather condition in which a layer of warm air settles over cooler air.
Inversions are known to act as an atmospheric blanket that traps pollutants at ground level. Now, the MIT researchers have found that atmospheric inversions also trap and build up heat and moisture at the surface, particularly in midlatitude regions. The more persistent an inversion, the more heat and humidity a region can accumulate at the surface, which can lead to more oppressive, longer-lasting humid heat waves.
And, when an inversion eventually weakens, the accumulated heat energy is released as convection, which can whip up the hot and humid air into intense thunderstorms and heavy rainfall.
The team says this effect is especially relevant for midlatitude regions, where atmospheric inversions are common. In the U.S., regions to the east of the Rocky Mountains often experience inversions of this kind, with relatively warm air aloft sitting over cooler air near the surface.
As climate change further warms the atmosphere in general, the team suspects that inversions may become more persistent and harder to break. This could mean more frequent humid heat waves and more intense storms for places that are not accustomed to such extreme weather.
“Our analysis shows that the eastern and midwestern regions of the U.S. and the eastern Asian regions may be new hotspots for humid heat in the future climate,” says study author Funing Li, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).
“As the climate warms, theoretically the atmosphere will be able to hold more moisture,” adds co-author and EAPS Assistant Professor Talia Tamarin-Brodsky. “Which is why new regions in the midlatitudes could experience moist heat waves that will cause stress that they weren’t used to before.”
Air energetics
The atmosphere’s layers generally get colder with altitude. In these typical conditions, when a heat wave comes through a region, it warms the air at ground level. Since warm air is lighter than cold air, it will eventually rise, like a hot air balloon, prompting colder air to sink. This rise and fall of air sets off convection, like bubbles in boiling water. When warm air reaches colder altitudes, its moisture condenses into droplets that rain out, typically as a thunderstorm, which can often relieve a heat wave.
For their new study, Li and Tamarin-Brodsky wondered: What would it take to get air at the surface to convect and ultimately end a heat wave? Put another way: What sets the limit to how hot a region can get before air begins to convect to eventually rain?
The team treated the question as a problem of energy. Heat is energy that can be thought of in two forms: the energy that comes from dry heat (i.e., temperature), and the energy that comes from latent, or moist, heat. The scientists reasoned that, for a given portion or “parcel” of air, there is some amount of moisture that, when condensed, contributes to that air parcel’s total energy. Depending on how much energy an air parcel has, it could start to convect, rise up, and eventually rain out.
“Imagine putting a balloon around a parcel of air and asking, will it stay in the same place, will it go up, or will it sink?” Tamarin-Brodsky says. “It’s not just about warm air that’s lifting. You also have to think about the moisture that’s there. So we consider the energetics of an air parcel while taking into account the moisture in that air. Then we can find the maximum ‘moist energy’ that can accumulate near the surface before the air becomes unstable and convects.”
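The “moist energy” of an air parcel described above is commonly formalized in atmospheric science as the parcel’s moist static energy; a standard textbook form (an illustrative assumption here, not a quantity quoted from the paper) is:

```latex
% Moist static energy (MSE) of an air parcel -- a standard quantity;
% the study may use a related measure of "moist energy."
h = c_p T + g z + L_v q
% c_p : specific heat of dry air at constant pressure
% T   : parcel temperature
% g z : gravitational potential energy per unit mass at height z
% L_v : latent heat of vaporization of water
% q   : specific humidity (mass of water vapor per mass of air)
```

Roughly speaking, a lifted surface parcel becomes buoyant when its moist static energy exceeds that of the air above it; an inversion raises that threshold, so the surface must accumulate more heat and moisture before convection can begin.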
Heat barrier
As they worked through their analysis, the researchers found that the maximum amount of moist energy, or the highest level of heat and humidity that the air can hold, is set by the presence and strength of an atmospheric inversion. In cases where atmospheric layers are inverted (when a layer of warm or light air settles over colder or heavier, ground-level air), the air has to accumulate more heat and moisture in order for an air parcel to build up enough energy to lift up and break through the inversion layer. The more persistent the inversion is, the hotter and more humid air must get before it can rise up and convect.
Their analysis suggests that an atmospheric inversion can increase a region’s capacity to hold heat and humidity. How high this heat and humidity can get depends on how stable the inversion is. If a blanket of warm air parks over a region without moving, it allows more humid heat to build up, versus if the blanket is quickly removed. When the air eventually convects, the accumulated heat and moisture will generate stronger, more intense storms.
“This increasing inversion has two effects: more severe humid heat waves, and less frequent but more extreme convective storms,” Tamarin-Brodsky says.
Inversions in the atmosphere form in various ways. At night, the surface that warmed during the day cools by radiating heat to space, making the air in contact with it cooler and denser than the air above. This creates a shallow layer in which temperature increases with height, called a nocturnal inversion. Inversions can also form when a shallow layer of cool marine air moves inland from the ocean and slides beneath warmer air over the land, leaving cool air near the surface and warmer air above. In some cases, persistent inversions can form when air heated over sun-warmed mountains is carried over colder low-lying regions, so that a warm layer aloft caps cooler air near the ground.
“The Great Plains and the Midwest have had many inversions historically due to the Rocky Mountains,” Li says. “The mountains act as an efficient elevated heat source, and westerly winds carry this relatively warm air downstream into the central and midwestern U.S., where it can help create a persistent temperature inversion that caps colder air near the surface.”
“In a future climate for the Midwest, they may experience both more severe thunderstorms and more extreme humid heat waves,” Tamarin-Brodsky says. “Our theory gives an understanding of the limit for humid heat and severe convection for these communities that will be future heat wave and thunderstorm hotspots.”
This research is part of the MIT Climate Grand Challenge on Weather and Climate Extremes. Support was provided by Schmidt Sciences.
MIT in the media: 2025 in review

MIT community members made headlines with key research advances and their efforts to tackle pressing challenges.

“At MIT, innovation ranges from awe-inspiring technology to down-to-Earth creativity,” noted Chronicle during a campus visit this year for an episode of the program. In 2025, MIT researchers made headlines across print publications, podcasts, and video platforms for key scientific advances, from breakthroughs in quantum and artificial intelligence to new efforts aimed at improving pediatric health care and cancer diagnosis.
MIT faculty, researchers, students, alumni and staff helped demystify new technologies, highlighted the practical hands-on learning the Institute is known for, and shared what inspires their research with viewers, readers and listeners around the world. Below is a sampling of news moments to revisit.
Let’s take a closer look at MIT: It’s alarming to see such a complex, important institution subject to the whims of today’s politics
Washington Post columnist George F. Will reflects on MIT and his view of “the damage that can be done to America’s meritocracy by policies motivated by hostility toward institutions vital to it.” Will notes that MIT has an “astonishing economic multiplier effect: MIT graduates have founded companies that have generated almost $1.9 trillion in annual revenue (a sum almost equal to Russia’s GDP) and 4.6 million jobs.”
Full story via The Washington Post
At MIT, groundbreaking ideas blend science and breast cancer detection innovation
Chronicle visited MIT this spring to learn more about how the Institute “nurtures groundbreaking efforts, reminding us that creativity and science thrive together, inspiring future advancements in engineering, medicine, and beyond.”
Full story via Chronicle
New MIT provost looks to build more bridges with CEOs
Provost Anantha Chandrakasan shares his energy and enthusiasm for MIT, and his goals for the Institute.
Full story via The Boston Globe
Five things New England researchers helped develop with federal funding
Professors John Guttag and David Mindell discuss MIT’s long history of developing foundational technologies — including the internet and the first widely used electronic navigation system — with the support of federal funding.
Full story via The Boston Globe
Bostonians of the Year 2025: First responders, university presidents, and others who exemplified courage
President Sally Kornbluth is honored by The Boston Globe as one of the Bostonians of the Year, a list that spotlights individuals across the region who, in choosing the difficult path, “showed us what strength looks like.” Kornbluth was recognized for being one of the “most prominent voices rallying to protect academic freedom.”
Full story via The Boston Globe
Practical education and workforce preparation
College students flock to a new major: AI
MIT’s new Artificial Intelligence and Decision Making major is aimed at teaching students to “develop AI systems and study how technologies like robots interact with humans and the environment.”
Full story via The New York Times
50 colleges with the best ROI
MIT has been named among the top colleges in the country for return on investment. MIT “is need-blind and full-need for undergraduate students. Six out of 10 students receive financial aid, and almost 88% of the Class of 2025 graduated debt-free.”
Full story via Boston 25
Desirée Plata: Chemist, oceanographer, engineer, entrepreneur
Professor Desirée Plata explains that she is most proud of her work as an educator. “The faculty of the world are training the next generation of researchers,” says Plata. “We need a trained workforce. We need patient chemists who want to solve important problems.”
Full story via Chemical & Engineering News
Taking a quantum leap
MIT launches quantum initiative to tackle challenges in science, health care, national security
MIT is “taking a quantum leap” with the launch of the new MIT Quantum Initiative (QMIT). “There isn’t a more important technological field right now than quantum with its enormous potential for impact on both fundamental research and practical problems,” said President Sally Kornbluth.
Full story via State House News Service
Peter Shor on how quantum tech can help climate
Professor Peter Shor helps disentangle quantum technologies.
Full story via The Quantum Kid
MIT researchers develop device to enable direct communication between multiple quantum processors
MIT researchers made a key advance in the creation of a practical quantum computer.
Full story via Military & Aerospace Electronics
Fortifying national security and aiding disaster response
Nano-material breakthrough could revolutionize night vision
MIT researchers developed “a new way to make large ultrathin infrared sensors that don’t need cryogenic cooling and could radically change night vision for the military.”
Full story via Defense One
MIT researchers develop robot designed to help first-responders in disaster situations
Researchers at MIT engineered SPROUT (Soft Pathfinding Robotic Observation Unit), a robot aimed at assisting first-responders.
Full story via WHDH
MIT scientists make “smart” clothes that warn you when you’re sick
As part of an effort to help keep service members safe, MIT scientists created a programmable fiber that can be stitched into clothing to help monitor the wearer’s health.
Full story via FOX 28
MIT Lincoln Lab develops ocean-mapping technology
MIT Lincoln Laboratory researchers are developing “automated electric vessels to map the ocean floor and improve search and rescue missions.”
Full story via Chronicle
Transformative tech
This MIT scientist is rewiring robots to keep the humanity in tech
Professor Daniela Rus, director of the Computer Science and Artificial Intelligence Lab, discusses her work revolutionizing the field of robotics by bringing “empathy into engineering and proving that responsibility is as radical and as commercially attractive as unguarded innovation.”
Full story via Forbes
Watch this tiny robot somersault through the air like an insect
Professor Kevin Chen designed a tiny, insect-sized aerial microrobot.
Full story via Science
It's actually really hard to make a robot, guys
Professor Pulkit Agrawal delves into his work engineering a simulator that can be used to train robots.
Full story via NPR
Shape-shifting fabrics and programmable materials redefine design at MIT
Associate Professor Skylar Tibbits is embedding intelligence into the materials around us, while Professor Caitlin Mueller and Sandy Curth PhD ’25 are digging into eco-friendly construction.
Full story via Chronicle
Building a healthier future
MIT launches pediatric research hub to address access gaps
The Hood Pediatric Innovation Hub is addressing “underinvestment in pediatric healthcare innovations.”
Full story via Boston Business Journal
Bionic knee helps amputees walk naturally again
Professor Hugh Herr developed a prosthetic that could increase mobility for above-the-knee amputees. “The bionic knee developed by MIT doesn’t just restore function, it redefines it.”
Full story via Fox News
MIT drug hunters are using AI to design completely new antibiotics
Professor James Collins is using AI to develop new compounds to combat antibiotic resistance.
Full story via Fast Company
Innovative once-weekly capsule helps quell schizophrenia symptoms
A new pill from the lab of Associate Professor Giovanni Traverso “can greatly simplify the drug schedule faced by schizophrenia patients.”
Full story via Newsmax
Renewing American manufacturing
US manufacturing is in “pretty bad shape.” MIT hopes to change that.
MIT launched the Initiative for New Manufacturing to help “build the tools and talent to shape a more productive and sustainable future for manufacturing.”
Full story via Manufacturing Dive
Giving US manufacturing a boost
Ben Armstrong of the MIT Industrial Performance Center discusses how to reinvigorate manufacturing in America.
Full story via Marketplace
New England companies are sparking an industrial revolution. Here’s how to harness it.
Professor David Mindell spotlights how “a new wave of industrial companies, many in New England, are leveraging new technologies to create jobs and empower workers.”
Full story via The Boston Globe
Improving aging
My day as an 80-year-old. What an age-simulation suit taught me.
To get a better sense of the experience of aging, Wall Street Journal reporter Amy Dockser Marcus donned the MIT AgeLab’s age-simulation suit and embarked on multiple activities.
Full story via The Wall Street Journal
New mobile robot helps seniors walk safely and prevent falls
A mobile robot created by MIT engineers is designed to help prevent falls. “It's easy to see how something like this could make a big difference for seniors wanting to stay independent.”
Full story via Fox News
The senior population is booming. Caregiving is struggling to keep up
Professor Jonathan Gruber discusses the labor shortages impacting senior care.
Full story via CNBC
Upping our energy resilience
New MIT collaboration with GE Vernova aims to accelerate energy transition
“A great amount of innovation happens in academia. We have a longer view into the future,” says Provost Anantha Chandrakasan of the MIT-GE Vernova Energy and Climate Alliance.
Full story via The Boston Globe
The environmental impacts of generative AI
Noman Bashir, a fellow with MIT’s Climate and Sustainability Consortium, explores the environmental impacts of generative AI.
Full story via Fox 13
Is the clean energy economy doomed?
Professor Christopher Knittel discusses how the U.S. can be in the best position for global energy dominance.
Full story via Marketplace
Advancing American workers
WTH can we do to prevent a second China shock? Professor David Autor explains
Professor David Autor shares his research examining the long-term impact of China entering the World Trade Organization, how the U.S. can protect vital industries from unfair trade practices, and the potential impacts of AI on workers.
Full story via American Enterprise Institute
The fight over robots threatening American jobs
Professor Daron Acemoglu highlights the economic and societal implications of integrating automation in the workforce, advocating for policies aimed at assisting workers.
Full story via Financial Times
Moving toward automation
Research Scientist Eva Ponce of the MIT Center for Transportation and Logistics notes that robotics and AI technologies are “replacing some jobs — particularly more manual tasks including heavy lifting — but have also offered new opportunities within warehouse operations.”
Full story via Financial Times
Planetary defense and out-of-this world exploration
MIT researchers create new asteroid detection methods to help protect Earth
Associate Professor Julien de Wit and Research Scientist Artem Burdanov discuss their work developing a new method to track asteroids that could impact Earth.
Full story via WBZ Radio
What happens to the bodies of NASA astronauts returning to Earth?
Professor Dava Newman speaks about how long-duration stays in space can affect the human body.
Full story via News Nation
Lunar lander Athena is packed and ready to explore the moon. Here’s what’s on board
MIT engineers sent three payloads into space on a course set for the moon’s south polar region.
Full story via USA Today
Scanning the heavens at the Vatican Observatory
Br. Guy Consolmagno ’74, SM ’75, director of the Vatican Observatory, and graduate student Isabella Macias share their experiences studying astronomy and planetary formation at the Vatican Observatory. “The Vatican has such a deep, rich history of working with astronomers,” says Macias. “It shows that science is not only for global superpowers around the world, but it’s for students, it’s for humanity.”
Full story via CBS News Sunday Morning
The story of real-life rocket scientists
Professor Kerri Cahoy takes viewers on an out-of-this-world journey into how a college internship inspired her research on space and satellites.
Full story via Bloomberg Television
On the air
While digital currency initiatives expand, we ask: What’s the future of cash?
Neha Narula, director of the MIT Digital Currency Initiative, examines the future of cash as the use of digital currencies expands.
Full story via USA Today
The high stakes of the AI economy
Professor Asu Ozdaglar, head of the Department of Electrical Engineering and Computer Science and deputy dean of the MIT Schwarzman College of Computing, explores AI’s opportunities and risks — and whether it can be regulated without stifling progress.
Full story via Is Business Broken?
The LIGO Lab is pushing the boundaries of gravitational-wave research
Associate Professor Matt Evans explores the future of gravitational wave research and how Cosmic Explorer, the next-generation gravitational wave observatory, will help unearth secrets of the early universe.
Full story via Scientific American
Space junk: The impact of global warming on satellites
Graduate student Will Parker discusses his research examining the impact of climate change on satellites.
Full story via USA Today
Endometriosis is common. Why is getting diagnosed so hard?
Professor Linda Griffith shares her work studying endometriosis and her efforts to improve healthcare for women.
Full story via Science Friday
There’s nothing small about this nanoscale research
Professor Vladimir Bulović takes listeners on a tour of MIT.nano, MIT’s “clean laboratory facility that is critical to nanoscale research, from microelectronics to medical nanotechnology.”
Full story via Scientific American
Marrying science and athletics
The MIT scientist behind the “torpedo bats” that are blowing up baseball
Aaron Leanhardt PhD ’03 went from an MIT graduate student who was part of a research team that “cooled sodium gas to the lowest temperature ever recorded in human history” to inventor of the torpedo baseball bat, “perhaps the most significant development in bat technology in decades.”
Full story via The Wall Street Journal
Engineering athletes redefine routine
After suffering a concussion during her sophomore year, Emiko Pope ’25 was inspired to explore the effectiveness of concussion headbands.
Full story via American Society of Mechanical Engineers
“I missed talking math with people”: why John Urschel left the NFL for MIT
Assistant Professor John Urschel shares his decision to call an audible and leave his NFL career to focus on his love for math at MIT.
Full story via The Guardian
Making a statement, MIT’s football team dons extra head padding for safety
It’s a piece of equipment that may become more widely used as research into its effectiveness continues, including research from at least one of the players on the current team.
Full story via GBH Morning Edition
Agricultural efficiency
New MIT breakthrough could save farmers billions on pesticides
MIT engineers developed a system that helps pesticides adhere more effectively to plant leaves, allowing farmers to use fewer chemicals.
Full story via Michigan Farm News
Bug-sized robots could help pollination on future farms
Insect-sized robots crafted by MIT researchers could one day be used to help with farming practices like artificial pollination.
Full story via Reuters
See how MIT researchers harvest water from the air
An ultrasonic device created by MIT engineers can extract clean drinking water from atmospheric moisture.
Full story via CNN
Appreciating art
Meet the engineer using deep learning to restore Renaissance art
Graduate student Alex Kachkine talks about his work applying AI to develop a restoration method for damaged artwork.
Full story via Nature
MIT’s Linde Music Building opens with a free festival
“The extent of art-making on the MIT campus is equal to that of a major city,” says Institute Professor Marcus Thompson. “It’s a miracle that it’s all right here, by people in science and technology who are absorbed in creating a new world and who also value the past, present and future of music and the arts.”
Full story via Cambridge Day
“Remembering the Future” on display at the MIT Museum
The “Remembering the Future” exhibit at the MIT Museum features a sculptural installation that uses “climate data from the last ice age to the present, as well as projected future environments, to create a geometric design.”
Full story via The New York Times
In 2025, MIT maintained its standard of community and research excellence amidst a shift in national priorities regarding the federal funding of higher education. Notably, QS ranked MIT No. 1 in the world for the 14th straight year, while U.S. News ranked MIT No. 2 in the nation for the 5th straight year.
This year, President Sally Kornbluth also added to the Institute’s slate of community-wide strategic initiatives, with new collaborative efforts focused on manufacturing, generative artificial intelligence, and quantum science and engineering. In addition, MIT opened several new buildings and spaces, hosted a campuswide art festival, and continued its tradition of bringing the latest in science and technology to the local community and to the world. Here are some of the top stories from around MIT over the past 12 months.
MIT collaboratives
President Kornbluth announced three new Institute-wide collaborative efforts designed to foster and support alliances that will take on global problems. The Initiative for New Manufacturing (INM) will work toward bolstering industry and creating jobs by driving innovation across vital manufacturing sectors. The MIT Generative AI Impact Consortium (MGAIC), a group of industry leaders and MIT researchers, aims to harness the power of generative artificial intelligence for the good of society. And the MIT Quantum Initiative (QMIT) will leverage quantum breakthroughs to drive the future of scientific and technological progress.
These missions join three announced last year — the Climate Project at MIT, the MIT Human Insight Collaborative (MITHIC), and the MIT Health and Life Sciences Collaborative (MIT HEALS).
Sharing the wonders of science and technology
This year saw the launch of MIT Learn, a dynamic AI-enabled website that hosts nearly 13,000 non-degree learning opportunities, making it easier for learners around the world to discover the courses and resources available on MIT’s various learning platforms.
The Institute also hosted the Cambridge Science Carnival, a hands-on event managed by the MIT Museum that drew approximately 20,000 attendees and featured more than 140 activities, demonstrations, and installations tied to the topics of science, technology, engineering, arts, and mathematics (STEAM).
Commencement
At Commencement, Hank Green urged MIT’s newest graduates to focus their work on the “everyday solvable problems of normal people,” even if it is not always the easiest or most obvious course of action. Green is a popular content creator and YouTuber whose work often focuses on science and STEAM issues, and who co-created the educational media company Complexly.
President Kornbluth challenged graduates to be “ambassadors” for the open-minded inquiry and collaborative work that marks everyday life at MIT.
Top accolades
In January, the White House bestowed national medals of science and technology — the country’s highest awards for scientists and engineers — on four MIT professors and an additional alumnus. Moderna, with deep MIT roots, was also recognized.
As in past years, MIT faculty, staff, and alumni were honored with election to the various national academies: the National Academy of Sciences, the National Academy of Engineering, the National Academy of Medicine, and the National Academy of Inventors.
Faculty member Carlo Ratti served as curator of the Venice Biennale’s 19th International Architecture Exhibition.
Members of MIT Video Productions won a New England Emmy Award for their short film on the art and science of hand-forged knives with master bladesmith Bob Kramer.
And at MIT, Dimitris Bertsimas, vice provost for open learning and a professor of operations research, won this year’s Killian Award, the Institute’s highest faculty honor.
New and refreshed spaces
In the heart of campus, the Edward and Joyce Linde Music Building became fully operational to start off the year. In celebration, the Institute hosted Artfinity, a vibrant multiweek exploration of art and ideas, with more than 80 free performing and visual arts events including a film festival, interactive augmented-reality art installations, a simulated lunar landing, and concerts by both student groups and internationally renowned musicians.
Over the summer, the “Outfinite” — the open space connecting Hockfield Court with Massachusetts Avenue — was officially named the L. Rafael Reif Innovation Corridor in honor of President Emeritus L. Rafael Reif, MIT’s 17th president.
And in October, the Undergraduate Advising Center’s bright new home opened in Building 11 along the Infinite Corridor, bringing a welcoming and functional destination for MIT undergraduate students within the Institute’s Main Group.
Student honors and awards
MIT undergraduates earned an impressive number of prestigious awards in 2025. Exceptional students were honored with Rhodes, Gates Cambridge, and Schwarzman scholarships, among others.
A number of MIT student-athletes also helped to secure the first NCAA national team championships in Institute history: Women’s track and field won both the indoor and outdoor national championships, while women’s swimming and diving won a national title as well.
Also, for the fifth year in a row, MIT students earned all five top spots at the Putnam Mathematical Competition.
Leadership transitions
Several senior administrative leaders took on new roles in 2025. Anantha Chandrakasan was named provost; Paula Hammond was named dean of the School of Engineering; Richard Locke was named dean of the MIT Sloan School of Management; Gaspare LoDuca was named vice president for information systems and technology and CIO; Evelyn Wang was named vice president for energy and climate; and David Darmofal was named vice chancellor for undergraduate and graduate education.
Additional new leadership transitions include: Ana Bakshi was named executive director of the Martin Trust Center for MIT Entrepreneurship; Fikile Brushett was named director of the David H. Koch School of Chemical Engineering Practice; Laurent Demanet was named co-director of the Center for Computational Science and Engineering; Rohit Karnik was named director of the Abdul Latif Jameel Water and Food Systems Lab; Usha Lee McFarling was named director of the Knight Science Journalism Program; C. Cem Tasan was named director of the Materials Research Laboratory; and Jessika Trancik was named director of the Sociotechnical Systems Research Center.
Remembering those we lost
Among MIT community members who died this year were David Baltimore, Juanita Battle, Harvey Kent Bowen, Stanley Fischer, Frederick Greene, Lee Grodzins, John Joannopoulos, Keith Johnson, Daniel Kleppner, Earle Lomon, Nuno Loureiro, Victor K. McElheny, David Schmittlein, Anthony Sinskey, Peter Temin, Barry Vercoe, Rainer Weiss, Alan Whitney, and Ioannis Yannas.
In case you missed it…
Additional top stories from around the Institute in 2025 include a description of the environmental and sustainability implications of generative AI tech and applications; the story of how an MIT professor introduced hundreds of thousands of students to neuroscience with his classic textbook; a look at how MIT entrepreneurs are using AI; a roundup of new books by MIT faculty and staff; the selection of an MIT alumnus as a NASA astronaut candidate; the signing of an MIT student-athlete by the Los Angeles Dodgers; and behind the scenes with MIT students who cracked a longstanding egg dilemma.
MIT’s top research stories of 2025

Concrete batteries, AI-developed antibiotics, the ozone’s recovery, and a more natural bionic knee were some of the most popular topics on MIT News.

In 2025, MIT’s research community had another prolific year filled with exciting scientific and technological advances. To celebrate the achievements of the past 12 months, MIT News highlights some of our most-read stories from this year.
One of the biggest risk factors for developing liver cancer is a high-fat diet. A new study from MIT reveals how a fatty diet rewires liver cells and makes them more prone to becoming cancerous.
The researchers found that in response to a high-fat diet, mature hepatocytes in the liver revert to an immature, stem-cell-like state. This helps them to survive the stressful conditions created by the high-fat diet, but in the long term, it makes them more likely to become cancerous.
“If cells are forced to deal with a stressor, such as a high-fat diet, over and over again, they will do things that will help them survive, but at the risk of increased susceptibility to tumorigenesis,” says Alex K. Shalek, director of the Institute for Medical Engineering and Sciences (IMES), the J. W. Kieckhefer Professor in IMES and the Department of Chemistry, and a member of the Koch Institute for Integrative Cancer Research at MIT, the Ragon Institute of MGH, MIT, and Harvard, and the Broad Institute of MIT and Harvard.
The researchers also identified several transcription factors that appear to control this reversion, which they believe could make good targets for drugs to help prevent tumor development in high-risk patients.
Shalek; Ömer Yilmaz, an MIT associate professor of biology and a member of the Koch Institute; and Wolfram Goessling, co-director of the Harvard-MIT Program in Health Sciences and Technology, are the senior authors of the study, which appears today in Cell. MIT graduate student Constantine Tzouanas, former MIT postdoc Jessica Shay, and Massachusetts General Brigham postdoc Marc Sherman are the co-first authors of the paper.
Cell reversion
A high-fat diet can lead to inflammation and buildup of fat in the liver, a condition known as steatotic liver disease. This disease, which can also be caused by a wide variety of long-term metabolic stresses such as high alcohol consumption, may lead to liver cirrhosis, liver failure, and eventually cancer.
In the new study, the researchers wanted to figure out just what happens in cells of the liver when exposed to a high-fat diet — in particular, which genes get turned on or off as the liver responds to this long-term stress.
To do that, the researchers fed mice a high-fat diet and performed single-cell RNA-sequencing of their liver cells at key timepoints as liver disease progressed. This allowed them to monitor gene expression changes that occurred as the mice advanced from liver inflammation to tissue scarring and, eventually, cancer.
In the early stages of this progression, the researchers found that the high-fat diet prompted hepatocytes, the most abundant cell type in the liver, to turn on genes that help them survive the stressful environment. These include genes that make them more resistant to apoptosis and more likely to proliferate.
At the same time, those cells began to turn off some of the genes that are critical for normal hepatocyte function, including metabolic enzymes and secreted proteins.
“This really looks like a trade-off, prioritizing what’s good for the individual cell to stay alive in a stressful environment, at the expense of what the collective tissue should be doing,” Tzouanas says.
Some of these changes happened right away, while others, including a decline in metabolic enzyme production, shifted more gradually over a longer period. Nearly all of the mice on a high-fat diet ended up developing liver cancer by the end of the study.
When cells are in a more immature state, it appears that they are more likely to become cancerous if a mutation occurs later on, the researchers say.
“These cells have already turned on the same genes that they’re going to need to become cancerous. They’ve already shifted away from the mature identity that would otherwise drag down their ability to proliferate,” Tzouanas says. “Once a cell picks up the wrong mutation, then it’s really off to the races and they’ve already gotten a head start on some of those hallmarks of cancer.”
The researchers also identified several genes that appear to orchestrate the changes that revert hepatocytes to an immature state. While this study was going on, a drug targeting one of these genes (thyroid hormone receptor) was approved to treat a severe form of steatotic liver disease called MASH fibrosis. And, a drug activating an enzyme that they identified (HMGCS2) is now in clinical trials to treat steatotic liver disease.
Another possible target that the new study revealed is a transcription factor called SOX4, which is normally only active during fetal development and in a small number of adult tissues (but not the liver).
Cancer progression
After the researchers identified these changes in mice, they sought to discover if something similar might be happening in human patients with liver disease. To do that, they analyzed data from liver tissue samples removed from patients at different stages of the disease. They also looked at tissue from people who had liver disease but had not yet developed cancer.
Those studies revealed a similar pattern to what the researchers had seen in mice: The expression of genes needed for normal liver function decreased over time, while genes associated with immature states went up. Additionally, the researchers found that they could accurately predict patients’ survival outcomes based on an analysis of their gene expression patterns.
“Patients who had higher expression of these pro-cell-survival genes that are turned on with high-fat diet survived for less time after tumors developed,” Tzouanas says. “And if a patient has lower expression of genes that support the functions that the liver normally performs, they also survive for less time.”
While the mice in this study developed cancer within a year or so, the researchers estimate that in humans, the process likely extends over a longer span, possibly around 20 years. That will vary between individuals depending on their diet and other risk factors such as alcohol consumption or viral infections, which can also promote liver cells’ reversion to an immature state.
The researchers now plan to investigate whether any of the changes that occur in response to a high-fat diet can be reversed by going back to a normal diet, or by taking weight-loss drugs such as GLP-1 agonists. They also hope to study whether any of the transcription factors they identified could make good targets for drugs that could help prevent diseased liver tissue from becoming cancerous.
“We now have all these new molecular targets and a better understanding of what is underlying the biology, which could give us new angles to improve outcomes for patients,” Shalek says.
The research was funded, in part, by a Fannie and John Hertz Foundation Fellowship, a National Science Foundation Graduate Research Fellowship, the National Institutes of Health, and the MIT Stem Cell Initiative through Fondation MIT.
Anything-goes “anyons” may be at the root of surprising quantum experiments

MIT physicists say these quasiparticles may explain how superconductivity and magnetism can coexist in certain materials.

In the past year, two separate experiments in two different materials captured the same confounding scenario: the coexistence of superconductivity and magnetism. Scientists had assumed that these two quantum states are mutually exclusive; the presence of one should inherently destroy the other.
Now, theoretical physicists at MIT have an explanation for how this Jekyll-and-Hyde duality could emerge. In a paper appearing today in the Proceedings of the National Academy of Sciences, the team proposes that under certain conditions, a magnetic material’s electrons could splinter into fractions of themselves to form quasiparticles known as “anyons.” In certain fractions, the quasiparticles should flow together without friction, similar to how regular electrons can pair up to flow in conventional superconductors.
If the team’s scenario is correct, it would introduce an entirely new form of superconductivity — one that persists in the presence of magnetism and involves a supercurrent of exotic anyons rather than everyday electrons.
“Many more experiments are needed before one can declare victory,” says study lead author Senthil Todadri, the William and Emma Rogers Professor of Physics at MIT. “But this theory is very promising and shows that there can be new ways in which the phenomenon of superconductivity can arise.”
What’s more, if the idea of superconducting anyons can be confirmed and controlled in other materials, it could provide a new way to design stable qubits — atomic-scale “bits” that interact quantum mechanically to process information and carry out complex computations far more efficiently than conventional computer bits.
“These theoretical ideas, if they pan out, could make this dream one tiny step within reach,” Todadri says.
The study’s co-author is MIT physics graduate student Zhengyan Darius Shi.
“Anything goes”
Superconductivity and magnetism are macroscopic states that arise from the behavior of electrons. A material is a magnet when electrons in its atomic structure have roughly the same spin, or orbital motion, creating a collective pull in the form of a magnetic field within the material as a whole. A material is a superconductor when electrons passing through, in the form of an electric current, can couple up in “Cooper pairs.” In this teamed-up state, electrons can glide through a material without friction, rather than randomly knocking against its atomic latticework.
For decades, it was thought that superconductivity and magnetism should not co-exist; superconductivity is a delicate state, and any magnetic field can easily sever the bonds between Cooper pairs. But earlier this year, two separate experiments proved otherwise. In the first experiment, MIT’s Long Ju and his colleagues discovered superconductivity and magnetism in rhombohedral graphene — a synthesized material made from four or five graphene layers.
“It was electrifying,” says Todadri, who recalls hearing Ju present the results at a conference. “It set the place alive. And it introduced more questions as to how this could be possible.”
Shortly after, a second team reported similar dual states in the semiconducting crystal molybdenum ditelluride (MoTe2). Interestingly, the conditions in which MoTe2 becomes superconductive happen to be the same conditions in which the material exhibits an exotic “fractional quantum anomalous Hall effect,” or FQAH — a phenomenon in which any electron passing through the material should split into fractions of itself. These fractional quasiparticles are known as “anyons.”
Anyons are entirely different from the two main types of particles that make up the universe: bosons and fermions. Bosons are the extroverted particle type, as they prefer to be together and travel in packs. The photon is the classic example of a boson. In contrast, fermions prefer to keep to themselves, and repel each other if they are too near. Electrons, protons, and neutrons are examples of fermions. Together, bosons and fermions are the two major kingdoms of particles that make up matter in the three-dimensional universe.
Anyons, in contrast, exist only in two-dimensional space. This third type of particle was first predicted in the 1980s, and its name was coined by MIT’s Frank Wilczek, who meant it as a tongue-in-cheek reference to the idea that, in terms of the particle’s behavior, “anything goes.”
A few years after anyons were first predicted, physicists such as Robert Laughlin PhD ’79, Wilczek, and others also theorized that, in the presence of magnetism, the quasiparticles should be able to superconduct.
“People knew that magnetism was usually needed to get anyons to superconduct, and they looked for magnetism in many superconducting materials,” Todadri says. “But superconductivity and magnetism typically do not occur together. So then they discarded the idea.”
But with the recent discovery that the two states can, in fact, peacefully coexist in certain materials, and in MoTe2 in particular, Todadri wondered: Could the old theory, and superconducting anyons, be at play?
Moving past frustration
Todadri and Shi set out to answer that question theoretically, building on their own recent work. In their new study, the team worked out the conditions under which superconducting anyons could emerge in a two-dimensional material. To do so, they applied equations of quantum field theory, which describe how interactions at the quantum scale, such as those between individual anyons, can give rise to macroscopic quantum states, such as superconductivity. The exercise was not an intuitive one, since anyons are known to stubbornly resist moving, let alone superconducting, together.
“When you have anyons in the system, what happens is each anyon may try to move, but it’s frustrated by the presence of other anyons,” Todadri explains. “This frustration happens even if the anyons are extremely far away from each other. And that’s a purely quantum mechanical effect.”
Even so, the team looked for conditions in which anyons might break out of this frustration and move as one macroscopic fluid. Anyons are formed when electrons splinter into fractions of themselves under certain conditions in two-dimensional, single-atom-thin materials, such as MoTe2. Scientists had previously observed that MoTe2 exhibits the FQAH, in which electrons fractionalize, without the help of an external magnetic field.
Todadri and Shi took MoTe2 as a starting point for their theoretical work. They modeled the conditions in which the FQAH phenomenon emerged in MoTe2, and then looked to see how electrons would splinter, and what types of anyons would be produced, as they theoretically increased the number of electrons in the material.
They noted that, depending on the material’s electron density, two types of anyons can form: anyons with either 1/3 or 2/3 the charge of an electron. They then applied equations of quantum field theory to work out how either of the two anyon types would interact, and found that when the anyons are mostly of the 1/3 flavor, they are predictably frustrated, and their movement leads to ordinary metallic conduction. But when anyons are mostly of the 2/3 flavor, this particular fraction encourages the normally stodgy anyons to instead move collectively to form a superconductor, similar to how electrons can pair up and flow in conventional superconductors.
“These anyons break out of their frustration and can move without friction,” Todadri says. “The amazing thing is, this is an entirely different mechanism by which a superconductor can form, but in a way that can be described as Cooper pairs in any other system.”
Their work revealed that superconducting anyons can emerge at certain electron densities. What’s more, they found that when superconducting anyons first emerge, they do so in a totally new pattern of swirling supercurrents that spontaneously appear in random locations throughout the material. This behavior is distinct from conventional superconductors and is an exotic state that experimentalists can look for as a way to confirm the team’s theory. If their theory is correct, it would introduce a new form of superconductivity, through the quantum interactions of anyons.
“If our anyon-based explanation is what is happening in MoTe2, it opens the door to the study of a new kind of quantum matter which may be called ‘anyonic quantum matter,’” Todadri says. “This will be a new chapter in quantum physics.”
This research was supported, in part, by the National Science Foundation.
Prefrontal cortex reaches back into the brain to shape how other regions function

Research illustrates how areas within the brain’s executive control center tailor messages in specific circuits with other brain regions to influence them with information about behavior and feelings.

Vision shapes behavior and, a new study by MIT neuroscientists finds, behavior and internal states shape vision. The new research, published Nov. 25 in Neuron, finds in mice that via specific circuits, the brain’s executive control center, the prefrontal cortex, sends tailored messages to regions governing vision and motion to ensure that their work is shaped by contexts such as the mouse’s level of arousal and whether it is on the move.
“That’s the major conclusion of this paper: There are targeted projections for targeted impact,” says senior author Mriganka Sur, the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and MIT’s Department of Brain and Cognitive Sciences.
Neuroscientists, including Sur’s next-door office neighbor at MIT, Earl K. Miller, have long suggested that the prefrontal cortex (PFC) biases the work of regions further back in the cortex. Tracing of anatomical circuits supports this idea. But in the new study, lead author and Sur Lab postdoc Sofie Ährlund-Richter sought to determine whether the PFC is broadcasting a generic signal or customizes the information it conveys for different downstream regions. She also wanted to take a fresh look at which neurons the PFC talks to, and what impact the information has on how those regions function.
Ährlund-Richter and Sur’s team uncovered several revelations. One was that the two prefrontal areas they focused on, the orbitofrontal cortex (ORB) and the anterior cingulate area (ACA), selectively convey information about arousal and motion to the two downstream regions they studied, the primary visual cortex (VISp) and the primary motor cortex (MOp), to achieve distinct ends. For instance, the more aroused a mouse was, the more ACA prompted VISp to sharpen the focus of visual information it represented, but ORB only chimed in if arousal was very high, and then its input seemed to reduce the sharpness of visual encoding. Ährlund-Richter speculates that as arousal increases, ACA may help the visual cortex focus on resolving what might be salient in what it’s seeing, while ORB might be suppressing focus on unimportant distractors.
“These two PFC subregions are kind of balancing each other,” Ährlund-Richter says. “While one will enhance stimuli that might be more uncertain or more difficult to detect, the other one kind of dampens strong stimuli that might be irrelevant.”
In the study, Ährlund-Richter performed detailed anatomical tracings of the circuits that ACA and ORB forge with VISp and MOp to map their connections. In other experiments, mice were free to run on a wheel as they also watched both structured images or naturalistic movies at varying levels of contrast. Sometimes the mice received little air puffs that made them more aroused. Meanwhile, the neuroscientists tracked the activity of neurons in ACA, ORB, VISp, and MOp. In particular, they eavesdropped on the information flowing through the neural projections (or “axons”) that extended from the prefrontal to the posterior regions.
The anatomical tracings showed that, consistent with some prior studies, the ACA and ORB each connect to many different types of cells in the target regions, not just one cell type. But they do so with distinct geographies. In VISp, for instance, ACA tapped into layer 6, whereas ORB tapped into layer 5.
In their analysis of the transmitted information and neural activity, the scientists could discern several trends. ACA neurons conveyed more visual information than the ORB neurons and were more sensitive to changes in contrast. ACA neurons also scaled with arousal state, while ORB neurons seemed to only care if arousal crossed a high threshold. Meanwhile, when “talking” to MOp, the ACA and ORB each conveyed information about running speed, but with VISp, the regions only conveyed whether the mouse was moving or not. Finally, ACA and ORB also conveyed arousal state and a trickle of visual information to MOp.
To understand what effect this information flow had on visual function, the scientists sometimes blocked the circuits that ACA and ORB forged with VISp to see how that changed what VISp neurons did. That’s how they found that ACA and ORB affected visual encoding in specific and opposite ways, based on the mouse’s arousal level and movement.
“Our data support a model of PFC feedback that is specialized at both the level of PFC subregions and their targets, enabling each region to selectively shape target-specific cortical activity rather than modulating it globally,” the authors wrote in Neuron.
In addition to Sur and Ährlund-Richter, the paper’s other authors are Yuma Osako, Kyle R. Jenks, Emma Odom, Haoyang Huang, and Don B. Arnold.
Funding for the study came from a Wenner-Gren Foundations Postdoctoral Fellowship, the National Institutes of Health, and the Freedom Together Foundation.
“Wait, we have the tech skills to build that”

From robotics to apps like “NerdXing,” senior Julianna Schneider is building technologies to solve problems in her community.

Students can take many possible routes through MIT’s curriculum, which can zigzag through different departments, linking classes and disciplines in unexpected ways. With so many options, charting an academic path can be overwhelming, but a new tool called NerdXing is here to help.
The brainchild of senior Julianna Schneider and other students in the MIT Schwarzman College of Computing Undergraduate Advisory Group (UAG), NerdXing lets students search for a class and see all the other classes students have gone on to take in the past, including options that are off the beaten track.
“I hope that NerdXing will democratize course knowledge for everyone,” Schneider says. “I hope that for anyone who's a freshman and maybe hasn't picked their major yet, that they can go to NerdXing and start with a class that they would maybe never consider — and then discover that, ‘Oh wait, this is perfect for this really particular thing I want to study.’”
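The article doesn’t describe how NerdXing is implemented, but its core idea, surfacing which classes students have gone on to take after a given class, can be sketched as a simple transition count over course histories. Everything below (the input format, function name, and toy course numbers) is a hypothetical illustration:

```python
from collections import Counter, defaultdict

def build_next_course_index(histories):
    """For each course, count which courses students took afterward.

    `histories` is a list of per-student course sequences, in the
    order taken (a hypothetical input format).
    """
    index = defaultdict(Counter)
    for courses in histories:
        for i, course in enumerate(courses):
            for later in courses[i + 1:]:
                index[course][later] += 1
    return index

# Toy data: three students' course sequences.
histories = [
    ["18.01", "18.02", "6.100A"],
    ["18.01", "6.100A", "21M.030"],
    ["18.01", "18.02", "18.03"],
]
index = build_next_course_index(histories)

# Frequent and off-the-beaten-track follow-ons to 18.01.
print(index["18.01"].most_common())
```

Ranking by raw counts surfaces the common paths; the rare entries at the tail of `most_common()` are exactly the unexpected crossings the tool is named for.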
As a student double-majoring in artificial intelligence and decision-making and in mathematics, and doing research in the Biomimetic Robotics Laboratory in the Department of Mechanical Engineering, Schneider knows the benefits of interdisciplinary studies. It’s a part of the reason why she joined the UAG, which advises the MIT Schwarzman College of Computing’s leadership as it advances education and research at the intersections between computing, engineering, the arts, and more.
Through all of her activities, Schneider seeks to make people’s lives better through technology.
“This process of finding a problem in my community and then finding the right technology to solve that — that sort of approach and that framework is what guides all the things I do,” Schneider says. “And even in robotics, the things that I care about are guided by the sort of skills that I think we need to develop to be able to have meaningful applications.”
From Albania to MIT
Before she ever touched a robot or wrote code, Schneider was an accomplished young classical pianist in Albania. When she discovered her passion for robotics at age 13, she applied some of the skills she had learned while playing piano.
“I think on some fundamental level, when I was a pianist, I thought constantly about my motor dynamics as a human being, and how I execute really complex skills but do it over and over again at the top of my ability,” Schneider says. “When it came to robotics, I was building these robotic arms that also had to operate at the top of their ability every time and do really complex tasks. It felt kind of similar to me, like a fun crossover.”
Schneider joined her high school’s robotics team as a middle schooler, and she was so immediately enamored that she ended up taking over most of the coding and building of the team’s robot. She went on to win 14 regional and national awards across the three teams she led throughout middle and high school. It was clear to her that she’d found her calling.
NerdXing wasn’t Schneider’s first experience building new technology. At just 16, she built an app meant to connect English-speaking volunteers from her international school in Tirana, Albania, to local charities that only posted jobs in Albanian. By last year, the platform, called VoluntYOU, had 18 ambassadors across four continents. It has enabled volunteers to give out more than 2,000 burritos in Reno, Nevada; register hundreds of signatures to support women’s rights legislation in Albania; and help with administering Covid-19 vaccines to more than 1,200 individuals a day in Italy.
Schneider says her experience at an international school encouraged her to recognize problems and solutions all around her.
“When I enter a new community and I can immediately be like, ‘Oh wait, if we had this tool, that would be so cool and that would help all these people,’ I think that’s just a derivative of having grown up in a place where you hear about everyone’s super different life experiences,” she says.
Schneider describes NerdXing as a continuation of many of the skills she picked up while building VoluntYOU.
“They were both motivated by seeing a challenge where I thought, ‘Wait, we have the tech skills to build that. This is something that I can envision the solution to.’ And then I wanted to actually go and make that a reality,” Schneider says.
Robotics with a positive impact
At MIT, Schneider started working in the Biomimetic Robotics Laboratory of Professor Sangbae Kim, where she has now participated in three research projects, one of which she’s co-authoring a paper on. She’s part of a team that tests how robots, including the famous back-flipping mini cheetah, move, in order to see how they could complement humans in high-stakes scenarios.
Most of her work has revolved around crafting controllers, including one hybrid-learning and model-based controller that is well-suited to robots with limited onboard computing capacity. It would allow the robot to be used in regions with less access to technology.
“It’s not just doing technology for technology's sake, but because it will bridge out into the world and make a positive difference. I think legged robotics have some of the best potential to actually be a robotic partner to human beings in the scenarios that are most high-stakes,” Schneider says.
Schneider hopes to further robotic capabilities so she can find applications that will service communities around the world. One of her goals is to help create tools that allow a surgeon to operate on a patient a long distance away.
To take a break from academics, Schneider has channeled her love of the arts into MIT’s vibrant social dancing scene. This year, she’s especially excited about country line dancing events where the music comes on and students have to guess the choreography.
“I think it's a really fun way to make friends and to connect with the community,” she says.
Post-COP30, more aggressive policies needed to cap global warming at 1.5 C

Global Change Outlook report for 2025 shows how accelerated action can reduce climate risks and improve sustainability outcomes, while highlighting potential geopolitical hurdles.

The latest United Nations Climate Change Conference (COP30) concluded in November without a roadmap to phase out fossil fuels and without significant progress in strengthening national pledges to reduce climate-altering greenhouse gas emissions. In aggregate, today’s climate policies remain far too unambitious to meet the Paris Agreement’s goal of capping global warming at 1.5 degrees Celsius, setting the world on course to experience more frequent and intense storms, flooding, droughts, wildfires, and other climate impacts. A global policy regime aligned with the 1.5 C target would almost certainly reduce the severity of those impacts.
In the “2025 Global Change Outlook,” researchers at the MIT Center for Sustainability Science and Strategy (CS3) compare the consequences of these two approaches to climate policy through modeled projections of critical natural and societal systems under two scenarios. The Current Trends scenario represents the researchers’ assessment of current measures for reducing greenhouse gas (GHG) emissions; the Accelerated Actions scenario is a credible pathway to stabilizing the climate at a global mean surface temperature of 1.5 C above preindustrial levels, in which countries impose more aggressive GHG emissions-reduction targets.
By quantifying the risks posed by today’s climate policies — and the extent to which accelerated climate action aligned with the 1.5 C goal could reduce them — the “Global Change Outlook” aims to clarify what’s at stake for environments and economies around the world. Here, we summarize the report’s key findings at the global level; regional details can also be accessed in several sections and through MIT CS3’s interactive global visualization tool.
Emerging headwinds for global climate action
Projections under Current Trends show higher GHG emissions than in our previous 2023 outlook, indicating reduced action on GHG emissions mitigation in the upcoming decade. The difference, roughly equivalent to the annual emissions from Brazil or Japan, is driven by current geopolitical events.
Additional analysis in this report indicates that global GHG emissions in 2050 could be 10 percent higher than they would be under Current Trends if regional rivalries triggered by U.S. tariff policy prompt other regions to weaken their climate regulations. In that case, the world would see virtually no emissions reduction in the next 25 years.
Energy and electricity projections
Between 2025 and 2050, global energy consumption rises by 17 percent under Current Trends, with a nearly nine-fold increase in wind and solar. Under Accelerated Actions, global energy consumption declines by 16 percent, with a nearly 13-fold increase in wind and solar, driven by improvements in energy efficiency, wider use of electricity, and demand response. In both Current Trends and Accelerated Actions, global electricity consumption increases substantially (by 90 percent and 100 percent, respectively), with generation from low-carbon sources becoming a dominant source of power, though Accelerated Actions has a much larger share of renewables.
“Achieving long-term climate stabilization goals will require more ambitious policy measures that reduce fossil-fuel dependence and accelerate the energy transition toward low-carbon sources in all regions of the world. Our Accelerated Actions scenario provides a pathway for scaling up global climate ambition,” says MIT CS3 Deputy Director Sergey Paltsev, co-lead author of the report.
Greenhouse gas emissions and climate projections
Under Current Trends, global anthropogenic (human-caused) GHG emissions decline by 10 percent between 2025 and 2050, but start to rise again later in the century; under Accelerated Actions, however, they fall by 60 percent by 2050. Of the two scenarios, only the latter could put the world on track to achieve long-term climate stabilization.
Median projections for global warming reach 1.79, 2.74, and 3.72 degrees C by 2050, 2100, and 2150, respectively, relative to the average global mean surface temperature (GMST) for the years 1850-1900, under Current Trends, and 1.62, 1.56, and 1.50 C under Accelerated Actions. Median projections for global precipitation show increases from 2025 levels of 0.04, 0.11, and 0.18 millimeters per day in 2050, 2100, and 2150 under Current Trends, and 0.03, 0.04, and 0.03 mm/day for those years under Accelerated Actions.
“Our projections demonstrate that aggressive cuts in GHG emissions can lead to substantial reductions in the upward trends of GMST, as well as global precipitation,” says CS3 deputy director C. Adam Schlosser, co-lead author of the outlook. “These reductions to both climate warming and acceleration of the global hydrologic cycle lower the risks of damaging impacts, particularly toward the latter half of this century.”
Implications for sustainability
The report’s modeled projections imply significantly different risk levels under the two scenarios for water availability, biodiversity, air quality, human health, economic well-being, and other sustainability indicators.
Among the key findings: Policies that align with Accelerated Actions could yield substantial co-benefits for water availability, biodiversity, air quality, and health. For example, combining Accelerated Actions-aligned climate policies with biodiversity targets, or with air-quality targets, could achieve biodiversity and air quality/health goals more efficiently and cost-effectively than a more siloed approach. The outlook’s analysis of the global economy under Current Trends suggests that decision-makers need to account for climate impacts outside their home region and the resilience of global supply chains.
Finally, CS3’s new data-visualization platform provides efficient, screening-level mapping of current and future climate, socioeconomic, and demographic-related conditions and changes — including global mapping for many of the model outputs featured in this report.
“Our comparison of outcomes under Current Trends and Accelerated Actions scenarios highlights the risks of remaining on the world’s current emissions trajectory and the benefits of pursuing a much more aggressive strategy,” says CS3 Director Noelle Selin, a co-author of the report and a professor in the Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences at MIT. “We hope that our risk-benefit analysis will help inform decision-makers in government, industry, academia, and civil society as they confront sustainability-relevant challenges.”
A “scientific sandbox” lets researchers explore the evolution of vision systems

The AI-powered tool could inform the design of better sensors and cameras for robots or autonomous vehicles.

Why did humans evolve the eyes we have today?
While scientists can’t go back in time to study the environmental pressures that shaped the evolution of the diverse vision systems that exist in nature, a new computational framework developed by MIT researchers allows them to explore this evolution in artificial intelligence agents.
The framework they developed, in which embodied AI agents evolve eyes and learn to see over many generations, is like a “scientific sandbox” that allows researchers to recreate different evolutionary trees. The user does this by changing the structure of the world and the tasks AI agents complete, such as finding food or telling objects apart.
This allows them to study why one animal may have evolved simple, light-sensitive patches as eyes, while another has complex, camera-type eyes.
The researchers’ experiments with this framework showcase how tasks drove eye evolution in the agents. For instance, they found that navigation tasks often led to the evolution of compound eyes with many individual units, like the eyes of insects and crustaceans.
On the other hand, if agents focused on object discrimination, they were more likely to evolve camera-type eyes with irises and retinas.
This framework could enable scientists to probe “what-if” questions about vision systems that are difficult to study experimentally. It could also guide the design of novel sensors and cameras for robots, drones, and wearable devices that balance performance with real-world constraints like energy efficiency and manufacturability.
“While we can never go back and figure out every detail of how evolution took place, in this work we’ve created an environment where we can, in a sense, recreate evolution and probe the environment in all these different ways. This method of doing science opens the door to a lot of possibilities,” says Kushagra Tiwary, a graduate student at the MIT Media Lab and co-lead author of a paper on this research.
He is joined on the paper by co-lead author and fellow graduate student Aaron Young; graduate student Tzofi Klinghoffer; former postdoc Akshat Dave, who is now an assistant professor at Stony Brook University; Tomaso Poggio, the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute, and co-director of the Center for Brains, Minds, and Machines; co-senior authors Brian Cheung, a postdoc in the Center for Brains, Minds, and Machines and an incoming assistant professor at the University of California San Francisco; and Ramesh Raskar, associate professor of media arts and sciences and leader of the Camera Culture Group at MIT; as well as others at Rice University and Lund University. The research appears today in Science Advances.
Building a scientific sandbox
The paper began as a conversation among the researchers about discovering new vision systems that could be useful in different fields, like robotics. To test their “what-if” questions, the researchers decided to use AI to explore the many evolutionary possibilities.
“What-if questions inspired me when I was growing up to study science. With AI, we have a unique opportunity to create these embodied agents that allow us to ask the kinds of questions that would usually be impossible to answer,” Tiwary says.
To build this evolutionary sandbox, the researchers took all the elements of a camera, like the sensors, lenses, apertures, and processors, and converted them into parameters that an embodied AI agent could learn.
They used those building blocks as the starting point for an algorithmic learning mechanism an agent would use as it evolved eyes over time.
“We couldn’t simulate the entire universe atom-by-atom. It was challenging to determine which ingredients we needed, which ingredients we didn’t need, and how to allocate resources over those different elements,” Cheung says.
In their framework, this evolutionary algorithm can choose which elements to evolve based on the constraints of the environment and the task of the agent.
Each environment has a single task, such as navigation, food identification, or prey tracking, designed to mimic real visual tasks animals must overcome to survive. The agents start with a single photoreceptor that looks out at the world and an associated neural network model that processes visual information.
Then, over each agent’s lifetime, it is trained using reinforcement learning, a trial-and-error technique where the agent is rewarded for accomplishing the goal of its task. The environment also incorporates constraints, like a certain number of pixels for an agent’s visual sensors.
“These constraints drive the design process, the same way we have physical constraints in our world, like the physics of light, that have driven the design of our own eyes,” Tiwary says.
Over many generations, agents evolve different elements of vision systems that maximize rewards.
Their framework uses a genetic encoding mechanism to computationally mimic evolution, where individual genes mutate to control an agent’s development.
For instance, morphological genes capture how the agent views the environment and control eye placement; optical genes determine how the eye interacts with light and dictate the number of photoreceptors; and neural genes control the learning capacity of the agents.
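As a rough illustration of this genetic-encoding idea, one evolutionary loop might look like the sketch below. This is not the authors’ implementation: the gene names, mutation rules, and the stand-in fitness function (which replaces an agent’s actual lifetime of reinforcement learning) are all assumptions made for the example.

```python
import random

random.seed(0)  # reproducible toy run

def random_genome():
    """Hypothetical genome: each gene controls one aspect of the evolving agent."""
    return {
        "eye_placement": random.uniform(0.0, 1.0),  # morphological gene
        "num_photoreceptors": 1,                    # optical gene
        "network_width": 8,                         # neural gene
    }

def mutate(genome, rate=0.2):
    """Randomly perturb individual genes, mimicking mutation across generations."""
    child = dict(genome)
    if random.random() < rate:
        child["num_photoreceptors"] += 1            # grow the eye
    if random.random() < rate:
        child["network_width"] *= 2                 # add learning capacity
    return child

def fitness(genome):
    """Stand-in for a lifetime of reinforcement learning on a task.

    Rewards more photoreceptors up to a fixed pixel budget (the kind of
    physical constraint described in the article), with only modest
    gains from a bigger brain.
    """
    sensing = min(genome["num_photoreceptors"], 16)
    brain = min(genome["network_width"], 64)
    return sensing * (1 + 0.01 * brain)

def evolve(generations=50, pop_size=20):
    """Select the fittest half each generation and refill with mutated offspring."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]           # selection
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

best = evolve()
```

Because the fitness function caps both sensing and brain size, the loop reproduces the article’s qualitative point: genomes stop benefiting from extra capacity once the physical constraint binds.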
Testing hypotheses
When the researchers set up experiments in this framework, they found that tasks had a major influence on the vision systems the agents evolved.
For instance, agents that were focused on navigation tasks developed eyes designed to maximize spatial awareness through low-resolution sensing, while agents tasked with detecting objects developed eyes focused more on frontal acuity, rather than peripheral vision.
Another experiment indicated that a bigger brain isn’t always better when it comes to processing visual information. Only so much visual information can go into the system at a time, based on physical constraints like the number of photoreceptors in the eyes.
“At some point a bigger brain doesn’t help the agents at all, and in nature that would be a waste of resources,” Cheung says.
In the future, the researchers want to use this simulator to explore the best vision systems for specific applications, which could help scientists develop task-specific sensors and cameras. They also want to integrate large language models (LLMs) into their framework to make it easier for users to ask “what-if” questions and study additional possibilities.
“There’s a real benefit that comes from asking questions in a more imaginative way. I hope this inspires others to create larger frameworks, where instead of focusing on narrow questions that cover a specific area, they are looking to answer questions with a much wider scope,” Cheung says.
This work was supported, in part, by the Center for Brains, Minds, and Machines and the Defense Advanced Research Projects Agency (DARPA) Mathematics for the Discovery of Algorithms and Architectures (DIAL) program.
New study suggests a way to rejuvenate the immune system
Stimulating the liver to produce some of the signals of the thymus can reverse age-related declines in T-cell populations and enhance response to vaccination.
As people age, their immune system function declines. T cell populations become smaller and can’t react to pathogens as quickly, making people more susceptible to a variety of infections.
To try to overcome that decline, researchers at MIT and the Broad Institute have found a way to temporarily program cells in the liver to improve T-cell function. This reprogramming can compensate for the age-related decline of the thymus, where T cell maturation normally occurs.
Using mRNA to deliver three key factors that usually promote T-cell survival, the researchers were able to rejuvenate the immune systems of mice. Aged mice that received the treatment showed much larger and more diverse T cell populations in response to vaccination, and they also responded better to cancer immunotherapy treatments.
If developed for use in patients, this type of treatment could help people lead healthier lives as they age, the researchers say.
“If we can restore something essential like the immune system, hopefully we can help people stay free of disease for a longer span of their life,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, who has joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering.
Zhang, who is also an investigator at the McGovern Institute for Brain Research at MIT, a core institute member at the Broad Institute of MIT and Harvard, an investigator in the Howard Hughes Medical Institute, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, is the senior author of the new study. Former MIT postdoc Mirco Friedrich is the lead author of the paper, which appears today in Nature.
A temporary factory
The thymus, a small organ located in front of the heart, plays a critical role in T-cell development. Within the thymus, immature T cells go through a checkpoint process that ensures a diverse repertoire of T cells. The thymus also secretes cytokines and growth factors that help T cells to survive.
However, starting in early adulthood, the thymus begins to shrink. This process, known as thymic involution, leads to a decline in the production of new T cells. By the age of approximately 75, the thymus is greatly reduced.
“As we get older, the immune system begins to decline. We wanted to think about how can we maintain this kind of immune protection for a longer period of time, and that's what led us to think about what we can do to boost immunity,” Friedrich says.
Previous work on rejuvenating the immune system has focused on delivering T cell growth factors into the bloodstream, but that can have harmful side effects. Researchers are also exploring the possibility of using transplanted stem cells to help regrow functional tissue in the thymus.
The MIT team took a different approach: They wanted to see if they could create a temporary “factory” in the body that would generate the T-cell-stimulating signals that are normally produced by the thymus.
“Our approach is more of a synthetic approach,” Zhang says. “We're engineering the body to mimic thymic factor secretion.”
For their factory location, they settled on the liver, for several reasons. First, the liver has a high capacity for producing proteins, even in old age. Also, it’s easier to deliver mRNA to the liver than to most other organs of the body. The liver was also an appealing target because all of the body’s circulating blood has to flow through it, including T cells.
To create their factory, the researchers identified three immune cues that are important for T-cell maturation. They encoded these three factors into mRNA sequences that could be delivered by lipid nanoparticles. When injected into the bloodstream, these particles accumulate in the liver and the mRNA is taken up by hepatocytes, which begin to manufacture the proteins encoded by the mRNA.
The factors that the researchers delivered are DLL1, FLT-3, and IL-7, which help immature progenitor T cells mature into fully differentiated T cells.
Immune rejuvenation
Tests in mice revealed a variety of beneficial effects. First, the researchers injected the mRNA particles into 18-month-old mice, equivalent to humans in their 50s. Because mRNA is short-lived, the researchers gave the mice multiple injections over four weeks to maintain a steady production by the liver.
After this treatment, T cell populations showed significant increases in size and function.
The researchers then tested whether the treatment could enhance the animals’ response to vaccination. They vaccinated the mice with ovalbumin, a protein found in egg whites that is commonly used to study how the immune system responds to a specific antigen. In 18-month-old mice that received the mRNA treatment before vaccination, the researchers found that the population of cytotoxic T-cells specific to ovalbumin doubled, compared to mice of the same age that did not receive the mRNA treatment.
The mRNA treatment can also boost the immune system’s response to cancer immunotherapy, the researchers found. They delivered the mRNA treatment to 18-month-old mice, who were then implanted with tumors and treated with a checkpoint inhibitor drug. This drug, which targets the protein PD-L1, is designed to help take the brakes off the immune system and stimulate T cells to attack tumor cells.
Mice that received the treatment showed much higher survival rates and longer lifespans than those that received the checkpoint inhibitor drug but not the mRNA treatment.
The researchers found that all three factors were necessary to induce this immune enhancement; none could achieve all aspects of it on their own. They now plan to study the treatment in other animal models and to identify additional signaling factors that may further enhance immune system function. They also hope to study how the treatment affects other immune cells, including B cells.
Other authors of the paper include Julie Pham, Jiakun Tian, Hongyu Chen, Jiahao Huang, Niklas Kehl, Sophia Liu, Blake Lash, Fei Chen, Xiao Wang, and Rhiannon Macrae.
The research was funded, in part, by the Howard Hughes Medical Institute, the K. Lisa Yang Brain-Body Center, part of the Yang Tan Collective at MIT, Broad Institute Programmable Therapeutics Gift Donors, the Pershing Square Foundation, J. and P. Poitras, and an EMBO Postdoctoral Fellowship.
3 Questions: Using computation to study the world’s best single-celled chemists
Assistant Professor Yunha Hwang utilizes microbial genomes to examine the language of biology. Her appointment reflects MIT’s commitment to exploring the intersection of genetics research and AI.
Today, out of an estimated 1 trillion species on Earth, 99.999 percent are considered microbial — bacteria, archaea, viruses, and single-celled eukaryotes. For much of our planet’s history, microbes ruled the Earth, able to live and thrive in the most extreme of environments. Researchers have only just begun in the last few decades to contend with the diversity of microbes — it’s estimated that less than 1 percent of known genes have laboratory-validated functions. Computational approaches offer researchers the opportunity to strategically parse this truly astounding amount of information.
An environmental microbiologist and computer scientist by training, new MIT faculty member Yunha Hwang is interested in the novel biology revealed by the most diverse and prolific life form on Earth. In a shared faculty position as the Samuel A. Goldblith Career Development Professor in the Department of Biology, as well as an assistant professor at the Department of Electrical Engineering and Computer Science and the MIT Schwarzman College of Computing, Hwang is exploring the intersection of computation and biology.
Q: What drew you to research microbes in extreme environments, and what are the challenges in studying them?
A: Extreme environments are great places to look for interesting biology. I wanted to be an astronaut growing up, and the closest thing to astrobiology is examining extreme environments on Earth. And the only thing that lives in those extreme environments are microbes. During a sampling expedition that I took part in off the coast of Mexico, we discovered a colorful microbial mat about 2 kilometers underwater that flourished because the bacteria breathed sulfur instead of oxygen — but none of the microbes I was hoping to study would grow in the lab.
The biggest challenge in studying microbes is that a majority of them cannot be cultivated, which means that the only way to study their biology is through a method called metagenomics. My latest work is genomic language modeling. We’re hoping to develop a computational system so we can probe the organism as much as possible “in silico,” just using sequence data. A genomic language model is technically a large language model, except the language is DNA as opposed to human language. It’s trained in a similar way, just in biological language as opposed to English or French. If our objective is to learn the language of biology, we should leverage the diversity of microbial genomes. Even though we have a lot of data, and even as more samples become available, we’ve just scratched the surface of microbial diversity.
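One common way to treat DNA as a language is to split a sequence into overlapping k-mer tokens before feeding it to a model, much as text is tokenized into words or subwords. The sketch below uses that widely used convention purely for illustration; it is an assumption, not necessarily the tokenization Hwang's models use:

```python
def kmer_tokenize(seq, k=3):
    """Return overlapping k-mers, e.g. 'ATGCGT' -> ['ATG', 'TGC', 'GCG', 'CGT']."""
    seq = seq.upper()
    assert set(seq) <= set("ACGT"), "expected a DNA sequence"
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def build_vocab(tokens):
    """Map each distinct k-mer to an integer id, as an LM embedding layer expects."""
    return {tok: i for i, tok in enumerate(sorted(set(tokens)))}

tokens = kmer_tokenize("ATGCGTAC")
vocab = build_vocab(tokens)
ids = [vocab[t] for t in tokens]
print(tokens)  # ['ATG', 'TGC', 'GCG', 'CGT', 'GTA', 'TAC']
print(ids)
```

From here, the integer ids can be fed to any standard language-model architecture; the "language" changes, but the training recipe stays the same, which is exactly the point made above.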
Q: Given how diverse microbes are and how little we understand about them, how can studying microbes in silico, using genomic language modeling, advance our understanding of the microbial genome?
A: A genome is many millions of letters. A human cannot possibly look at that and make sense of it. We can program a machine, though, to segment data into pieces that are useful. That’s sort of how bioinformatics works with a single genome. But if you’re looking at a gram of soil, which can contain thousands of unique genomes, that’s just too much data to work with — a human and a computer together are necessary in order to grapple with that data.
During my PhD and master’s degree, we were only just discovering new genomes and new lineages that were so different from anything that had been characterized or grown in the lab. These were things that we just called “microbial dark matter.” When there are a lot of uncharacterized things, that’s where machine learning can be really useful, because we’re just looking for patterns — but that’s not the end goal. What we hope to do is to map these patterns to evolutionary relationships between each genome, each microbe, and each instance of life.
Previously, we’ve been thinking about proteins as a standalone entity — that gets us to a decent degree of information because proteins are related by homology, and therefore things that are evolutionarily related might have a similar function.
What is known in microbiology is that proteins are encoded in genomes, and the context in which a protein is embedded (what regions come before and after it) is evolutionarily conserved, especially if there is functional coupling. This makes total sense, because when you have three proteins that need to be expressed together because they form a unit, you might want them located right next to each other.
What I want to do is incorporate more of that genomic context in the way that we search for and annotate proteins and understand protein function, so that we can go beyond sequence or structural similarity to add contextual information to how we understand proteins and hypothesize about their functions.
Q: How can your research be applied to harnessing the functional potential of microbes?
A: Microbes are possibly the world’s best chemists. Leveraging microbial metabolism and biochemistry will lead to more sustainable and more efficient methods for producing new materials, new therapeutics, and new types of polymers.
But it’s not just about efficiency — microbes are doing chemistry we don’t even know how to think about. Understanding how microbes work, and being able to understand their genomic makeup and their functional capacity, will also be really important as we think about how our world and climate are changing. A majority of carbon sequestration and nutrient cycling is undertaken by microbes; if we don’t understand how a given microbe is able to fix nitrogen or carbon, then we will face difficulties in modeling the nutrient fluxes of the Earth.
On the more therapeutic side, infectious diseases are a real and growing threat. Understanding how microbes behave in diverse environments relative to the rest of our microbiome is really important as we think about the future and combating microbial pathogens.
MIT community members elected to the National Academy of Inventors for 2025
Professors Ahmad Bahai and Kripa Varanasi, plus seven additional MIT alumni, are honored for highly impactful inventions.
The National Academy of Inventors (NAI) has named nine MIT affiliates as members of the 2025 class of NAI Fellows. They include Ahmad Bahai, an MIT professor of the practice in the Department of Electrical Engineering and Computer Science (EECS), and Kripa K. Varanasi, MIT professor in the Department of Mechanical Engineering, as well as seven additional MIT alumni. NAI fellowship is the highest professional distinction awarded solely to inventors.
“NAI Fellows are a driving force within the innovation ecosystem, and their contributions across scientific disciplines are shaping the future of our world,” says Paul R. Sanberg, fellow and president of the National Academy of Inventors. “We are thrilled to welcome this year’s class of fellows to the academy.”
This year’s 169 U.S. fellows represent 127 universities, government agencies, and research institutions across 40 U.S. states. Together, the 2025 class holds more than 5,300 U.S. patents and includes recipients of the Nobel Prize, the National Medal of Science, and the National Medal of Technology and Innovation, as well as members of the national academies of Sciences, Engineering, and Medicine, among others.
Ahmad Bahai is professor of the practice in EECS. He was an adjunct professor at Stanford University from 2017 to 2022 and a professor in residence at the University of California at Berkeley from 2001 to 2010. Bahai has held a number of leadership roles, including director of research labs and chief technology officer of National Semiconductor, technical manager of a research group at Bell Laboratories, and founder of Algorex, a communication and acoustic integrated circuit and system company, which was acquired by National Semiconductor.
Currently, Bahai is the chief technology officer of Texas Instruments and director of Kilby Labs and corporate research, and is a member of the Industrial Advisory Committee of the CHIPS Act. Bahai is an IEEE Fellow and an AIMBE Fellow; he has authored over 80 publications in IEEE/IEE journals and holds more than 40 patents related to systems and circuits.
He holds an MS in electrical engineering from Imperial College London and a doctorate degree in electrical engineering from UC Berkeley.
Kripa K. Varanasi SM ’02, PhD ’04, professor of mechanical engineering, is widely recognized for his significant contributions in the field of interfacial science, thermal fluids, electrochemical systems, advanced materials, and manufacturing. A member of the MIT faculty since 2009, he leads the interdisciplinary Varanasi Research Group, which focuses on understanding physico-chemical and biological phenomena at the interfaces of matter. His group develops innovative surfaces, materials, devices, processes, and associated technologies that improve efficiency and performance across industries, including energy, decarbonization, life sciences, water, agriculture, transportation, and consumer products.
Varanasi has also scaled basic research into practical, market-ready technologies. He has co-founded six companies, including AgZen, Alsym Energy, CoFlo Medical, Dropwise, Infinite Cooling, and LiquiGlide, and his companies have been widely recognized for driving innovation across a range of industries. Throughout his career, Varanasi has been recognized for excellence in research and mentorship. Honors include the National Science Foundation CAREER Award, DARPA Young Faculty Award, SME Outstanding Young Manufacturing Engineer Award, ASME’s Bergles-Rohsenow Heat Transfer Award and Gustus L. Larson Memorial Award, Boston Business Journal’s 40 Under 40, and MIT’s Frank E. Perkins Award for Excellence in Graduate Advising.
Varanasi earned his undergraduate degree in mechanical engineering from the Indian Institute of Technology Madras, and his master’s degree and PhD from MIT. Prior to joining the faculty, he served as lead researcher and project leader at the GE Global Research Center, where he received multiple internal awards for innovation, leadership, and technical excellence. He was recently named faculty director of the Deshpande Center for Technological Innovation.
The seven additional MIT alumni who were elected to the NAI for 2025 include:
The NAI Fellows program was founded in 2012 and has grown to include 2,253 distinguished researchers and innovators, who hold over 86,000 U.S. patents and 20,000 licensed technologies. Collectively, NAI Fellows’ innovations have generated an estimated $3.8 trillion in revenue and 1.4 million jobs.
The 2025 class will be honored and presented with their medals by a senior official of the United States Patent and Trademark Office at the NAI 15th Annual Conference on June 4, 2026, in Los Angeles.
RNA editing study finds many ways for neurons to diversify
Tracking how fruit fly motor neurons edit their RNA, neurobiologists cataloged hundreds of target sites and varying editing rates, finding many edits altered communication- and function-related proteins.
All starting from the same DNA, neurons ultimately take on individual characteristics in the brain and body. Differences in which genes they transcribe into RNA help determine which type of neuron they become, and from there, a new MIT study shows, individual cells edit a selection of sites in those RNA transcripts, each at their own widely varying rates.
The new study surveyed the whole landscape of RNA editing in more than 200 individual cells commonly used as models of fundamental neural biology: tonic and phasic motor neurons of the fruit fly. One of the main findings is that most sites were edited at rates between the “all-or-nothing” extremes many scientists have assumed based on more limited studies in mammals, says senior author Troy Littleton, the Menicon Professor in the MIT departments of Biology and Brain and Cognitive Sciences. The resulting dataset and open-access analyses, recently published in eLife, set the table for discoveries about how RNA editing affects neural function and what enzymes implement those edits.
“We have this ‘alphabet’ now for RNA editing in these neurons,” Littleton says. “We know which genes are edited in these neurons, so we can go in and begin to ask questions as to what is that editing doing to the neuron at the most interesting targets.”
Andres Crane PhD ’24, who earned his doctorate in Littleton’s lab based on this work, is the study’s lead author.
From a genome of about 15,000 genes, Littleton and Crane’s team found, the neurons made hundreds of edits in transcripts from hundreds of genes. For example, the team documented “canonical” edits of 316 sites in 210 genes. Canonical means that the edits were made by the well-studied enzyme ADAR, which is also found in mammals, including humans. Of the 316 edits, 175 occurred in regions that encode the contents of proteins. Analysis indeed suggested 60 are likely to significantly alter amino acids. But they also found 141 more editing sites in areas that don’t code for proteins but instead affect their production, which means they could affect protein levels, rather than their contents.
The team also found many “non-canonical” edits that ADAR didn’t make. That’s important, Littleton says, because that information could aid in discovering more enzymes involved in RNA editing, potentially across species. That, in turn, could expand the possibilities for future genetic therapies.
“In the future, if we can begin to understand in flies what the enzymes are that make these other non-canonical edits, it would give us broader coverage for thinking about doing things like repairing human genomes where a mutation has broken a protein of interest,” Littleton says.
Moreover, by looking specifically at fly larvae, the team found many edits that were specific to juveniles, versus adults, suggesting potential significance during development. And because they looked at full gene transcripts of individual neurons, the team was also able to find editing targets that had not been cataloged before.
Widely varying rates
Some of the most heavily edited RNAs were from genes that make critical contributions to neural circuit communication such as neurotransmitter release, and the channels that cells form to regulate the flow of chemical ions that vary their electrical properties. The study identified 27 sites in 18 genes that were edited more than 90 percent of the time.
Yet neurons sometimes varied quite widely in whether they would edit a site, which suggests that even neurons of the same type can still take on significant degrees of individuality.
“Some neurons displayed ~100 percent editing at certain sites, while others displayed no editing for the same target,” the team wrote in eLife. “Such dramatic differences in editing rate at specific target sites is likely to contribute to the heterogeneous features observed within the same neuronal population.”
On average, any given site was edited about two-thirds of the time, and most sites were edited within a range well between all-or-nothing extremes.
“The vast majority of editing events we found were somewhere between 20 percent and 70 percent,” Littleton says. “We were seeing mixed ratios of edited and unedited transcripts within a single cell.”
Also, the more a gene was expressed, the less editing it experienced, suggesting that ADAR can keep up with only so many of its editing opportunities.
Potential impacts on function
One of the key questions the data enables scientists to ask is what impact RNA edits have on the function of the cells. In a 2023 study, Littleton’s lab began to tackle this question by looking at just two edits they found in the most heavily edited gene: complexin. Complexin’s protein product restrains release of the neurotransmitter glutamate, making it a key regulator of neural circuit communication. They found that by mixing and matching edits, neurons produced up to eight different versions of the protein with significant effects on their glutamate release and synaptic electrical current. But in the new study, the team reports 13 more edits in complexin that are yet to be studied.
Littleton says he’s intrigued by another key protein, called Arc1, that the study shows experienced a non-canonical edit. Arc is a vitally important gene in “synaptic plasticity,” which is the property neurons have of adjusting the strength or presence of their “synapse” circuit connections in response to nervous system activity. Such neural nimbleness is hypothesized to be the basis of how the brain can responsively encode new information in learning and memory. Notably, Arc1 editing fails to occur in fruit flies that model Alzheimer’s disease.
Littleton says the lab is now working hard to understand how the RNA edits they’ve documented affect function in the fly motor neurons.
In addition to Crane and Littleton, the study’s other authors are Michiko Inouye and Suresh Jetti.
The National Institutes of Health, The Freedom Together Foundation, and The Picower Institute for Learning and Memory provided support for the study.
In February, President Sally Kornbluth announced the appointment of Professor Angela Koehler as faculty director of the MIT Health and Life Sciences Collaborative (MIT HEALS), with professors Iain Cheeseman and Katharina Ribbeck as associate directors. Since then, the leadership team has moved quickly to shape HEALS into an ambitious, community-wide platform for catalyzing research, translation, and education at MIT and beyond — at a moment when advances in computation, biology, and engineering are redefining what’s possible in health and the life sciences.
Rooted in MIT’s long-standing strengths in foundational discovery, convergence, and translational science, HEALS is designed to foster connections across disciplines — linking life scientists and engineers with clinicians, computational scientists, humanists, operations researchers, and designers. The initiative builds on a simple premise: that solving today’s most pressing challenges in health and life sciences requires bold thinking, deep collaboration, and sustained investment in people.
“HEALS is an opportunity to rethink how we support talent, unlock scientific ideas, and translate them into impact,” says Koehler, the Charles W. and Jennifer C. Johnson Professor in the Department of Biological Engineering and associate director of the Koch Institute for Integrative Cancer Research. “We’re building on MIT’s best traditions — convergence, experimentation, and entrepreneurship — while opening new channels for interdisciplinary research and community building.”
Koehler says her own path has been shaped by that same belief in convergence. Early collaborations between chemists, engineers, and clinicians convinced her that bringing diverse people together — what she calls “induced proximity” — can spark discoveries that wouldn’t emerge in isolation.
A culture of connection
Since stepping into their roles, the HEALS leadership team has focused on building a collaborative ecosystem that enables researchers to take on bold, interdisciplinary challenges in health and life sciences. Rather than creating a new center or department, their approach emphasizes connecting the MIT community across existing boundaries — disciplinary, institutional, and cultural.
“We want to fund science that wouldn’t otherwise happen — projects that bridge gaps, open new doors, and bring researchers together in ways that are genuinely constructive and collaborative,” says Iain Cheeseman, the Herman and Margaret Sokol Professor of Biology, core member of the Whitehead Institute for Biomedical Research, and associate head of the Department of Biology.
That vision is already taking shape through initiatives like the MIT HEALS seed grants, which support bold new collaborations between MIT principal investigators; the MIT–Mass General Brigham Seed Program, which supports joint research between investigators at MIT and clinicians at MGB; and the Biswas Postdoctoral Fellowship Program, designed to bring top early-career researchers to MIT to pursue cross-cutting work in areas such as computational biology, biomedical engineering, and therapeutic discovery.
The leadership team sees these programs not as endpoints, but as starting points for a broader shift in how MIT supports health and life sciences research.
For Cheeseman, whose lab is working to build on their fundamental discoveries on how human cells function to impact cancer treatment and rare human disease, HEALS represents a way to connect deep biological discovery with the translational insights emerging from MIT’s engineering and clinical communities. He puts it simply: “To me, this is deeply personal, recognizing the limitations that existed for my own work and hoping to unlock these possibilities for researchers across MIT.”
Training the next generation
Ribbeck, a biologist focused on mucus and microbial ecosystems, sees HEALS as a way to train scientists who are as comfortable discussing patient needs as they are conducting experiments at the bench. She emphasizes that preparing the next generation of researchers means equipping them with fluency in areas like clinical language, regulatory processes, and translational pathways — skills many current investigators lack. “Many PIs, although they do clinical research, may not have dedicated support for taking their findings to the next level — how to design a clinical trial, or what regulatory questions need to be addressed — reflecting a broader structural gap in translational training,” she says.
A central focus for the HEALS leadership team is building new models for training researchers to move fluidly between disciplines, institutions, and methods of translation. Ribbeck and Koehler stress the importance of giving students and postdocs hands-on opportunities that connect research with real-world experience. That means expanding programs like the Undergraduate Research Opportunities Program (UROP), the Advanced UROP (SuperUROP), and the MIT New Engineering Education Transformation, and creating new ways for trainees to engage with industry, clinical partners, and entrepreneurship. They are learning at the intersection of engineering, biology, and medicine — and increasingly across disciplines that span economics, design, the social sciences, and the humanities, where students are already creating collaborations that do not yet have formal pathways.
Koehler, drawing from her leadership at the Deshpande Center for Technological Innovation and the Koch Institute, notes that “if we invest in the people, the solutions to problems will naturally arise.” She envisions HEALS as a platform for induced proximity — not just of disciplines, but of people at different career stages, working together in environments that support both risk-taking and mentorship.
“For me, HEALS builds on what I’ve seen work at MIT — bringing people with different skill sets together to tackle challenges in life sciences and medicine,” she says. “It’s about putting community first and empowering the next generation to lead across disciplines.”
A platform for impact
Looking ahead, the HEALS leadership team envisions the collaborative as a durable platform for advancing health and life sciences at MIT. That includes launching flagship events, supporting high-risk, high-reward ideas, and developing partnerships across the biomedical ecosystem in Boston and beyond. As they see it, MIT is uniquely positioned for this moment: More than three-quarters of the Institute’s faculty work in areas that touch health and life sciences, giving HEALS a rare opportunity to bring that breadth together in new configurations and amplify impact across disciplines.
From the earliest conversations, the leaders have heard a clear message from faculty across MIT — a strong appetite for deeper connection, for working across boundaries, and for tackling urgent societal challenges together. That shared sense of momentum is what gave rise to HEALS, and it now drives the team’s focus on building the structures that can support a community that wants to collaborate at scale.
“Faculty across MIT are already reaching out — looking to connect with clinics, collaborate on new challenges, and co-create solutions,” says Koehler. “That hunger for connection is why HEALS was created. Now we have to build the structures that support it.”
Cheeseman adds that this collaborative model is what makes MIT uniquely positioned to lead. “When you bring together people from different fields who are motivated by impact,” he says, “you create the conditions for discoveries that none of us could achieve alone.”
Enabling small language models to solve complex reasoning tasks
The “self-steering” DisCIPL system directs small models to work together on tasks with constraints, like itinerary planning and budgeting.
As language models (LMs) improve at tasks like image generation, trivia questions, and simple math, you might think that human-like reasoning is around the corner. In reality, they still trail us by a wide margin on complex tasks. Try playing Sudoku with one, for instance, where you fill in numbers one through nine in such a way that each appears only once across the columns, rows, and sections of a nine-by-nine grid. Your AI opponent will either fail to fill in boxes on its own or do so inefficiently, although it can verify if you’ve filled yours out correctly.
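That verification asymmetry is easy to see in code: checking a finished Sudoku grid takes a few lines, while filling one in requires search. A minimal checker (an illustration, not part of the DisCIPL system) might look like:

```python
def is_valid_sudoku(grid):
    """Check that a completed 9x9 grid uses 1-9 exactly once in every
    row, column, and 3x3 box."""
    def ok(cells):
        return sorted(cells) == list(range(1, 10))

    rows = [grid[r] for r in range(9)]
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [
        [grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)]
        for br in range(0, 9, 3)
        for bc in range(0, 9, 3)
    ]
    return all(ok(unit) for unit in rows + cols + boxes)
```

Verifying touches each of the 81 cells a constant number of times, whereas solving is a constraint-satisfaction search over a vast space of candidate fills.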
Whether an LM is trying to solve advanced puzzles, design molecules, or write math proofs, the system struggles to answer open-ended requests that have strict rules to follow. The model is better at telling users how to approach these challenges than attempting them itself. Moreover, hands-on problem-solving requires LMs to consider a wide range of options while following constraints. Small LMs can’t do this reliably on their own; large language models (LLMs) sometimes can, particularly if they’re optimized for reasoning tasks, but they take a while to respond, and they use a lot of computing power.
This predicament led researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to develop a collaborative approach where an LLM does the planning, then divvies up the legwork of that strategy among smaller ones. Their method helps small LMs provide more accurate responses than leading LLMs like OpenAI’s GPT-4o, and approach the precision of top reasoning systems such as o1, while being more efficient than both. Their framework, called “Distributional Constraints by Inference Programming with Language Models” (or “DisCIPL”), has a large model steer smaller “follower” models toward precise responses when writing things like text blurbs, grocery lists with budgets, and travel itineraries.
The inner workings of DisCIPL are much like contracting a company for a particular job. You provide a “boss” model with a request, and it carefully considers how to go about doing that project. Then, the LLM relays these instructions and guidelines in a clear way to smaller models. It corrects follower LMs’ outputs where needed — for example, replacing one model’s phrasing that doesn’t fit in a poem with a better option from another.
The LLM communicates with its followers using a language they all understand — that is, a programming language for controlling LMs called “LLaMPPL.” Developed by MIT's Probabilistic Computing Project in 2023, the language allows users to encode specific rules that steer a model toward a desired result. For example, LLaMPPL can be used to produce error-free code by incorporating the rules of a particular language within its instructions. Directions like “write eight lines of poetry where each line has exactly eight words” are encoded in LLaMPPL, cueing smaller models to contribute to different parts of the answer.
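LLaMPPL's actual API is not reproduced here. As a loose, hypothetical sketch of the underlying idea, constraint-guided generation builds output word by word and filters out any continuation that would violate the rule, such as the eight-words-per-line direction above. The `propose_words` callback and `END` marker below are stand-ins for a small follower model's proposals, not real LLaMPPL constructs:

```python
import random

END = "<eol>"  # hypothetical end-of-line marker

def generate_line(propose_words, words_per_line=8, seed=0):
    """Build one line of exactly `words_per_line` words.

    `propose_words` stands in for a follower LM's next-word proposals
    (hypothetical; real systems work over weighted tokens). Proposals
    that would violate the length constraint are filtered out before
    sampling: END is allowed only once the line is full.
    """
    rng = random.Random(seed)
    line = []
    while True:
        allowed = [w for w in propose_words(line)
                   if (w == END) == (len(line) == words_per_line)]
        word = rng.choice(allowed)
        if word == END:
            return " ".join(line)
        line.append(word)
```

Filtering before sampling, rather than rejecting finished outputs, is what lets small models satisfy hard constraints they could not reliably honor on their own.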
MIT PhD student Gabriel Grand, who is the lead author on a paper presenting this work, says that DisCIPL allows LMs to guide each other toward the best responses, which improves their overall efficiency. “We’re working toward improving LMs’ inference efficiency, particularly on the many modern applications of these models that involve generating outputs subject to constraints,” adds Grand, who is also a CSAIL researcher. “Language models are consuming more energy as people use them more, which means we need models that can provide accurate answers while using minimal computing power.”
“It's really exciting to see new alternatives to standard language model inference,” says University of California at Berkeley Assistant Professor Alane Suhr, who wasn’t involved in the research. “This work invites new approaches to language modeling and LLMs that significantly reduce inference latency via parallelization, require significantly fewer parameters than current LLMs, and even improve task performance over standard serialized inference. The work also presents opportunities to explore transparency, interpretability, and controllability of model outputs, which is still a huge open problem in the deployment of these technologies.”
An underdog story
You may think that larger-scale LMs are “better” at complex prompts than smaller ones when it comes to accuracy and efficiency. DisCIPL suggests a surprising counterpoint for these tasks: If you can combine the strengths of smaller models instead, you may just see an efficiency bump with similar results.
The researchers note that, in theory, you can plug in dozens of LMs to work together in the DisCIPL framework, regardless of size. In writing and reasoning experiments, they went with GPT-4o as their “planner LM,” which is one of the models that helps ChatGPT generate responses. It brainstormed a plan for several “Llama-3.2-1B” models (smaller systems developed by Meta), in which those LMs filled in each word (or token) of the response.
This collective approach competed against three comparable ones: a follower-only baseline powered by Llama-3.2-1B, GPT-4o working on its own, and the industry-leading o1 reasoning system that helps ChatGPT figure out more complex questions, such as coding requests and math problems.
DisCIPL first demonstrated an ability to write sentences and paragraphs that follow explicit rules. The models were given very specific prompts — for example, writing a sentence that has exactly 18 words, where the fourth word must be “Glasgow,” the eighth must be “in,” and the 11th must be “and.” The system was remarkably adept at handling this request, crafting coherent outputs with accuracy similar to o1’s.
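Part of what makes this a good benchmark is that the constraint is simple to state as code, so the check is exact even though generation is hard. An illustrative checker (not the paper's evaluation code) might be:

```python
def meets_constraint(sentence, length=18, required=None):
    """Check an exact word count plus required words at 1-indexed
    positions, ignoring case and trailing punctuation."""
    if required is None:
        # The example prompt from the text: 18 words, with "Glasgow"
        # fourth, "in" eighth, and "and" 11th.
        required = {4: "glasgow", 8: "in", 11: "and"}
    words = [w.strip(".,!?;:").lower() for w in sentence.split()]
    if len(words) != length:
        return False
    return all(words[pos - 1] == word for pos, word in required.items())
```

A constraint this rigid leaves the model very little slack: every word choice has to keep the sentence on track toward the fixed positions and exact length.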
Faster, cheaper, better
This experiment also revealed that key components of DisCIPL were much cheaper than state-of-the-art systems. For instance, whereas existing reasoning models like OpenAI’s o1 perform reasoning in text, DisCIPL “reasons” by writing Python code, which is more compact. In practice, the researchers found that DisCIPL led to 40.1 percent shorter reasoning and 80.2 percent cost savings over o1.
DisCIPL’s efficiency gains stem partly from using small Llama models as followers, which are 1,000 to 10,000 times cheaper per token than comparable reasoning models. This means that DisCIPL is more “scalable” — the researchers were able to run dozens of Llama models in parallel for a fraction of the cost.
Those weren’t the only surprising findings, according to CSAIL researchers. Their system also performed well against o1 on real-world tasks, such as making ingredient lists, planning out a travel itinerary, and writing grant proposals with word limits. Meanwhile, GPT-4o struggled with these requests, and with writing tests, it often couldn’t place keywords in the correct parts of sentences. The follower-only baseline essentially finished in last place across the board, as it had difficulties with following instructions.
“Over the last several years, we’ve seen some impressive results from approaches that use language models to ‘auto-formalize’ problems in math and robotics by representing them with code,” says senior author Jacob Andreas, who is an MIT electrical engineering and computer science associate professor and CSAIL principal investigator. “What I find most exciting about this paper is the fact that we can now use LMs to auto-formalize text generation itself, enabling the same kinds of efficiency gains and guarantees that we’ve seen in these other domains.”
In the future, the researchers plan on expanding this framework into a fully recursive approach, where the same model serves as both the leader and the followers. Grand adds that DisCIPL could be extended to mathematical reasoning tasks, where answers are harder to verify. They also intend to test the system on its ability to meet users’ fuzzy preferences, which can’t be outlined in code as explicitly as hard constraints can. Thinking even bigger, the team hopes to use the largest possible models available, although they note that such experiments are computationally expensive.
Grand and Andreas wrote the paper alongside CSAIL principal investigator and MIT Professor Joshua Tenenbaum, as well as MIT Department of Brain and Cognitive Sciences Principal Research Scientist Vikash Mansinghka and Yale University Assistant Professor Alex Lew SM ’20 PhD ’25. CSAIL researchers presented the work at the Conference on Language Modeling in October and IVADO’s “Deploying Autonomous Agents: Lessons, Risks and Real-World Impact” workshop in November.
Their work was supported, in part, by the MIT Quest for Intelligence, Siegel Family Foundation, the MIT-IBM Watson AI Lab, a Sloan Research Fellowship, Intel, the Air Force Office of Scientific Research, the Defense Advanced Research Projects Agency, the Office of Naval Research, and the National Science Foundation.
The School of Science welcomed 11 new faculty members in 2024.
Shaoyun Bai researches symplectic topology, the study of even-dimensional spaces whose properties are reflected by two-dimensional surfaces inside them. He is interested in this area’s interaction with other fields, including algebraic geometry, algebraic topology, geometric topology, and dynamics. He has been developing new tool kits for counting problems from moduli spaces, which have been applied to classical questions, including the Arnold conjecture, periodic points of Hamiltonian maps, higher-rank Casson invariants, enumeration of embedded curves, and topology of symplectic fibrations.
Bai completed his undergraduate studies at Tsinghua University in 2017 and earned his PhD in mathematics from Princeton University in 2022, advised by John Pardon. Bai then held visiting positions at MSRI (now known as Simons Laufer Mathematical Sciences Institute) as a McDuff Postdoctoral Fellow and at the Simons Center for Geometry and Physics, and he was a Ritt Assistant Professor at Columbia University. He joined the MIT Department of Mathematics as an assistant professor in 2024.
Abigail Bodner investigates turbulence in the upper ocean using remote sensing measurements, in-situ ocean observations, numerical simulations, climate models, and machine learning. Her research explores how the small-scale physics of turbulence near the ocean surface impacts the large-scale climate.
Bodner earned a BS and MS from Tel Aviv University studying mathematics and geophysics, atmospheric and planetary sciences. She then went on to Brown University, earning an MS in applied mathematics before completing her PhD studies in 2021 in Earth, environmental, and planetary science. Prior to coming to MIT, Bodner was a Simons Society Junior Fellow at New York University. She is an assistant professor in the Department of Earth, Atmospheric, and Planetary Sciences, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science.
Jacopo Borga is interested in probability theory and its connections to combinatorics, and in mathematical physics. He studies various random combinatorial structures — mathematical objects such as graphs or permutations — and their patterns and behavior at a large scale. This research includes random permutons, meanders, multidimensional constrained Brownian motions, Schramm-Loewner evolutions, and Liouville quantum gravity.
Borga earned bachelor’s and master’s degrees in mathematics from the Università degli Studi di Padova, and a master’s degree in mathematics from Université Sorbonne Paris Cité (USPC), then proceeded to complete a PhD in mathematics at the Institut für Mathematik at the Universität Zürich. Borga was an assistant professor at Stanford University before joining MIT as an assistant professor of mathematics in 2024.
Linlin Fan aims to decipher the neural codes underlying learning and memory and to identify their physical basis. Her research focus is on the learning rules of brain circuits — what kinds of activity trigger the encoding and storing of information — how these learning rules are implemented, and how memories can be inferred from mapping neural functional connectivity patterns. To answer these questions, Fan’s group leverages high-precision, all-optical technologies to map and control the electrical activity of neurons within the brain.
Fan earned her PhD at Harvard University after undergraduate studies at Peking University in China. She joined the MIT Department of Brain and Cognitive Sciences as the Samuel A. Goldblith Career Development Professor of Applied Biology, and the Picower Institute for Learning and Memory as an investigator in January 2024. Previously, Fan worked as a postdoc at Stanford University.
Whitney Henry investigates ferroptosis, a type of cell death dependent on iron, to uncover how oxidative stress, metabolism, and immune signaling intersect to shape cell fate decisions. Her research has defined key lipid metabolic and iron homeostatic programs that regulate ferroptosis susceptibility. By uncovering the molecular factors influencing ferroptosis susceptibility, investigating its effects on the tumor microenvironment, and developing innovative methods to manipulate ferroptosis resistance in living organisms, Henry’s lab aims to gain a comprehensive understanding of the therapeutic potential of ferroptosis, especially to target highly metastatic, therapy-resistant cancer cells.
Henry received her bachelor's degree in biology with a minor in chemistry from Grambling State University and her PhD from Harvard University. Following her doctoral studies, she worked at the Whitehead Institute for Biomedical Research and was supported by fellowships from the Jane Coffin Childs Memorial Fund for Medical Research and the Ludwig Center at MIT. Henry joined the MIT faculty in 2024 as an assistant professor in the Department of Biology and a member of the Koch Institute for Integrative Cancer Research, and was recently named the Robert A. Swanson (1969) Career Development Professor of Life Sciences and an HHMI Freeman Hrabowski Scholar.
Gian Michele Innocenti is an experimental physicist who probes new regimes of quantum chromodynamics (QCD) through collisions of ultrarelativistic heavy ions at the Large Hadron Collider. He has developed advanced analysis techniques and data-acquisition strategies that enable novel measurements of open heavy-flavor and jet production in hadronic and ultraperipheral heavy-ion collisions, shedding light on the properties of high-temperature QCD matter and parton dynamics in Lorentz-contracted nuclei. He leads the MIT Pixel𝜑 program, which exploits CMOS MAPS technology to build a high-precision tracking detector for the ePIC experiment at the Electron–Ion Collider.
Innocenti received his PhD in particle and nuclear physics at the University of Turin in Italy in early 2014. He then joined the MIT heavy-ion group in the Laboratory of Nuclear Science in 2014 as a postdoc, followed by a staff research physicist position at CERN in 2018. Innocenti joined the MIT Department of Physics as an assistant professor in January 2024.
Mathematician Christoph Kehle's research interests lie at the intersection of analysis, geometry, and partial differential equations. In particular, he focuses on the Einstein field equations of general relativity and our current understanding of gravitation, which describe how matter and energy shape spacetime. His work addresses the Strong Cosmic Censorship conjecture, singularities in black hole interiors, and the dynamics of extremal black holes.
Prior to joining MIT, Kehle was a junior fellow at ETH Zürich and a member at the Institute for Advanced Study in Princeton. He earned his bachelor’s and master’s degrees at Ludwig Maximilian University and Technical University of Munich, and his PhD in 2020 from the University of Cambridge. Kehle joined the Department of Mathematics as an assistant professor in July 2024.
Aleksandr Logunov is a mathematician specializing in harmonic analysis and geometric analysis. He has developed novel techniques for studying the zeros of solutions to partial differential equations and has resolved several long-standing problems, including Yau’s conjecture, Nadirashvili’s conjecture, and Landis’ conjectures.
Logunov earned his PhD in 2015 from St. Petersburg State University. He then spent two years as a postdoc at Tel Aviv University, followed by a year as a member of the Institute for Advanced Study in Princeton. In 2018, he joined Princeton University as an assistant professor. In 2020, he spent a semester at Tel Aviv University as an IAS Outstanding Fellow, and in 2021, he was appointed full professor at the University of Geneva. Logunov joined MIT as a full professor in the Department of Mathematics in January 2024.
Lyle Nelson is a sedimentary geologist studying the co-evolution of life and surface environments across pivotal transitions in Earth history, especially during significant ecological change — such as extinction events and the emergence of new clades — and during major shifts in ocean chemistry and climate. Studying sedimentary rocks that were tectonically uplifted and are now exposed in mountain belts around the world, Nelson’s group aims to answer questions such as how the reorganization of continents influenced the carbon cycle and climate, what caused ancient ice ages and what effects they had, and what factors drove the evolution of early life forms and the rapid diversification of animals during the Cambrian period.
Nelson earned a bachelor’s degree in earth and planetary sciences from Harvard University in 2015 and then worked as an exploration geologist before completing his PhD at Johns Hopkins University in 2022. Prior to coming to MIT, he was an assistant professor in the Department of Earth Sciences at Carleton University in Ontario, Canada. Nelson joined the EAPS faculty in 2024.
Protein evolution is the process by which proteins change over time through mechanisms such as mutation or natural selection. Biologist Sergey Ovchinnikov uses phylogenetic inference, protein structure prediction/determination, protein design, deep learning, energy-based models, and differentiable programming to tackle evolutionary questions at environmental, organismal, genomic, structural, and molecular scales, with the aim of developing a unified model of protein evolution.
Ovchinnikov received his BS in micro/molecular biology from Portland State University in 2010 and his PhD in molecular and cellular biology from the University of Washington in 2017. He was next a John Harvard Distinguished Science Fellow at Harvard University until 2023. Ovchinnikov joined MIT as an assistant professor of biology in January 2024.
Shu-Heng Shao explores the structural aspects of quantum field theories and lattice systems. Recently, his research has centered on generalized symmetries and anomalies, with a particular focus on a novel type of symmetry without an inverse, referred to as non-invertible symmetries. These new symmetries have been identified in various quantum systems, including the Ising model, Yang-Mills theories, lattice gauge theories, and the Standard Model. They lead to new constraints on renormalization group flows, new conservation laws, and new organizing principles in classifying phases of quantum matter.
Shao obtained his BS in physics from National Taiwan University in 2010, and his PhD in physics from Harvard University in 2016. He was then a five-year long-term member at the Institute for Advanced Study in Princeton before he moved to the Yang Institute for Theoretical Physics at Stony Brook University as an assistant professor in 2021. In 2024, he joined the MIT faculty as an assistant professor of physics.
MIT study shows how vision can be rebooted in adults with amblyopia
Temporarily anesthetizing the retina briefly reverts the activity of the visual system to that observed in early development and enables growth of responses to the amblyopic (“lazy”) eye.
In the vision disorder amblyopia (commonly known as “lazy eye”), impaired vision in one eye during development causes neural connections in the brain’s visual system to shift toward supporting the other eye, leaving the amblyopic eye less capable even after the original impairment is corrected. Current interventions are only effective during infancy and early childhood, while the neural connections are still being formed.
Now a study in mice by neuroscientists in The Picower Institute for Learning and Memory at MIT shows that if the retina of the amblyopic eye is temporarily and reversibly anesthetized just for a couple of days, the brain’s visual response to the eye can be restored, even in adulthood.
The open-access findings, published Nov. 25 in Cell Reports, may improve the clinical potential of the idea of temporarily anesthetizing a retina to restore the strength of the amblyopic eye’s neural connections.
In 2021, the lab of Picower Professor Mark Bear and collaborators showed that anesthetizing the non-amblyopic eye could improve vision in the amblyopic one — an approach analogous to the childhood treatment of patching the unimpaired eye. Those 2021 findings have now been replicated in adults of multiple species. But the new evidence on how inactivation works suggests that the proposed treatment also could be effective when applied directly to the amblyopic eye, Bear says, though a key next step will be to again show that it works in additional species and, ultimately, people.
“If it does, it’s a pretty substantial step forward, because it would be reassuring to know that vision in the good eye would not have to be interrupted by treatment,” says Bear, a faculty member in MIT’s Department of Brain and Cognitive Sciences. “The amblyopic eye, which is not doing much, could be inactivated and ‘brought back to life’ instead. Still, I think that especially with any invasive treatment, it’s extremely important to confirm the results in higher species with visual systems closer to our own.”
Madison Echavarri-Leet PhD ’25, whose doctoral thesis included this research, is the lead author of the study, which also demonstrates the underlying process in the brain that makes the potential treatment work.
A beneficial burst
Bear’s lab has been studying the science underlying amblyopia for decades, for instance by working to understand the molecular mechanisms that enable neural circuits to change their connections in response to visual experience or deprivation. The research has produced ideas about how to address amblyopia in adulthood. In a 2016 study with collaborators at Dalhousie University, they showed that temporarily anesthetizing both retinas could restore vision loss in amblyopia. Then, five years later, they published the study showing that anesthetizing just the non-amblyopic eye produced visual recovery for the amblyopic eye.
Throughout that time, the lab weighed multiple hypotheses to explain how retinal inactivation works its magic. Lingering in the lab’s archive of results, Bear says, was an unexplored finding in the lateral geniculate nucleus (LGN) that relays information from the eyes to the visual cortex, where vision is processed: back in 2008, they had found that blocking inputs from a retina to neurons in the LGN caused those neurons to fire synchronous “bursts” of electrical signals to downstream neurons in the visual cortex. Similar patterns of activity occur in the visual system before birth and guide early synaptic development.
The new study tested whether those bursts might have a role in the potential amblyopia treatments the lab was reporting. To get started, Leet and Bear’s team used a single injection of tetrodotoxin (TTX) to anesthetize retinas in the lab animals. They found that the bursting occurred not only in LGN neurons that received input from the anesthetized eye, but also in LGN neurons that received input from the unaffected eye.
From there, they showed that the bursting response depended on a particular “T-type” calcium channel in the LGN neurons. This was important because identifying the channel gave the scientists a way to turn the bursting off, and with that ability they could test whether doing so prevented TTX from having a therapeutic effect in mice with amblyopia.
Sure enough, when the researchers genetically knocked out the channels and disrupted the bursting, they found that anesthetizing the non-amblyopic eye could no longer help amblyopic mice. That showed the bursting is necessary for the treatment to work.
Aiding amblyopia
Given their finding that bursting occurs when either retina is anesthetized, the scientists hypothesized it might be enough to just do it in the amblyopic eye. To test this, they ran an experiment in which some mice modeling amblyopia received TTX in their amblyopic eye and some did not. The injection took the retina offline for two days. After a week, the scientists then measured activity in neurons in the visual cortex to calculate a ratio of input from each eye. They found that the ratio was much more even in mice that received the treatment versus those left untreated, indicating that after the amblyopic eye was anesthetized, its input in the brain rose to be at parity with input from the non-amblyopic one.
Further testing is needed, Bear notes, but the team wrote in the study that the results were encouraging.
“We are cautiously optimistic that these findings may lead to a new treatment approach for human amblyopia, particularly given the discovery that silencing the amblyopic eye is effective,” the scientists wrote.
In addition to Leet and Bear, the paper’s authors are Tushar Chauhan, Teresa Cramer, and Ming-fai Fong.
The National Institutes of Health, the Swiss National Science Foundation, the Severin Hacker Vision Research Fund, and the Freedom Together Foundation supported the study.
When it comes to language, context matters
MIT researchers identified three cognitive skills that we use to infer what someone really means.
In everyday conversation, it’s critical to understand not just the words that are spoken, but the context in which they are said. If it’s pouring rain and someone remarks on the “lovely weather,” you won’t understand their meaning unless you realize that they’re being sarcastic.
Making inferences about what someone really means when it doesn’t match the literal meaning of their words is a skill known as pragmatic language ability. This includes not only interpreting sarcasm but also understanding metaphors and white lies, among many other conversational subtleties.
“Pragmatics is trying to reason about why somebody might say something, and what is the message they’re trying to convey given that they put it in this particular way,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.
New research from Fedorenko and her colleagues has revealed that these abilities can be grouped together based on what types of inferences they require. In a study of 800 people, the researchers identified three clusters of pragmatic skills that are based on the same kinds of inferences and may have similar underlying neural processes.
One of these clusters includes inferences that are based on our knowledge of social conventions and rules. Another depends on knowledge of how the physical world works, while the last requires the ability to interpret differences in tone, which can indicate emphasis or emotion.
Fedorenko and Edward Gibson, an MIT professor of brain and cognitive sciences, are the senior authors of the study, which appears today in the Proceedings of the National Academy of Sciences. The paper’s lead authors are Sammy Floyd, a former MIT postdoc who is now an assistant professor of psychology at Sarah Lawrence College, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor of cognitive science at Carleton University.
The importance of context
Much past research on how people understand language has focused on processing the literal meanings of words and how they fit together. To really understand what someone is saying, however, we need to interpret those meanings based on context.
“Language is about getting meanings across, and that often requires taking into account many different kinds of information — such as the social context, the visual context, or the present topic of the conversation,” Fedorenko says.
As one example, the phrase “people are leaving” can mean different things depending on the context, Gibson points out. If it’s late at night and someone asks you how a party is going, you may say “people are leaving,” to convey that the party is ending and everyone’s going home.
“However, if it’s early, and I say ‘people are leaving,’ then the implication is that the party isn’t very good,” Gibson says. “When you say a sentence, there’s a literal meaning to it, but how you interpret that literal meaning depends on the context.”
About 10 years ago, with support from the Simons Center for the Social Brain at MIT, Fedorenko and Gibson decided to explore whether it might be possible to precisely distinguish the types of processing that go into pragmatic language skills.
One way that neuroscientists can approach a question like this is to use functional magnetic resonance imaging (fMRI) to scan the brains of participants as they perform different tasks. This allows them to link brain activity in different locations to different functions. However, the tasks that the researchers designed for this study didn’t easily lend themselves to being performed in a scanner, so they took an alternative approach.
This approach, known as “individual differences,” involves studying a large number of people as they perform a variety of tasks. This technique allows researchers to determine whether the same underlying brain processes may be responsible for performance on different tasks.
To do this, the researchers evaluate whether each participant tends to perform similarly on certain groups of tasks. For example, some people might perform well on tasks that require an understanding of social conventions, such as interpreting indirect requests and irony. The same people might do only so-so on tasks that require understanding how the physical world works, and poorly on tasks that require distinguishing meanings based on changes in intonation — the melody of speech. This would suggest that separate brain processes are being recruited for each set of tasks.
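The logic of the individual-differences method can be sketched numerically: if the same participants who score high on one task also score high on another, the two tasks likely tap a shared process. A toy illustration using Pearson correlation over made-up scores (not the study's actual analysis, which used a 20-task battery and formal clustering):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up scores: rows are participants, columns are three tasks.
# Tasks 0 and 1 are designed to covary (a shared hypothetical skill);
# task 2 varies independently of them.
scores = [
    [0.9, 0.80, 0.2],
    [0.7, 0.75, 0.9],
    [0.3, 0.35, 0.5],
    [0.5, 0.55, 0.1],
]

def task(j):
    """Extract one task's scores across all participants."""
    return [row[j] for row in scores]
```

Tasks whose scores correlate strongly across many participants would be grouped into one cluster, which is how distinct components of pragmatic ability can emerge from behavioral data alone.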
The first phase of the study was led by Jouravlev, who assembled existing tasks that require pragmatic skills and created many more, for a total of 20. These included tasks that require people to understand humor and sarcasm, as well as tasks where changes in intonation can affect the meaning of a sentence. For example, someone who says “I wanted blue and black socks,” with emphasis on the word “black,” is implying that the black socks were forgotten.
“People really find ways to communicate creatively and indirectly and non-literally, and this battery of tasks captures that,” Floyd says.
Components of pragmatic ability
The researchers recruited study participants from an online crowdsourcing platform to perform the tasks, which took about eight hours to complete. From this first set of 400 participants, the researchers found that the tasks formed three clusters, related to social context, general knowledge of the world, and intonation. To test the robustness of the findings, the researchers continued the study with another set of 400 participants, with this second half run by Floyd after Jouravlev had left MIT.
With the second set of participants, the researchers found that tasks clustered into the same three groups. They also confirmed that differences in general intelligence, or in auditory processing ability (which is important for the processing of intonation), did not affect the outcomes that they observed.
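The logic of the individual-differences approach can be sketched in a few lines: if two tasks tap the same underlying ability, participants’ scores on them should rise and fall together. The toy simulation below is not the study’s actual analysis — the component names, task loadings, and noise levels are invented for illustration — but it shows how correlated task scores reveal shared components.

```python
# Hedged sketch of the "individual differences" logic: tasks driven by
# the same latent ability correlate across participants; tasks driven
# by different abilities do not.
import numpy as np

rng = np.random.default_rng(0)
n_participants = 400

# Simulated latent abilities, one per hypothesized component
# (social context, world knowledge, intonation) -- names assumed.
abilities = rng.normal(size=(n_participants, 3))

# Six toy tasks, two per component: each task's score is its
# component's ability plus measurement noise.
task_component = [0, 0, 1, 1, 2, 2]
scores = np.column_stack([
    abilities[:, c] + 0.5 * rng.normal(size=n_participants)
    for c in task_component
])

# Tasks sharing a latent ability should correlate strongly.
corr = np.corrcoef(scores, rowvar=False)
same = corr[0, 1]    # two "social context" tasks
cross = corr[0, 2]   # social-context vs. world-knowledge task
print(round(same, 2), round(cross, 2))
```

With enough participants, the within-component correlation is high while the cross-component correlation hovers near zero — which is exactly the clustering pattern the researchers looked for across their 20 tasks.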
In future work, the researchers hope to use brain imaging to explore whether the pragmatic components they identified are correlated with activity in different brain regions. Previous work has found that brain imaging often mirrors the distinctions identified in individual difference studies, but can also help link the relevant abilities to specific neural systems, such as the core language system or the theory of mind system.
This set of tests could also be used to study people with autism, who sometimes have difficulty understanding certain social cues. Such studies could determine more precisely the nature and extent of these difficulties. Another possibility could be studying people who were raised in different cultures, which may have different norms around speaking directly or indirectly.
“In Russian, which happens to be my native language, people are more direct. So perhaps there might be some differences in how native speakers of Russian process indirect requests compared to speakers of English,” Jouravlev says.
The research was funded by the Simons Center for the Social Brain at MIT, the National Institutes of Health, and the National Science Foundation.
Too sick to socialize: How the brain and immune system promote staying in bed

MIT researchers discover how an immune system molecule triggers neurons to shut down social behavior in mice modeling infection.

“I just can’t make it tonight. You have fun without me.” Across much of the animal kingdom, when infection strikes, social contact shuts down. A new study details how the immune and central nervous systems implement this sickness behavior.

It makes perfect sense that when we’re battling an infection, we lose our desire to be around others. That protects others from getting sick and lets us get much-needed rest. What hasn’t been as clear is how this behavior change happens.
In new research published Nov. 25 in Cell, scientists at MIT’s Picower Institute for Learning and Memory and collaborators used multiple methods to demonstrate causally that when the immune system cytokine interleukin-1 beta (IL-1β) reaches the IL-1 receptor 1 (IL-1R1) on neurons in a brain region called the dorsal raphe nucleus, the signal activates those neurons’ connections with the intermediate lateral septum to shut down social behavior.
“Our findings show that social isolation following immune challenge is self-imposed and driven by an active neural process, rather than a secondary consequence of physiological symptoms of sickness, such as lethargy,” says study co-senior author Gloria Choi, associate professor in MIT’s Department of Brain and Cognitive Sciences and a member of the Picower Institute.
Jun Huh, Harvard Medical School associate professor of immunology, is the paper’s co-senior author. The lead author is Liu Yang, a research scientist in Choi’s lab.
A molecule and its receptor
Choi and Huh’s long collaboration has identified other cytokines that affect social behavior by latching on to their receptors in the brain, so in this study their team hypothesized that the same kind of dynamic might cause social withdrawal during infection. But which cytokine? And what brain circuits might be affected?
To get started, Yang and her colleagues injected 21 different cytokines into the brains of mice, one by one, to see if any triggered social withdrawal the same way that giving mice LPS (lipopolysaccharide, a bacterial molecule widely used to simulate infection) did. Only IL-1β injection fully recapitulated the same social withdrawal behavior as LPS. That said, IL-1β also made the mice more sluggish.
IL-1β affects cells when it hooks up with the IL-1R1, so the team next went looking across the brain for where the receptor is expressed. They identified several regions and examined individual neurons in each. The dorsal raphe nucleus (DRN) stood out among regions, both because it is known to modulate social behavior and because it is situated next to the cerebral aqueduct, which would give it plenty of exposure to incoming cytokines in cerebrospinal fluid. The experiments identified populations of DRN neurons that express IL-1R1, including many involved in making the crucial neuromodulatory chemical serotonin.
From there, Yang and the team demonstrated that IL-1β activates those neurons, and that activating the neurons promotes social withdrawal. Moreover, they showed that inhibiting that neural activity prevented social withdrawal in mice treated with IL-1β, and they showed that shutting down the IL-1R1 in the DRN neurons also prevented social withdrawal behavior after IL-1β injection or LPS exposure. Notably, these experiments did not change the lethargy that followed IL-1β or LPS, helping to demonstrate that social withdrawal and lethargy occur through different means.
“Our findings implicate IL-1β as a primary effector driving social withdrawal during systemic immune activation,” the researchers wrote in Cell.
Tracing the circuit
With the DRN identified as the site where neurons receiving IL-1β drove social withdrawal, the next question was which circuit carried that behavior change. The team traced where the neurons make their circuit projections and found several regions that have a known role in social behavior. Using optogenetics, a technology that engineers cells to become controllable with flashes of light, the scientists were able to activate the DRN neurons’ connections with each downstream region. Only activating the DRN’s connections with the intermediate lateral septum caused the social withdrawal behaviors seen with IL-1β injection or LPS exposure.
In a final test, they replicated their results by exposing some mice to salmonella.
“Collectively, these results reveal a role for IL-1R1-expressing DRN neurons in mediating social withdrawal in response to IL-1β during systemic immune challenge,” the researchers wrote.
Although the study revealed the cytokine, neurons, and circuit responsible for social withdrawal in mice in detail and with demonstrations of causality, the results still inspire new questions. One is whether IL-1R1 neurons affect other sickness behaviors. Another is whether serotonin has a role in social withdrawal or other sickness behaviors.
In addition to Yang, Choi, and Huh, the paper’s other authors are Matias Andina, Mario Witkowski, Hunter King, and Ian Wickersham.
Funding for the research came from the National Institute of Mental Health, the National Research Foundation of Korea, the Denis A. and Eugene W. Chinery Fund for Neurodevelopmental Research, the Jeongho Kim Neurodevelopmental Research Fund, Perry Ha, the Simons Center for the Social Brain, the Simons Foundation Autism Research Initiative, The Picower Institute for Learning and Memory, and The Freedom Together Foundation.
Many organizations are taking actions to shrink their carbon footprint, such as purchasing electricity from renewable sources or reducing air travel.
Both actions would cut greenhouse gas emissions, but which offers greater societal benefits?
In a first step toward answering that question, MIT researchers found that even if each activity reduces the same amount of carbon dioxide emissions, the broader air quality impacts can be quite different.
They used a multifaceted modeling approach to quantify the air quality impacts of each activity, using data from three organizations. Their results indicate that air travel causes about three times more damage to air quality than comparable electricity purchases.
Exposure to major air pollutants, including ground-level ozone and fine particulate matter, can lead to cardiovascular and respiratory disease, and even premature death.
In addition, air quality impacts can vary dramatically across different regions. The study shows that air quality effects differ sharply across space because each decarbonization action influences pollution at a different scale. For example, for organizations in the northeast U.S., the air quality impacts of energy use affect the region, but the impacts of air travel are felt globally. This is because associated pollutants are emitted at higher altitudes.
Ultimately, the researchers hope this work highlights how organizations can prioritize climate actions to provide the greatest near-term benefits to people’s health.
“If we are trying to get to net zero emissions, that trajectory could have very different implications for a lot of other things we care about, like air quality and health impacts. Here we’ve shown that, for the same net zero goal, you can have even more societal benefits if you figure out a smart way to structure your reductions,” says Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS); director of the Center for Sustainability Science and Strategy; and senior author of the study.
Selin is joined on the paper by lead author Yuang (Albert) Chen, an MIT graduate student; Florian Allroggen, a research scientist in the MIT Department of Aeronautics and Astronautics; Sebastian D. Eastham, an associate professor in the Department of Aeronautics at Imperial College London; Evan Gibney, an MIT graduate student; and William Clark, the Harvey Brooks Research Professor of International Science at Harvard University. The research was published Friday in Environmental Research Letters.
A quantification quandary
Climate scientists often focus on the air quality benefits of national or regional policies because the aggregate impacts are more straightforward to model.
Organizations’ efforts to “go green” are much harder to quantify because they exist within larger societal systems and are impacted by these national policies.
To tackle this challenging problem, the MIT researchers used data from two universities and one company in the greater Boston area. They studied whether organizational actions that cut CO2 emissions by the same amount would yield equivalent air quality benefits.
“From a climate standpoint, CO2 has a global impact because it mixes through the atmosphere, no matter where it is emitted. But air quality impacts are driven by co-pollutants that act locally, so where those emissions occur really matters,” Chen says.
For instance, burning fossil fuels leads to emissions of nitrogen oxides and sulfur dioxide along with CO2. These co-pollutants react with chemicals in the atmosphere to form fine particulate matter and ground-level ozone, which is a primary component of smog.
Different fossil fuels cause varying amounts of co-pollutant emissions. In addition, local factors like weather and existing emissions affect the formation of smog and fine particulate matter. The impacts of these pollutants also depend on the local population distribution and overall health.
“You can’t just assume that all CO2-reduction strategies will have equivalent near-term impacts on sustainability. You have to consider all the other emissions that go along with that CO2,” Selin says.
The researchers used a systems-level approach that involved connecting multiple models. They fed the organizational energy consumption and flight data into this systems-level model to examine local and regional air quality impacts.
Their approach incorporated many interconnected elements, such as power plant emissions data, statistical linkages between air quality and mortality outcomes, and aviation emissions associated with specific flight routes. They fed those data into an atmospheric chemistry transport model to calculate air quality and climate impacts for each activity.
The sheer breadth of the system created many challenges.
“We had to do multiple sensitivity analyses to make sure the overall pipeline was working,” Chen says.
Analyzing air quality
Finally, the researchers monetized the air quality impacts so they could be compared with the climate impacts in a consistent way. Based on prior literature, the monetized climate damages of CO2 emissions are about $170 per ton (expressed in 2015 dollars), representing the financial cost of the damages caused by climate change.
Applying the same approach to air quality, the researchers calculated that the damages associated with electricity purchases add $88 per ton of CO2, while the damages from air travel add $265 per ton.
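Taken at face value, the dollar figures quoted in the article let readers run the comparison themselves. This back-of-envelope calculation uses only those reported values and reproduces the roughly three-to-one gap between aviation and electricity air quality damages.

```python
# Figures as reported in the article (2015 USD per ton of CO2).
climate_cost = 170.0             # monetized climate damages
air_quality_electricity = 88.0   # additional damages, electricity purchases
air_quality_aviation = 265.0     # additional damages, air travel

# Aviation's air quality damages relative to electricity's.
ratio = air_quality_aviation / air_quality_electricity

# Total social cost per ton (climate + air quality) for each activity.
total_electricity = climate_cost + air_quality_electricity
total_aviation = climate_cost + air_quality_aviation
print(round(ratio, 1), total_electricity, total_aviation)
```

The ratio of 265 to 88 comes out to about 3.0, matching the article’s statement that air travel causes about three times more air quality damage than comparable electricity purchases.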
This highlights how the air quality impacts of a ton of emitted CO2 depend strongly on where and how the emissions are produced.
“A real surprise was how much aviation impacted places that were really far from these organizations. Not only were flights more damaging, but the pattern of damage, in terms of who is harmed by air pollution from that activity, is very different than who is harmed by energy systems,” Selin says.
Most airplane emissions occur at high altitudes, where differences in atmospheric chemistry and transport can amplify their air quality impacts. These emissions are also carried across continents by atmospheric winds, affecting people thousands of miles from their source.
Nations like India and China face outsized air quality impacts from such emissions due to the higher level of existing ground-level emissions, which exacerbates the formation of fine particulate matter and smog.
The researchers also conducted a deeper analysis of short-haul flights. Their results showed that regional flights have a relatively larger impact on local air quality than longer domestic flights.
“If an organization is thinking about how to benefit the neighborhoods in their backyard, then reducing short-haul flights could be a strategy with real benefits,” Selin says.
Even in electricity purchases, the researchers found that location matters.
For instance, fine particulate matter emissions from power plants attributable to one university fall over a densely populated region, while emissions attributable to the corporation fall over less populated areas.
Due to these population differences, the university’s emissions resulted in 16 percent more estimated premature deaths than those of the corporation, even though the climate impacts are identical.
“These results show that, if organizations want to achieve net zero emissions while promoting sustainability, which unit of CO2 gets removed first really matters a lot,” Chen says.
In the future, the researchers want to quantify the air quality and climate impacts of train travel, to see whether replacing short-haul flights with train trips could provide benefits.
They also want to explore the air quality impacts of other energy sources in the U.S., such as data centers.
This research was funded, in part, by Biogen, Inc., the Italian Ministry for Environment, Land, and Sea, and the MIT Center for Sustainability Science and Strategy.
MIT chemists synthesize a fungal compound that holds promise for treating brain cancer

Preliminary studies find derivatives of the compound, known as verticillin A, can kill some types of glioma cells.

For the first time, MIT chemists have synthesized a fungal compound known as verticillin A, which was discovered more than 50 years ago and has shown potential as an anticancer agent.
The compound has a complex structure that made it more difficult to synthesize than related compounds, even though it differs from them by only a couple of atoms.
“We have a much better appreciation for how those subtle structural changes can significantly increase the synthetic challenge,” says Mohammad Movassaghi, an MIT professor of chemistry. “Now we have the technology where we can not only access them for the first time, more than 50 years after they were isolated, but also we can make many designed variants, which can enable further detailed studies.”
In tests in human cancer cells, a derivative of verticillin A showed particular promise against a type of pediatric brain cancer called diffuse midline glioma. More tests will be needed to evaluate its potential for clinical use, the researchers say.
Movassaghi and Jun Qi, an associate professor of medicine at Dana-Farber Cancer Institute/Boston Children’s Cancer and Blood Disorders Center and Harvard Medical School, are the senior authors of the study, which appears today in the Journal of the American Chemical Society. Walker Knauss PhD ’24 is the lead author of the paper. Xiuqi Wang, a medicinal chemist and chemical biologist at Dana-Farber, and Mariella Filbin, research director in the Pediatric Neurology-Oncology Program at Dana-Farber/Boston Children’s Cancer and Blood Disorders Center, are also authors of the study.
A complex synthesis
Researchers first reported the isolation of verticillin A from fungi, which use it for protection against pathogens, in 1970. Verticillin A and related fungal compounds have drawn interest for their potential anticancer and antimicrobial activity, but their complexity has made them difficult to synthesize.
In 2009, Movassaghi’s lab reported the synthesis of (+)-11,11'-dideoxyverticillin A, a fungal compound similar to verticillin A. That molecule has 10 rings and eight stereogenic centers, or carbon atoms that have four different chemical groups attached to them. These groups have to be attached in a way that ensures they have the correct orientation, or stereochemistry, with respect to the rest of the molecule.
Once that synthesis was achieved, however, synthesis of verticillin A remained challenging, even though the only difference between verticillin A and (+)-11,11'-dideoxyverticillin A is the presence of two oxygen atoms.
“Those two oxygens greatly limit the window of opportunity that you have in terms of doing chemical transformations,” Movassaghi says. “It makes the compound so much more fragile, so much more sensitive, so that even though we had had years of methodological advances, the compound continued to pose a challenge for us.”
Both of the verticillin A compounds consist of two identical fragments that must be joined together to form a molecule called a dimer. To create (+)-11,11'-dideoxyverticillin A, the researchers had performed the dimerization reaction near the end of the synthesis, then added four critical carbon-sulfur bonds.
Yet when trying to synthesize verticillin A, the researchers found that waiting to add those carbon-sulfur bonds at the end did not result in the correct stereochemistry. As a result, the researchers had to rethink their approach and ended up creating a very different synthetic sequence.
“What we learned was the timing of the events is absolutely critical. We had to significantly change the order of the bond-forming events,” Movassaghi says.
The verticillin A synthesis begins with an amino acid derivative known as beta-hydroxytryptophan, and then step-by-step, the researchers add a variety of chemical functional groups, including alcohols, ketones, and amides, in a way that ensures the correct stereochemistry.
A functional group containing two carbon-sulfur bonds and a disulfide bond was introduced early on to help control the stereochemistry of the molecule, but the sensitive disulfides had to be “masked” and protected as sulfides to prevent them from breaking down during subsequent chemical reactions. The disulfide-containing groups were then regenerated after the dimerization reaction.
“This particular dimerization really stands out in terms of the complexity of the substrates that we’re bringing together, which have such a dense array of functional groups and stereochemistry,” Movassaghi says.
The overall synthesis requires 16 steps from the beta-hydroxytryptophan starting material to verticillin A.
Killing cancer cells
Once the researchers had successfully completed the synthesis, they were also able to tweak it to generate derivatives of verticillin A. Researchers at Dana-Farber then tested these compounds against several types of diffuse midline glioma (DMG), a rare brain tumor that has few treatment options.
The researchers found that the DMG cell lines most susceptible to these compounds were those that have high levels of a protein called EZHIP. This protein, which plays a role in the methylation of DNA, has been previously identified as a potential drug target for DMG.
“Identifying the potential targets of these compounds will play a critical role in further understanding their mechanism of action, and more importantly, will help optimize the compounds from the Movassaghi lab to be more target specific for novel therapy development,” Qi says.
The verticillin derivatives appear to interact with EZHIP in a way that increases DNA methylation, which induces the cancer cells to undergo programmed cell death. The compounds that were most successful at killing these cells were N-sulfonylated (+)-11,11'-dideoxyverticillin A and N-sulfonylated verticillin A. N-sulfonylation — the addition of a functional group containing sulfur and oxygen — makes the molecules more stable.
“The natural product itself is not the most potent, but it’s the natural product synthesis that brought us to a point where we can make these derivatives and study them,” Movassaghi says.
The Dana-Farber team is now working on further validating the mechanism of action of the verticillin derivatives, and they also hope to begin testing the compounds in animal models of pediatric brain cancers.
“Natural compounds have been valuable resources for drug discovery, and we will fully evaluate the therapeutic potential of these molecules by integrating our expertise in chemistry, chemical biology, cancer biology, and patient care. We have also profiled our lead molecules in more than 800 cancer cell lines, and will be able to understand their functions more broadly in other cancers,” Qi says.
The research was funded by the National Institute of General Medical Sciences, the Ependymoma Research Foundation, and the Curing Kids Cancer Foundation.
Scientists get a first look at the innermost region of a white dwarf system

X-ray observations reveal surprising features of the dying star’s most energetic environment.

Some 200 light years from Earth, the core of a dead star is circling a larger star in a macabre cosmic dance. The dead star is a type of white dwarf that exerts a powerful magnetic field as it pulls material from the larger star into a swirling, accreting disk. The spiraling pair is what’s known as an “intermediate polar” — a type of star system that gives off a complex pattern of intense radiation, including X-rays, as gas from the larger star falls onto the other one.
Now, MIT astronomers have used an X-ray telescope in space to identify key features in the system’s innermost region — an extremely energetic environment that has been inaccessible to most telescopes until now. In an open-access study published in the Astrophysical Journal, the team reports using NASA’s Imaging X-ray Polarimetry Explorer (IXPE) to observe the intermediate polar, known as EX Hydrae.
The team found a surprisingly high degree of X-ray polarization, which describes the direction of an X-ray wave’s electric field, as well as an unexpected direction of polarization in the X-rays coming from EX Hydrae. From these measurements, the researchers traced the X-rays back to their source in the system’s innermost region, close to the surface of the white dwarf.
What’s more, they determined that the system’s X-rays were emitted from a column of white-hot material that the white dwarf was pulling in from its companion star. They estimate that this column is about 2,000 miles high — about half the radius of the white dwarf itself and much taller than what physicists had predicted for such a system. They also determined that the X-rays are reflected off the white dwarf’s surface before scattering into space — an effect that physicists suspected but hadn’t confirmed until now.
The team’s results demonstrate that X-ray polarimetry can be an effective way to study extreme stellar environments such as the most energetic regions of an accreting white dwarf.
“We showed that X-ray polarimetry can be used to make detailed measurements of the white dwarf's accretion geometry,” says Sean Gunderson, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research, who is the study’s lead author. “It opens the window into the possibility of making similar measurements of other types of accreting white dwarfs that also have never had predicted X-ray polarization signals.”
Gunderson’s MIT Kavli co-authors include graduate student Swati Ravi and research scientists Herman Marshall and David Huenemoerder, along with Dustin Swarm of the University of Iowa, Richard Ignace of East Tennessee State University, Yael Nazé of the University of Liège, and Pragati Pradhan of Embry Riddle Aeronautical University.
A high-energy fountain
All forms of light, including X-rays, are influenced by electric and magnetic fields. Light travels in waves that wiggle, or oscillate, at right angles to the direction in which the light is traveling. External electric and magnetic fields can pull these oscillations in random directions. But when light interacts and bounces off a surface, it can become polarized, meaning that its vibrations tighten up in one direction. Polarized light, then, can be a way for scientists to trace the source of the light and discern some details about the source’s geometry.
The IXPE space observatory is NASA’s first mission designed to study polarized X-rays that are emitted by extreme astrophysical objects. The spacecraft, which launched in 2021, orbits the Earth and records these polarized X-rays. Since launch, it has primarily focused on supernovae, black holes, and neutron stars.
The new MIT study is the first to use IXPE to measure polarized X-rays from an intermediate polar — a smaller system than black holes and supernovae, but one known to be a strong emitter of X-rays.
“We started talking about how much polarization would be useful to get an idea of what’s happening in these types of systems, which most telescopes see as just a dot in their field of view,” Marshall says.
An intermediate polar gets its name from the strength of the central white dwarf’s magnetic field. When this field is strong, the material from the companion star is directly pulled toward the white dwarf’s magnetic poles. When the field is very weak, the stellar material instead swirls around the dwarf in an accretion disk that eventually deposits matter directly onto the dwarf’s surface.
In the case of an intermediate polar, physicists predict that material should fall in a complex sort of in-between pattern, forming an accretion disk that also gets pulled toward the white dwarf’s poles. The magnetic field should lift the disk of incoming material far upward, like a high-energy fountain, before the stellar debris falls toward the white dwarf’s magnetic poles, at speeds of millions of miles per hour, in what astronomers refer to as an “accretion curtain.” Physicists suspect that this falling material should run up against previously lifted material that is still falling toward the poles, creating a sort of traffic jam of gas. This pile-up of matter forms a column of colliding gas that is tens of millions of degrees Fahrenheit and should emit high-energy X-rays.
An innermost picture
By measuring any polarized X-rays emitted by EX Hydrae, the team aimed to test the picture of intermediate polars that physicists had hypothesized. In January 2025, IXPE took a total of about 600,000 seconds, or about seven days’ worth, of X-ray measurements from the system.
“With every X-ray that comes in from the source, you can measure the polarization direction,” Marshall explains. “You collect a lot of these, and they’re all at different angles and directions which you can average to get a preferred degree and direction of the polarization.”
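Averaging many per-event angles into one preferred degree and direction is conventionally done with Stokes-style parameters. The sketch below is a simplified illustration of that averaging — it omits instrument details such as IXPE’s modulation factor and per-event weighting, so it is not the team’s actual pipeline.

```python
import math

def stokes_average(angles_rad):
    """Average per-event polarization angles into Stokes-like
    parameters. Simplified: a real analysis divides by the detector's
    modulation factor and weights individual events."""
    n = len(angles_rad)
    # Angles are doubled because polarization is a 180-degree quantity.
    q = sum(math.cos(2 * a) for a in angles_rad) / n
    u = sum(math.sin(2 * a) for a in angles_rad) / n
    degree = math.hypot(q, u)        # 0 (unpolarized) to 1 (fully polarized)
    angle = 0.5 * math.atan2(u, q)   # preferred polarization direction
    return degree, angle

# Fully aligned events recover the injected angle; uniformly
# scattered ones average out to essentially zero degree.
aligned = [0.6] * 1000
uniform = [i * math.pi / 1000 for i in range(1000)]
d1, a1 = stokes_average(aligned)
d2, _ = stokes_average(uniform)
print(round(d1, 2), round(a1, 2), round(d2, 2))
```

The doubling of angles is the key design choice: a polarization direction and its 180-degree rotation are physically the same, so averaging raw angles directly would cancel signal that is actually present.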
Their measurements revealed a polarization degree of 8 percent, much higher than some theoretical models had predicted. From there, the researchers were able to confirm that the X-rays were indeed coming from the system’s column, and that this column is about 2,000 miles high.
“If you were able to stand somewhat close to the white dwarf’s pole, you would see a column of gas stretching 2,000 miles into the sky, and then fanning outward,” Gunderson says.
The team also measured the direction of EX Hydrae’s X-ray polarization, which they determined to be perpendicular to the white dwarf’s column of incoming gas. This was a sign that the X-rays emitted by the column were then bouncing off the white dwarf’s surface before traveling into space, and eventually into IXPE’s telescopes.
“The thing that’s helpful about X-ray polarization is that it’s giving you a picture of the innermost, most energetic portion of this entire system,” Ravi says. “When we look through other telescopes, we don’t see any of this detail.”
The team plans to apply X-ray polarization to study other accreting white dwarf systems, which could help scientists get a grasp on much larger cosmic phenomena.
“There comes a point where so much material is falling onto the white dwarf from a companion star that the white dwarf can’t hold it anymore, the whole thing collapses and produces a type of supernova that’s observable throughout the universe, which can be used to figure out the size of the universe,” Marshall offers. “So understanding these white dwarf systems helps scientists understand the sources of those supernovae, and tells you about the ecology of the galaxy.”
This research was supported, in part, by NASA.