Millions of students nationwide use text-supplemented audiobooks, learning tools that are thought to help those who struggle with reading keep up in the classroom. A new study from scientists at MIT’s McGovern Institute for Brain Research finds that many students do benefit from the audiobooks, gaining new vocabulary through the stories they hear. But study participants learned significantly more when audiobooks were paired with explicit one-on-one instruction — and this was especially true for students who were poor readers. The group’s findings were reported on March 17 in the journal Developmental Science.
“It is an exciting moment in this ed-tech space,” says Grover Hermann Professor of Health Sciences and Technology John Gabrieli, noting a rapid expansion of online resources meant to support students and educators. “The admirable goal in all this is: Can we use technology to help kids progress, especially kids who are behind for one reason or another?” His team’s study — one of few randomized, controlled trials to evaluate educational technology — suggests a nuanced approach is needed as these tools are deployed in the classroom. “What you can get out of a software package will be great for some people, but not so great for other people,” Gabrieli says. “Different people need different levels of support.” Gabrieli is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute.
Ola Ozernov-Palchik and Halie Olson, scientists in Gabrieli’s lab, launched the audiobook study in 2020, when most schools in the United States had closed to slow the spread of Covid-19. The pandemic meant the researchers would not be able to ask families to visit an MIT lab to participate in the study — but it also underscored the urgency of understanding which educational technologies are effective, and for whom.
“What we were really concerned about as the pandemic hit is that the types of gaps that we see widen through the summers — the summer slide that affects poor readers and disadvantaged children to a greater extent — would be amplified by the pandemic,” says Ozernov-Palchik. Many educational technologies purport to ameliorate these gaps. But, Ozernov-Palchik says, “fewer than 10 percent of educational technology tools have undergone any type of research. And we know that when we use unproven methods in education, the students who are most vulnerable are the ones who are left further and further behind.”
So the team designed a study that could be done remotely, involving hundreds of third- and fourth-graders around the country. They focused on evaluating the impact of audiobooks on children’s vocabularies, because vocabulary knowledge is so important for educational success. Ozernov-Palchik explains that books are important for exposing children to new words, and when children miss out on that experience because they struggle to read, they can fall further behind in school.
Audiobooks allow students to access similar content in a different way. For their study, the researchers partnered with Learning Ally, an organization that produces audiobooks synchronized with highlighted text on a computer screen, so students can follow along as they listen.
“The idea is, they’re going to learn vocabulary implicitly through accessing those linguistically rich materials,” Ozernov-Palchik says. But that idea was untested. In contrast, she says, “we know that really what works in education, especially for the most vulnerable students, is explicit instruction.”
Before beginning their study, Ozernov-Palchik and Olson trained a team of online tutors to provide that explicit instruction. The tutors — college students with no educational expertise — learned how to apply proven educational methods to support students’ learning and understanding of challenging new words they encountered in their audiobooks.
Students in the study were randomly assigned to an eight-week intervention. Some were asked to listen to Learning Ally audiobooks for about 90 minutes a week. Another group received one-on-one tutoring twice a week, in addition to listening to audiobooks. A third group, in which students participated in mindfulness practice without using audiobooks or receiving tutoring, served as a control.
A diverse group of students participated, spanning different reading abilities and socioeconomic backgrounds. The study’s remote design — with flexibly scheduled testing and tutoring sessions conducted over Zoom — helped make that possible. “I think the pandemic pushed researchers to rethink how we might use these technologies to make our research more accessible and better represent the people that we’re actually trying to learn about,” says Olson, a postdoc who was a graduate student in Gabrieli’s lab.
Testing before and after the intervention showed that overall, students in the audiobooks-only group gained vocabulary. But on their own, the books did not benefit everyone. Children who were poor readers showed no improvement from audiobooks alone, but did make significant gains in vocabulary when the audiobooks were paired with one-on-one instruction. Even good readers learned more vocabulary when they received tutoring, although the differences for this group were less dramatic.
Individualized, one-on-one instruction can be time-consuming, and may not be routinely paired with audiobooks in the classroom. But the researchers say their study shows that effective instruction can be provided remotely, and that it does not require highly trained professionals.
For students from households with lower socioeconomic status, the researchers found no evidence of significant gains, even when audiobooks were paired with explicit instruction — further emphasizing that different students have different needs. “I think this carefully done study is a note of caution about who benefits from what,” Gabrieli says.
The researchers say their study highlights the value and feasibility of objectively evaluating educational technologies — and that effort will continue. At Boston University, where she is a research assistant professor, Ozernov-Palchik has launched a new initiative to evaluate artificial intelligence-based educational tools’ impacts on student learning.
Slice and dice
SNIPE, a newly characterized biological defense system, directly protects bacteria by chopping up invading viral DNA.
What if the Trojan horse had been pulled to pieces, revealing the ruse and fending off the invasion, just as it entered the gates of Troy?
That’s an apt description of a newly characterized bacterial defense system that chops up foreign DNA.
Bacteria and the viruses that infect them, bacteriophages — phages for short — are ceaselessly at odds, with bacteria developing methods to protect themselves against phages that are constantly striving to overcome those safeguards.
New research from the Department of Biology at MIT, recently published in Nature, describes a defense system that is integrated into the protective membrane that encapsulates bacteria. SNIPE, which stands for surface-associated nuclease inhibiting phage entry, contains a nuclease domain that cleaves genetic material, chopping the invading phage genome into harmless fragments before it can appropriate the host’s molecular machinery to make more phages.
Daniel Saxton, a postdoc in the Laub Lab and the paper’s first author, was initially drawn to studying this bacterial defense system in E. coli, in part because it is highly unusual to have a nuclease that localizes to the membrane, as most nucleases are free-floating in the cytoplasm, the gelatinous fluid that fills the space inside cells.
“The other thing that caught my attention is that this is something we call a direct defense system, meaning that when a phage infects a cell, that cell will actually survive the attack,” Saxton says. “It’s hard to fend off a phage directly in a cell and survive — but this defense system can do it.”
Light it up
For Saxton, the project came into focus during a fluorescence-based experiment in which viral genetic material would light up if it successfully penetrated the bacteria.
“SNIPE was obliterating the phage DNA so fast that we couldn’t even see a fluorescent spot,” Saxton recalls. “I don’t think I’ve ever seen such an effective defense system before — you can barrage the bacteria with hundreds of phage per cell, but SNIPE is like god-tier protection.”
When the nuclease domain of SNIPE was mutated so it couldn’t chop up DNA, fluorescent spots appeared as usual, and the bacteria succumbed to the phage infection.
Bacteria maintain tight control over all their defense systems, lest they be turned against their host. Some systems remain dormant until they flare up — for example, halting all protein translation in the cell — while others can distinguish between bacterial DNA and foreign, invading phage DNA. Only two mechanisms in the latter category had been characterized before researchers uncovered SNIPE.
“Right now, the phage field is at a really interesting spot where people are discovering phage defense systems at a breakneck pace,” Saxton says.
Problems at the periphery
Saxton says they had to approach the work in a somewhat roundabout way because there are currently no published structures depicting all the steps of phage genome injection. Studying processes at the membrane is challenging: Membranes are dense and chaotic, and phage genome injection is a highly transient process, lasting only a few minutes.
SNIPE seems to discern viral DNA by interacting with proteins the phage uses to tunnel through the bacteria’s protective membrane. This “subcellular localization,” according to Saxton, may also prevent SNIPE from inadvertently chopping up the bacteria’s own genetic material.
The model outlined in the paper is that one region of SNIPE binds to a bacterial membrane protein called ManYZ, while another region likely binds to the tape measure protein from the phage.
The tape measure protein got its name because it determines the length of the phage tail — the part of the phage between the small, leglike protrusions and the bulbous head, which contains the phage’s genetic material. The researchers revealed that the phage’s tape measure protein enters the cytoplasm during injection, a phenomenon that had not been physically demonstrated before.
There may also be other proteins or interactions involved.
“If you shunt the phage genome injection through an alternate pathway that isn’t ManYZ, suddenly SNIPE doesn’t defend against the phage nearly as well,” Saxton says. “It’s unclear exactly how these proteins interact, but we do know that these two proteins are involved in this genome injection process.”
Future directions
Saxton hopes that future work will expand our understanding of what occurs during phage genome injection and uncover the structures of the proteins involved, especially the tunnel complex in the membrane through which phages insert their genome.
Members of the Laub Lab are already collaborating with another lab to determine the structure of SNIPE. In the meantime, Saxton has been working on a new defense system in which molecular mimicry — bacterial proteins imitating phage proteins — may play a role.
Michael T. Laub, the Salvador E. Luria Professor of Biology and a Howard Hughes Medical Institute investigator, notes that one of the breakthrough experiments for demonstrating how SNIPE works came from a brainstorming session at a lab retreat.
“Daniel and I were kind of stuck with how to directly measure the effect of SNIPE during infection, but another postdoc in the lab, Ian Roney, who is a co-author on the paper, came up with a very clever idea that ultimately worked perfectly,” Laub recalls. “It’s a great example of how powerful internal collaborations can be in pushing our science forward.”
Physicists zero in on the mass of the fundamental W boson particle
The team’s ultra-precise measurement confirms the Standard Model’s predictions.
When fundamental particles are heavier or lighter than expected, physicists’ understanding of the universe can tip into the unknown. A particle that is just beyond its predicted mass can unravel scientists’ assumptions about the forces that make up all of matter and space. But now, a new precision measurement has reset the balance and confirmed scientists’ theories, at least for one of the universe’s core building blocks.
In a paper appearing today in the journal Nature, an international team including MIT physicists reports a new, ultraprecise measurement of the mass of the W boson.
The W boson is one of two elementary particles that embody the weak force, which is one of the four fundamental forces of nature. The weak force enables certain particles to change identities, such as from protons to neutrons and vice versa. This morphing is what drives radioactive decay, as well as nuclear fusion, which powers the sun.
Now, scientists have determined the mass of the W boson by analyzing more than 1 billion proton-colliding events produced by the Large Hadron Collider (LHC) at CERN (the European Organization for Nuclear Research) in Switzerland. The LHC accelerates protons toward each other at close to the speed of light. When they collide, two protons can produce a W boson, among a shower of other particles.
Catching a W boson is nearly impossible, as it decays almost immediately into two types of particles, one of which, a neutrino, is so elusive that it cannot be detected. Scientists are left to measure the other particle, known as a muon, and model how it might add up to the total mass of its parent, the W boson. In the new study, scientists used the Compact Muon Solenoid (CMS) experiment, a particle detector at the LHC that precisely tracks muons and other particles produced in the aftermath of proton collisions.
From billions of proton-proton collisions, the team identified 100 million events that produced a W boson decaying to a muon and a neutrino. For each of these events, they carried out detailed analyses to zero in on a precise mass measurement. In the end, they determined that the W boson has a mass of 80360.2 ± 9.9 megaelectron volts (MeV). This new mass is in line with predictions of the Standard Model, which is physicists’ best rulebook for describing the fundamental particles and forces of nature.
The precision of the new measurement is on par with a previous measurement made in 2022 by the Collider Detector at Fermilab (CDF). That measurement took physicists by surprise, as it was significantly heavier than what the Standard Model predicted, and therefore raised the possibility of “new physics,” such as particles and forces that have yet to be discovered.
Because the new CMS measurement is just as precise as the CDF result and agrees with the Standard Model along with a number of other experiments, it is more likely that physicists are on solid ground in terms of how they understand the W boson.
“It’s just a huge relief, to be honest,” says Kenneth Long, a lead author of the study, who is a senior postdoc in MIT’s Laboratory for Nuclear Science. “This new measurement is a strong confirmation that we can trust the Standard Model.”
The study is authored by more than 3,000 members of CERN’s CMS Collaboration. The core group who worked on the new measurement includes about 30 scientists from 10 institutions, led by a team at MIT that includes Long; Tianyu Justin Yang PhD ’24; David Walter and Jan Eysermans, who are both MIT postdocs in physics; Guillelmo Gomez-Ceballos, a principal research scientist in the Particle Physics Collaboration; Josh Bendavid, a former research scientist; and Christoph Paus, a professor of physics at MIT and principal investigator with the Particle Physics Collaboration.
Piecing together
The W boson was discovered in 1983 and is predicted to be the fourth heaviest among all the fundamental particles. Multiple experiments have aimed to zero in on the particle’s mass, with varying degrees of precision. For the most part, these experiments have produced measurements that agree with the Standard Model’s predictions. The 2022 measurement by Fermilab’s CDF experiment is the one significant outlier. It also happens to be the most precise measurement to date.
“If you take the CDF measurement at face value, you would say there must be physics beyond the Standard Model,” says co-author Christoph Paus. “And of course that was the big mystery.”
Paus and his colleagues sought to either support or refute the CDF’s findings by making an independent measurement, with an experiment that matches CDF’s precision. Their new W boson mass measurement is a product of 10 years’ worth of work, both to analyze actual particle collision events and to simulate all the scenarios that could produce those events.
For their new study, the physicists analyzed proton collision events that were produced at the LHC in 2016. When it is running, the particle collider generates proton collisions at a furious rate of about one every 25 nanoseconds. The team analyzed a portion of the LHC’s 2016 dataset that encompasses billions of proton-proton collisions. Among these, they identified about 100 million events that produced a very short-lived W boson.
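For scale (simple arithmetic on the figure above, not a number from the paper), one collision every 25 nanoseconds corresponds to a bunch-crossing rate of

$$\frac{1}{25\times 10^{-9}\ \mathrm{s}} = 4\times 10^{7}\ \mathrm{s^{-1}} = 40\ \mathrm{MHz}.$$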
“A particle like the W boson exists for a teeny tiny moment — something like 10⁻²⁴ seconds — before decaying to two particles, one of which is a neutrino that can’t be measured directly,” Long explains. “That’s the tricky part: You have to measure the other particle — a muon — really well, and be able to piece things together with only one piece of the puzzle.”
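As a rough consistency check (our illustration, using the textbook uncertainty relation rather than anything computed in the paper), the W boson’s measured decay width of roughly 2.1 GeV implies a lifetime of

$$\tau = \frac{\hbar}{\Gamma_W} \approx \frac{6.6\times 10^{-25}\ \mathrm{GeV\cdot s}}{2.1\ \mathrm{GeV}} \approx 3\times 10^{-25}\ \mathrm{s},$$

in line with the order of magnitude Long describes.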
Gathering momentum
When a muon is produced from the decay of a W boson, it carries half of the W boson’s mass, which is converted into momentum that carries the muon away from the original collision. Due to the strong magnetic field inside the CMS detector, the electrically charged muon follows a path whose curvature is a function of its momentum. Scientists’ challenge is to track the muon’s path and every interaction it may have with other particles and its surroundings, in order to estimate its initial momentum.
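To illustrate the standard curvature-momentum relation (textbook accelerator physics; the numbers below are our own, not taken from the study), a particle of unit charge in a magnetic field B follows an arc of radius r with transverse momentum

$$p_T\,[\mathrm{GeV}/c] \approx 0.3\, B\,[\mathrm{T}]\; r\,[\mathrm{m}].$$

In the CMS solenoid’s 3.8-tesla field, a muon carrying about half the W boson’s mass-energy (roughly 40 GeV) traces a circle some 35 meters in radius — a track that bends only slightly across the detector, which is why its curvature must be measured with such extreme care.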
The muon’s momentum is also influenced by the momentum of the W boson before it decays. Disentangling the effect of the W boson’s motion from the effect of its mass presented a major challenge. To infer the W boson mass, the team first carried out simulations of every scenario they could think of that a muon might experience after a proton-proton collision in the chaotic environment of the particle collider. In all, the team produced 4 billion such simulated events described by state-of-the-art theoretical calculations. The simulations encoded diverse hypotheses about how the muon momentum is affected by the physical features of the CMS detector, as well as uncertainties in the predictions that govern W boson production in LHC collisions.
The researchers compared their simulations with data from the 2016 LHC run. For every proton-proton collision event that occurs in the collider, scientists can use the CMS detector to precisely measure the energy and momentum of the resulting particles, such as muons. The team analyzed CMS measurements of muons produced in over 100 million W boson events, then overlaid those data onto their simulations of the muon momentum to extract a new mass for the W boson.
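The spirit of this template-comparison approach can be sketched in a few lines of code. The following toy is our own construction — far simpler than the actual CMS analysis, with made-up resolution numbers — but it shows how simulated spectra under different mass hypotheses can be ranked against a “measured” spectrum:

```python
# Toy template fit (illustration only, not the CMS analysis): simulate muon
# momentum spectra for different W-mass hypotheses, then rank each template
# against the "measured" spectrum with a chi-square score.
import numpy as np

rng = np.random.default_rng(0)

def muon_pt_sample(m_w, n, smear=2.5):
    # Crude stand-in for full simulation: each muon carries about half the
    # W mass, blurred by detector resolution and the W boson's own motion.
    return rng.normal(loc=m_w / 2.0, scale=smear, size=n)

bins = np.linspace(30.0, 50.0, 41)

# "Data" generated at a true mass of 80.36 GeV, which the fit must recover.
data_hist, _ = np.histogram(muon_pt_sample(80.36, 1_000_000), bins=bins)

best_mass, best_chi2 = None, np.inf
for m_hyp in np.arange(80.0, 80.71, 0.01):  # scan hypotheses in 10 MeV steps
    template, _ = np.histogram(muon_pt_sample(m_hyp, 1_000_000), bins=bins)
    mask = template > 0
    chi2 = np.sum((data_hist[mask] - template[mask]) ** 2 / template[mask])
    if chi2 < best_chi2:
        best_mass, best_chi2 = m_hyp, chi2

print(f"best-fit W mass: {best_mass:.2f} GeV")  # lands near the 80.36 GeV input
```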
That mass — 80360.2 ± 9.9 megaelectron volts — is significantly lighter than the CDF experiment’s measurement. What’s more, the new estimate is within the range of what the Standard Model predicts for the W boson’s mass, bolstering physicists’ confidence in the Standard Model and its descriptions of the major particles and forces of nature.
“With the combination of our really precise result and other experiments that line up with the Standard Model’s predictions, I think that most people would place their bets on the Standard Model,” Long says. “Though I do think people should continue doing this measurement. We are not done.”
“We want to add more data, make our analysis techniques more precise, and basically squeeze the lemon a little harder. There is always some juice left,” Paus adds. “With a better look, then we can say for certain whether we truly understand this one fundamental building block.”
This work was supported, in part, by multiple funding agencies, including the U.S. Department of Energy. The analysis also made use of the SubMIT computing facility, sponsored by the MIT Department of Physics.
Sixteen new START.nano companies are developing hard-tech solutions with the support of MIT.nano
Startup accelerator program grows to over 30 companies, almost half of them with MIT pedigrees.
MIT.nano has announced that 16 startups became active participants in its START.nano program in 2025, more than doubling the number of new companies from the previous year. Aimed at speeding the transition of hard-tech innovation to market, START.nano supports new ventures through discounted use of MIT.nano shared facilities and guided access to the MIT innovation ecosystem. The newly engaged startups are developing solutions for some of the world’s greatest challenges in health, climate, energy, semiconductors, novel materials, and quantum computing.
“The unique resources of MIT.nano enable not just the foundational research of academia, but the translation of that research into commercial innovations through startups,” says START.nano Program Manager Joyce Wu SM ’00, PhD ’07. “The START.nano accelerator supports early-stage companies from MIT and beyond with the tools and network they need for success.”
Launched in 2021, START.nano aims to increase the survival rate of hard-tech startups by easing their journey from the lab to the real world. In addition to receiving access to MIT.nano’s laboratories, program participants are invited to present at startup exhibits at MIT conferences and at exclusive events, including the newly launched PITCH.nano competition.
“For an early-stage startup working at the frontier of superconductor discovery, the combination of infrastructure and community has been irreplaceable,” says Jason Gibson, CEO and co-founder of Quantum Formatics. “START.nano isn’t just a resource,” adds Cynthia Liao MBA ’24, CEO and co-founder of Vertical Semiconductor. “It’s a strategic advantage that accelerates our roadmap, allowing us to iterate quickly to meet customer needs and strengthen our competitive edge.”
Although an MIT affiliation is not required, five of the 16 companies in the new cohort are led by MIT alumni, and an additional three have MIT affiliation. In total, 49 percent of the startups in START.nano are founded by MIT graduates.
Here are the intended impacts of the 16 new START.nano companies:
Acorn Genetics is developing a "smartphone of sequencing," launching the power of genetic analysis out of slow, centralized labs and into the hands of consumers for fast, portable, and affordable sequencing.
Addis Energy leverages oil, gas, and geothermal drilling technologies to unlock the chemical potential of iron-rich rocks. By injecting engineered fluids, they harness the earth’s natural energy to produce ammonia that is both abundant and cost-effective.
Augmend Health uses virtual reality and AI to deliver clinical data intelligence services for specialty care that turn incomplete documentation into revenue, compliance, and better treatment decisions.
Brightlight Photonics is building high-performance laser infrastructure at chip scale, integrating Titanium:Sapphire gain to deliver broadband, high-power, low-noise optical sources for advanced photonic systems.
Cahira Technologies is creating the new paradigm of brain-computer symbiosis for treating intractable diseases and human augmentation through autonomous, nonsurgical neural implants.
Copernic Catalysts is leveraging computational modeling to develop and commercialize transformational catalysts for low-cost and sustainable production of bulk chemicals and e-fuels.
Daqus Energy is unlocking high-energy lithium-ion batteries using critical metal-free organic cathodes.
Electrified Thermal Solutions is reinventing the firebrick to electrify industrial heat.
Guardion is making analytical instruments, chemical detectors, and radiation detectors more sensitive, portable, and easier to scale with nanomaterial-based ion detectors.
Mantel Capture is designing carbon capture materials to operate at the high temperatures found inside boilers, kilns, and furnaces — enabling highly efficient carbon capture that has not been possible until now.
nOhm Devices is developing highly efficient cryogenic electronics for quantum computers and sensors.
Quantum Formatics is speeding discovery of the world’s next superconductors using proprietary AI.
Qunett is building the foundational hardware stack for deployable quantum networks to power the next era of global connectivity.
Rheyo is developing new ways to make dental care more effective, efficient, and easy through advanced materials and technology.
Vertical Semiconductor is commercializing high-voltage, high-density, high-efficiency vertical GaN (gallium nitride) to power the next era of compute.
VioNano Innovations is developing specialty material solutions that reduce variability and improve precision in semiconductor manufacturing, allowing chipmakers to build even smaller, faster, and more cost-effective chips.
START.nano now comprises over 32 companies and 11 graduates — ventures that have moved beyond the prototyping stages, and some into commercialization.
Researchers develop molecular editing tool to relocate alcohol groups
This new technique will allow chemists to efficiently fine-tune the chemical structure of an organic molecule.
A significant challenge for researchers in materials science and drug discovery is that even the most minor change to a molecule’s structure can completely alter its function. Historically, making these adjustments meant researchers had to re-synthesize the target molecule from scratch — a time-consuming and expensive bottleneck akin to tearing down a house just to move a lamp.
In an exciting discovery recently published in Nature, MIT chemists led by Professor Alison Wendlandt have developed a precision technique that allows scientists to seamlessly relocate alcohol functional groups from one spot on a molecule to a neighboring site. This process bypasses the need to rebuild the entire structure and is the result of a multi-year collaboration with Bristol Myers Squibb.
Functional group repositioning
The reaction uses a special light-sensitive molecule called decatungstate as a catalyst to trigger a highly controlled “migration” of the alcohol group. The process is remarkably predictable, ensuring the molecule retains its precise 3D shape and orientation throughout the move.
The ability to implement subtle structural tweaks without the waste of “from-scratch” synthesis eliminates a primary hurdle that has long plagued the field. Furthermore, because the reaction is gentle enough to work on complex, nearly finished structures, it serves as a powerful fine-tuning tool for late-stage drug candidates.
Precision editing to unlock new chemical designs
When combined with existing chemical methods, this tool provides new pathways to create challenging molecular architectures and oxygenation patterns that were previously out of reach.
“This alcohol migration strategy allows for precise, molecular-level tuning of oxygen atom positions,” says Qian Xu, the co-first author of the paper and a postdoc in the Wendlandt Group. “With predictable stereo- and regioselectivity and late-stage operability, it presents an enticing chance to modify natural products and drug molecules through ‘editing.’”
Ultimately, this precision editing tool holds the potential to dramatically improve the efficiency of molecular design campaigns, accelerating the development of new pharmaceuticals, materials, and agrochemicals.
In addition to Wendlandt and Xu, MIT contributors include co-lead author and graduate student Yichen Nie, recent postdoc Ronghua Zhang, and professor of chemistry Jeremiah A. Johnson. Other authors include Jacob-Jan Haaksma of the University of Groningen in The Netherlands; Natalie Holmberg-Douglas, Farid van der Mei, and Chloe Williams of Bristol Myers Squibb; and Paul M. Scola of Actithera.
Study reveals “two-factor authentication” system that controls microRNA destruction
Researchers uncovered how cells selectively destroy certain microRNAs — key gene regulators — through a mechanism that requires two RNA signals working together.
Cells rely on tiny molecules called microRNAs to tune which genes are active and when, and they must carefully control the lifespan of these microRNAs to prevent widespread disruption to gene regulation.
A new study led by researchers at MIT’s Whitehead Institute for Biomedical Research and Germany’s Max Planck Institute of Biochemistry reveals how cells selectively eliminate certain microRNAs through an unexpectedly intricate molecular recognition system. The open-access work, published on March 18 in Nature, shows that the process requires two separate RNA signals, similar to how many digital systems require two forms of identity verification before granting access.
The findings explain how cells use this “two-factor authentication” system to ensure that only intended microRNAs are destroyed, leaving the rest of the gene regulation machinery in operation.
MicroRNAs are short strands of RNA that help control gene expression. Working together with a protein called Argonaute, they bind to specific messenger RNAs — the molecules that carry genetic instructions from DNA to the cell’s protein-making machinery — and trigger their destruction. In this way, microRNAs can reduce the production of specific proteins.
While scientists recognized that microRNAs could be destroyed through a pathway known as target-directed microRNA degradation, or TDMD, the details of how cells recognized which microRNAs to eliminate remained unclear.
“We knew there was a pathway that could target microRNAs for degradation, but the biochemical mechanism behind it wasn’t understood,” says MIT Professor David Bartel, a Whitehead Institute member and co-senior author of the study.
Earlier work from Bartel’s lab and others had identified a key player in this pathway: the ZSWIM8 E3 ubiquitin ligase. E3 ubiquitin ligases are involved in the cell’s recycling system and attach a small molecular tag called ubiquitin to other proteins, marking them for destruction.
The researchers first showed that the ZSWIM8 E3 ligase specifically binds and tags Argonaute, the protein that holds microRNAs and helps regulate genes. The researchers’ next challenge was to understand how this machinery recognized only Argonaute complexes carrying specific microRNAs that should be degraded.
The answer turned out to be surprisingly sophisticated.
Using a combination of biochemistry and cryo-electron microscopy — an imaging technique that reveals molecular structures at near-atomic resolution — the researchers discovered that the degradation system relies on a dual-RNA recognition process. First, Argonaute must carry a specific microRNA. Second, another RNA molecule called a “trigger RNA” must bind to that microRNA in a particular way.
The degradation machinery activates only when both signals are present.
This dual requirement ensures exquisite specificity. Each cell contains over a hundred thousand Argonaute–microRNA complexes regulating many genes, and destroying them indiscriminately would disrupt essential biological processes.
“The vast majority of Argonaute molecules in the cell are doing useful work regulating gene expression,” says Bartel, who is a professor of biology at MIT and also a Howard Hughes Medical Institute investigator. “You only want to degrade the ones carrying a particular microRNA and bound to the right trigger RNA. Without that specificity, the cell would lose its microRNAs and the essential regulation that they provide.”
The structural images revealed complex molecular interactions. The ZSWIM8 ligase detects multiple structural changes that occur when the two RNAs bind together within the Argonaute protein.
“When we saw the structure, everything clicked,” says Elena Slobodyanyuk, a graduate student in Bartel’s lab and co-first author of the study. “You could see how the pairing of the trigger RNA with the microRNA reshapes the Argonaute complex in a way that the ligase can recognize.”
Beyond explaining how TDMD works, the findings may impact how scientists think about the regulation of RNA molecules more broadly.
“A lot of E3 ligases recognize their targets through simpler signals,” says Jakob Farnung, co-first author and researcher in the Department of Molecular Machines and Signaling at the Max Planck Institute of Biochemistry. “It was like opening a treasure chest where every detail revealed something new and mesmerizing.”
MicroRNAs typically persist in cells much longer than most messenger RNAs, but some degrade far more quickly, and the TDMD pathway appears to account for many of these unusually short-lived microRNAs.
The researchers are now investigating whether other RNAs can trigger similar degradation pathways and whether additional microRNAs are regulated through variations of the mechanism shown in this study.
“This opens up a whole new way of thinking about how RNA molecules can control protein degradation,” says Brenda Schulman, study co-senior author and director of the Department of Molecular Machines and Signaling at the Max Planck Institute of Biochemistry. “Here, the recognition was far more elaborate than expected. There’s likely much more left to discover.”
Uncovering the details of this intricate regulatory system required interdisciplinary collaboration, combining expertise in RNA biochemistry, structural biology, and ubiquitin enzymology to solve this long-standing molecular puzzle.
“This was a project that required the strengths of two labs working at the forefront of their fields,” says Schulman, who is also an alum of Whitehead Institute. “It was an incredible team effort.”
How bacteria suppress immune defenses in stubborn wound infections
Study finds a common bacterium can suppress the body’s early warning system in wounds, causing infections to persist and create an environment that allows other bacteria to take hold.
Chronic wound infections are notoriously difficult to manage because some bacteria can actively interfere with the body’s immune defenses. In wounds, Enterococcus faecalis (E. faecalis) is particularly resilient — it can survive inside tissues, alter the wound environment, and weaken immune signals at the injury site. This disruption creates conditions where other microbes can easily establish themselves, resulting in multi-species infections that are complex and slow to resolve. Such persistent wounds, including diabetic foot ulcers and post-surgical infections, place a heavy burden on patients and health care systems, and sometimes lead to serious complications such as amputations.
Now, researchers have discovered that E. faecalis releases lactic acid to acidify its surroundings, suppressing the immune-cell signaling needed to start a proper response to infection. By silencing the body’s defenses, the bacterium can cause persistent and hard-to-treat wound infections. This explains why some wounds struggle to heal, even with treatment, and why infections involving multiple bacteria are especially difficult to eradicate.
The work was led by researchers from the Singapore-MIT Alliance for Research and Technology (SMART) Antimicrobial Resistance (AMR) interdisciplinary research group, alongside collaborators from the Singapore Centre for Environmental Life Sciences Engineering at Nanyang Technological University (NTU Singapore), MIT, and the University of Geneva in Switzerland.
In a paper titled “Enterococcus faecalis-derived lactic acid suppresses macrophage activation to facilitate persistent and polymicrobial wound infections,” recently published in Cell Host & Microbe, the researchers documented how E. faecalis releases large amounts of lactic acid during infection. This acidity suppresses the activation of macrophages — immune cells that normally help to clear infections — and interferes with several important internal processes that help the cell recognize and respond to infection. As a result, the mechanisms that cells rely on to send out “danger” signals are suppressed, leaving the macrophages unable to fully activate.
Researchers found that E. faecalis uses a two-step mechanism to achieve this. Lactic acid enters the macrophages through a lactate transporter called MCT-1 and also binds to a lactate-sensing receptor, GPR81, on the cell surface. By engaging both pathways, the bacterium effectively shuts down downstream immune signaling and blocks the macrophage’s inflammatory response, allowing E. faecalis to persist in the wound much longer than it should. Specifically, the lactic acid prevents a key immune alarm signal, known as NF-κB, from switching on inside these cells.
This was proven in a mouse wound model, where strains of E. faecalis that could not make lactic acid were cleared much more quickly, and the wounds also showed stronger immune activity. In wounds infected with both E. faecalis and Escherichia coli, the weakened immune response caused by lactic acid also allowed E. coli to grow better. This explains why wound infections often involve multiple species of bacteria and become harder to treat over time, particularly since E. faecalis is among the most common bacteria found in chronic wounds.
“Chronic wound infections often fail not because antibiotics are powerless, but because the immune system has effectively been ‘switched off’ at the infection site. We found that E. faecalis floods the wound with lactic acid, lowering pH and muting the NF-κB alarm inside macrophages — the very cells that should be calling for help. By pinpointing how acidity rewires immune signalling, we now have clear targets to reactivate the immune response,” says first author Ronni da Silva, research scientist at SMART AMR, former postdoc in the lab of co-author and MIT professor of biology Jianzhu Chen, and SCELSE-NTU visiting researcher.
“This discovery strengthens our understanding of host-pathogen interactions and offers new directions for developing treatments and wound care that target the bacteria’s immunosuppressive strategies. By revealing how the immune response is shut down, this research may help improve infection management and support better recovery outcomes for patients, especially those with chronic wounds or weakened immunity,” says Kimberly Kline, principal investigator at SMART AMR, SCELSE-NTU visiting academic, professor at the University of Geneva, and corresponding author of the paper.
By identifying lactic-acid-driven immune suppression as a root cause of persistent wound infections, this work highlights the potential of treatment approaches that support the immune system, rather than rely on antibiotics alone. This could lead to therapies that help wounds heal more reliably and reduce the risk of complications. Potential directions include reducing acidity in the wound or blocking the signals that lactic acid uses to switch off immune cells.
Building on their study, the researchers plan to explore validation in additional pathogens and human wound samples, followed by assessments in advanced preclinical models ahead of any potential clinical trials.
The research was partially supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.
The electrons that power our society flow left and right through the circuitry in our electronics, back and forth along the transmission lines that make up our power grid, and up and down to light up every floor of every building. But the electrons in newly discovered “moiré crystals” move in much stranger ways. They can move left and right, back and forth, or up and down in our three-dimensional world, but these electrons also act as if they can teleport in and out of a mysterious fourth dimension of space that is perpendicular to our perceivable reality. Physicists have found that this strange, newly discovered quantum behavior has nothing to do with the electrons themselves and everything to do with the strange material environment in which they live.
The electrons in moiré crystals leap into a fourth dimension through a process called “quantum tunneling.” While a soccer ball sitting at the bottom of a hill will stay put until someone retrieves it, a quantum particle in a valley can jump out all on its own. Quantum tunneling may seem magical to us, but it is quite commonplace in the microscopic quantum world, on the length scales of atoms. Quantum tunneling is also important on larger length scales, particularly in large superconducting circuits that underlie an emerging landscape of quantum technology, as recognized by the 2025 Nobel Prize in Physics.
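For intuition (a textbook result, not one derived in the new paper), the WKB approximation gives the probability for a particle of mass m and energy E to tunnel through a barrier of height V and width L:

$$T \sim e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m(V-E)}}{\hbar},$$

which makes plain why tunneling is routine for electrons on atomic length scales yet utterly negligible for a soccer ball at the bottom of a hill.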
However, quantum tunneling in moiré crystals is different, in that once an electron tunnels, physicists have now measured that it acts as if it had tunneled into a completely different world and come back again, as if it had been transported through a fourth “synthetic” dimension.
In a paper published recently in the journal Nature, a team of MIT researchers realize a long-anticipated scalable technique for producing high-quality moiré materials as moiré crystals, overcoming a materials bottleneck for next-generation electronic applications. In addition, the electrons in these crystals act as if they can teleport through a fourth dimension of space, unlocking a realistic materials approach for realizing numerous theoretical predictions of higher-dimensional superconductivity and higher-dimensional topological properties in the laboratory.
The study’s co-lead authors are Kevin Nuckolls, a Pappalardo postdoc in physics at MIT, and Nisarga Paul PhD ’25, and the study’s corresponding author is Joe Checkelsky, professor of physics at MIT. In addition, the study’s MIT co-authors include Alan Chen, Filippo Gaggioli, Joshua Wakefield, and Liang Fu, along with collaborators at Harvard University, Toho University, and the National High Magnetic Field Laboratory.
Crystal perfection
To make a moiré material, physicists start with atomically thin two-dimensional (2D) materials, like the thinnest sheets of carbon known as graphene. Moiré materials can be created by combining individual sheets of the same 2D material and twisting them back and forth with respect to one another. Moiré materials can also be created by combining two different 2D materials that are very similar, but not quite the same, which ensures that they can never perfectly match one another even when carefully aligned. Both of these methods create intricate interference patterns where the individual layers of moiré materials are nearly aligned in some areas and visibly misaligned in others. Physicists call these patterns “moiré superlattices,” named after historical French fabrics that show similarly beautiful patterns generated by overlaying two different threading patterns.
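To make the geometry concrete (an illustrative textbook formula, not one quoted in the article), two identical lattices with lattice constant a twisted by a small angle θ produce a moiré pattern that repeats with period

$$\lambda_{\mathrm{moir\acute{e}}} = \frac{a}{2\sin(\theta/2)} \approx \frac{a}{\theta},$$

so for graphene (a ≈ 0.246 nm) a twist of about 1 degree yields a superlattice period of roughly 14 nanometers — dozens of times larger than the atomic spacing.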
For more than a decade, moiré materials have completely reshaped how physicists design and control quantum material properties, and the physics labs at MIT have been a hotbed of transformative discoveries in this ever-growing research field. Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT, and Raymond Ashoori, professor of physics at MIT, were early adopters of new techniques for fabricating moiré materials. Together in 2014, their labs discovered that electrons in moiré materials made from graphene and the 2D material boron nitride live in an intricate quantum fractal known as “Hofstadter’s butterfly.” In 2018, Jarillo-Herrero’s lab discovered that moiré materials made from twisting two sheets of graphene are fertile ground for unconventional superconductivity that is, by some metrics, among the strongest ever discovered. Long Ju, the Lawrence C. and Sarah W. Biedenharn Associate Professor of Physics, and his lab discovered in 2024 that moiré materials made from multilayer graphene and boron nitride cause electrons to split apart into fractional pieces, a quantum phenomenon previously thought to be exclusively confined to extremely high magnetic fields, but now realized without the need for a magnetic field.
Common across all of these experiments, and those performed around the world, were the tireless efforts of students and postdocs in carefully assembling moiré material devices by hand, one at a time. To make a moiré material device, 2D materials like graphene are peeled using Scotch tape from rock-like crystals, such as graphite. Then, sticky polymer films and microscopes enable researchers to pick up different 2D materials one by one with a precise sequence of twist angles. Finally, these stacks of 2D materials are etched into individual devices that allow researchers to investigate their properties in the lab.
In their new study, Joe Checkelsky and his lab have discovered a new technique for generating moiré materials that skips over all of these laborious steps. Their new method takes an entirely different approach, and it’s one that can assemble moiré materials by the tens of thousands. Instead of assembling samples one by one and layer by layer, Checkelsky and his lab have found new chemical synthesis routes that enlist Mother Nature’s help to grow “moiré crystals” with high-quality moiré superlattices built into each of their layers. By analogy, if one were to think of previous generations of moiré materials like two stacked sheets of paper with different line spacings, Checkelsky has figured out how to generate entire libraries of encyclopedias whose odd-numbered pages and even-numbered pages have two different line spacings.
“It feels incredible for our team to have made this materials discovery, particularly at MIT,” says Nuckolls, co-lead author on the work. “Moiré materials have become a central focus of quantum materials research today in large part because of the work of our colleagues just down the hallway.”
In the end, it turns out that nature is by far the best at assembling moiré materials when given the right tools. The MIT team discovered that naturally grown moiré materials are nearly perfect and highly reproducible. This offers a long-anticipated proof-of-concept demonstration of a potentially scalable route to using moiré materials in next-generation electronics. Although there are many more obstacles to be overcome to transform these fundamental science results into usable technology, the team has demonstrated a crucial first step in the right direction.
4D in 4K
After discovering how to grow and manipulate moiré superlattices in moiré crystals, the team began to investigate their properties. Initially, the team found that the metallic properties of these materials were surprisingly complicated, but they soon shifted their perspective to think from a higher-dimensional point of view, an idea inspired by theoretical proposals made roughly half a century ago. To peer into this prospective four-dimensional quantum world, the team performed detailed studies of the electronic and magnetic properties of moiré crystals at very large magnetic fields. The electrons in common metals move in tight circular orbits when placed in a magnetic field. However, something very special happens when they move in moiré crystals with two different interfering lattices. This interference generates a moiré superlattice that is mathematically equivalent to an emergent four-dimensional “superspace” lattice. Guided by this new 4D superspace lattice, the team discovered that these electrons could now move through this fourth dimension when their motion aligns to the direction where the two competing lattices interfere the most.
“Metaphorically, our measurements uncover ‘shadows’ of the emergent 4D landscape upon which the electrons live,” says Nuckolls. “By carefully analyzing these 3D silhouettes from different angles and perspectives, our measurement reconstructs the 4D landscape that guides electrons in moiré crystals.”
Although this extra synthetic dimension is fictitious and the electrons in moiré crystals are actually still stuck in our 3D reality, they simulate a four-dimensional quantum world so closely that the measured properties of moiré crystals appear as if the researchers had actually performed their experiments in 4D. It seems like moiré crystals aren’t particularly bothered by whether the fourth dimension is fictitious and synthetic or if it’s real. It’s all the same to them.
“Mathematically, the equations describing the electron dynamics in these crystals are four-dimensional,” says co-lead author Nisarga Paul. “The electrons propagate in the synthetic dimension just as they do in our world’s three physical dimensions. It’s hard to detect this motion, but one of the striking realizations was that a magnetic field can reveal fingerprints of this synthetic dimension in experimentally measurable electronic properties known as quantum oscillations.”
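For background (the standard Onsager relation of solid-state physics, not a result specific to this work), quantum oscillations are periodic in the inverse magnetic field, with a frequency F set by an extremal cross-sectional area A_F of the Fermi surface:

$$\Delta\!\left(\frac{1}{B}\right) = \frac{2\pi e}{\hbar A_F} \quad\Longleftrightarrow\quad F = \frac{\hbar}{2\pi e}\, A_F,$$

so the oscillation frequencies map out the geometry the electrons explore — including, in this case, fingerprints of the synthetic fourth dimension.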
Going forward, the team will explore how a wide variety of material properties might benefit from extra synthetic dimensions, which now could be within reach of realization.
“It’s fascinating to consider what may be possible next,” Checkelsky says. “There are long-standing theoretical predictions for higher-dimensional conductors and superconductors, for example — materials of this type may offer a new platform to examine these experimentally in the laboratory.”
This research was supported, in part, by the Gordon and Betty Moore Foundation, the U.S. Department of Energy Office of Science, the U.S. Office of Naval Research, the U.S. Army Research Office, U.S. Air Force Office of Scientific Research, MIT Pappalardo Fellowships in Physics, the Swiss National Science Foundation, and the U.S. National Science Foundation.
This work was also carried out, in part, through the use of MIT.nano’s facilities.
Two physicists and a curious host walk into a studio…
On GBH’s new show The Curiosity Desk, MIT LIGO researchers revel in the beauties of fundamental discovery science and MIT astronomers talk planetary defense.
This March on The Curiosity Desk, GBH’s daily science show with host Edgar B. Herwick III, MIT scientists dropped by to address the questions: “How close are we to observing the dark universe?” (Thursday, March 12 episode) and “Is Earth prepared for asteroids?” (Thursday, March 26 episode).
Up first, Prof. Nergis Mavalvala, dean of the MIT School of Science, and Prof. Salvatore Vitale joined the host live in studio to talk about the science behind the Laser Interferometer Gravitational-wave Observatory (LIGO) and how LIGO has provided the ability to observe the universe in ways that have never been done before.
Beyond learning something new, Mavalvala explained, experimental work delivers an added piece of excitement: “pushing the technology, the precision of the instrument, requires you to be very inventive. There’s almost nothing in these experiments that you can go buy off a shelf. Everything you’re designing, everything is from scratch. You’re meeting very stringent requirements.”
Herwick likened how they might tweak or tinker with the experiment to souping up a car engine, and the LIGO scientists nodded — adding that in the most complex experiments, each bite-sized part on its own works well, and it’s the interfaces between them that scientists must get right.
While there, the two longtime colleagues also took a detour to explain how, in physics, experimentalists benefit from the work of theorists and vice versa. Mavalvala, whose work focuses on building the world’s most precise instruments to study physical phenomena, described the synergy between ideas that come from theory (work that Vitale does) and how you measure them. (No, they assure Herwick, they don’t get into a lot of fights.)
In fact, it’s fantastic to have people from both worlds at MIT, said Vitale. Mavalvala agreed. “One of the things that’s really important about theory in science is that ultimately, in physics especially, it’s a bunch of math. And the important thing that you have to ask is, ‘does nature really behave that way?’ And how do you answer that question? You have to go out and measure. You have to go observe nature,” said Mavalvala.
As scientists fine-tune the gravitational wave detectors, those refinements will inform what data are collected and what astrophysical objects they might find or hope to find — and the search for certain fainter, farther away, or more exotic objects can, in turn, inform which enhancements they prioritize.
But what if I’m not interested in any of that, Herwick asked. Why should I care?
“To me, it falls in the category of for the betterment of humankind. You never know what is going to be useful. A lot of fundamental research was very far at the beginning from what turned out to be fundamental applications,” said Vitale, adding, “What they do on the instrument side has already now very important applications.”
Mavalvala was unequivocal, underscoring how pursuing curiosity is put to good use:
“When you’re making instruments that achieve that kind of precision, you’re inventing new technologies. [With LIGO] We’ve invented vibration isolation technologies to keep our mirrors really still. We’ve invented lasers that are quieter than any that were ever made before. We’ve invented photonic techniques that are allowing us to make applications even to far off things like quantum computing.
“So, this is one of the beauties of fundamental discovery science. A, you’ll discover something. But B you’ll be doing two things: you’ll be inventing the technologies of the future, and you’ll be training the generations of scientists who may go off to do completely different things, but this is what inspires them.”
Planetary defense
Turning to objects beyond Earth — specifically, asteroids — Associate Professor Julien de Wit, along with research scientists Artem Burdanov and Saverio Cambioni, joined Herwick at the Curiosity Desk later in the month. They talked about their ongoing research to identify smaller asteroids (about the size of a school bus) using the James Webb Space Telescope and why planetary defense goes beyond thinking about the massive asteroids featured in movies like Armageddon. Notably, a lot of technology on Earth depends on satellites, and asteroids pose the biggest threat to satellites.
“Dinosaurs didn’t need to care about an asteroid hitting the moon. Humanity a century ago didn’t care. Now, if [an asteroid] hits the moon, a lot of debris will be expelled and all those particles — big and small — they will affect the fleet of satellites around Earth. That’s a big potential problem, so we need to take that into account in our future,” said Burdanov.
There’s also a potential upside to being better able to detect and potentially “capture” asteroids, explained de Wit, all of it aided by new instruments. “It’s really an asteroid revolution going on… Our situational awareness of what’s out there is really about to change dramatically.”
He explains that one dream is to mine asteroids themselves for material to build or power next generation technologies or stations in space. “The way to reliably move into space is to use resources from space. We can’t just move stuff to build a full city. We use stuff from space.”
Echoing the sentiments expressed earlier in the month by MIT’s dean of science, the trio of asteroid explorers also described how the pursuits of planetary scientists can lead to unexpected rewards along the way. “We are swimming in an era that is data rich, and so what we do in our group and at MIT is mine that data to reveal the universe like never before,” says de Wit. “Revealing new populations of asteroids, new populations of planets, and making sense of our universe like we have never done.”
Tune in to the Curiosity Desk some Thursdays to hear from MIT researchers as they visit Herwick and the production team.
Building the blocks of life
Computational biologist Sergei Kotelnikov is working to develop new methods in protein modeling as part of the School of Science Dean’s Postdoctoral Fellowship.
Billions of years ago, simple organic molecules drifted across Earth's primordial landscape — nothing more than basic chemical compounds. But as natural forces shaped the planet over hundreds of millions of years, these molecules began to interact and bond in increasingly complex ways. Along the way, something spectacular emerged: life.
“Life is, to some degree, magical,” says computational biologist Sergei Kotelnikov. Simple organic compounds congregate into polymers, which assemble into living cells and ultimately organisms — the whole being greater than the sum of its parts.
“You can write formulas on how a molecule behaves,” he says, referring to the world of quantum mechanics. “But yet somehow, a few orders of magnitude above, on a bigger scale, it gives rise to such a mystery.”
Kotelnikov builds models to analyze and predict the structure of these biomolecules, particularly proteins, the fundamental building blocks of every organism. This year, he joined MIT as part of the School of Science Dean’s Postdoctoral Fellowship to work with the Keating Lab, where researchers focus on protein structure, function, and interaction. Using machine learning, his goal is to develop new methods in protein modeling with potential applications that span from medicine to agriculture.
A hunger for problems to solve
Kotelnikov grew up in Abakan, Russia, a small city sitting right in the center of Eurasia. As a child, one of his favorite pastimes was playing with Lego bricks.
“It encouraged me to build new things, rather than just following instructions,” he says. “You can do anything.”
Kotelnikov’s father, whose background lies in engineering and economics, would often challenge him with math problems.
“Your brain — you can feel some kind of expansion of understanding how things work, and that’s a very satisfactory feeling,” Kotelnikov says.
This itch to solve problems led him to join science Olympiad competitions, and later, a science-focused public boarding school located near the Russian Academy of Sciences, where he often encountered scientists.
“It was like a candy shop,” he recalls, describing the period as a life-changing experience.
In 2012, Kotelnikov began his bachelor of science in physics and applied mathematics at the Moscow Institute of Physics and Technology — considered one of the leading STEM universities in Russia, and globally — and continued there for his master’s degree. It was there that biology came into the picture.
During a course on statistical physics, Kotelnikov was first introduced to the idea of the “emergence of complexity.” He became fascinated by this “mysterious and attractive manifestation of biology … this evolution that sharpens the physical phenomenon” to create, drive, and shape life as we know it today. By the time he completed his master’s degree, he realized he had only scratched the surface of the field of computational biology.
In 2018, he began his PhD at Stony Brook University in New York, where he started working with Dima Kozakov, who is recognized as one of the world’s leaders in predicting protein interactions and complex structures.
Studying the architecture of life
Proteins act like the bricks that construct an organism, underpinning almost every cellular process from tissue repair to hormone production. Like pieces of a Lego tower, their structures and interactions determine the functions that they carry out in a body.
However, diseases arise when proteins are folded, curled, twisted, or connected in unusual ways. To develop medical interventions, scientists break down the tower and examine each individual piece to find the culprit and correct its shape and pairing. With limited experimental data on protein structures and interactions currently available, simulations developed by computational biologists like Kotelnikov provide crucial insights that inform fundamental understanding and applications like drug discovery.
With the guidance of Kozakov at Stony Brook’s Laufer Center for Physical and Quantitative Biology, Kotelnikov carried over his understanding of physics to create modeling methods that are more effective, efficient, reliable, and generalizable. Among them, he developed a new way of predicting the protein complex structures mediated by proteolysis-targeting chimeras, or PROTACs, a new class of molecules that can trigger the breakdown of specific proteins previously considered undruggable, such as those found in cancer.
PROTACs have been challenging to model, in part because they are composed of proteins that don’t naturally interact with each other, and because the linker that connects them is flexible. Imagine trying to guess the overall shape of a bendy Lego piece attached to two other pieces with different, irregular, unmatched shapes. To efficiently find all possible configurations, Kotelnikov’s method conceptually cuts the linker into two halves and models each separately, then reformulates the problem and calculates it using a powerful algorithm called the fast Fourier transform.
“It’s kind of like applied math judo that you sometimes need to do in order to make certain intractable computations tractable,” he says.
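To make the trick concrete, the sketch below shows the core of FFT-accelerated docking: scoring every translational placement of one molecule’s density grid against another’s in a single pass. This is a toy illustration of the general technique, not Kotelnikov’s published code; the grids, sizes, and scoring are invented for demonstration.

```python
# A minimal sketch of FFT-accelerated docking: score all translations of a
# "ligand" occupancy grid against a "receptor" grid at once. Naively, scoring
# N^3 shifts of an N^3 grid costs O(N^6) work; the convolution theorem brings
# this down to O(N^3 log N).
import numpy as np

def fft_translation_scores(receptor: np.ndarray, ligand: np.ndarray) -> np.ndarray:
    """Cross-correlate two 3D occupancy grids over all relative translations."""
    R = np.fft.fftn(receptor)
    L = np.fft.fftn(ligand)
    # Cross-correlation is the inverse FFT of R times conj(L),
    # evaluated at every shift simultaneously.
    return np.real(np.fft.ifftn(R * np.conj(L)))

# Toy grids standing in for, e.g., a protein and one half of a PROTAC linker.
rng = np.random.default_rng(0)
receptor = (rng.random((32, 32, 32)) > 0.90).astype(float)
ligand = (rng.random((32, 32, 32)) > 0.98).astype(float)

scores = fft_translation_scores(receptor, ligand)
best_shift = np.unravel_index(np.argmax(scores), scores.shape)
print("Best translational offset:", best_shift)
```

In practice, docking codes built on this idea repeat the translational scan over a sampled set of rotations and use physically motivated scoring grids rather than random occupancies.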
Kotelnikov’s state-of-the-art methods have been instrumental to his team’s top performance in numerous international challenges, including the Critical Assessment of protein Structure Prediction (CASP) competition — the same contest in which the Nobel Prize-winning AlphaFold system for protein 3D structure prediction was presented.
Physics and machine learning
At MIT, Kotelnikov is working with Amy Keating, the Jay A. Stein (1968) Professor of Biology, biology department head, and professor of biological engineering, to study protein structure, function, and interactions.
A recognized leader in the field, Keating employs both computational and experimental methods to study proteins and their interactions, as well as how those interactions can impact disease. By infusing physics with machine learning, Kotelnikov’s goal is to advance modeling methods that can vastly inform applications such as cancer immunology and crop protection.
“Kotelnikov stands to gain a lot from working closely with wet lab researchers who are doing the experiments that will complement and test his predictions, and my lab will benefit from his experience developing and applying advanced computational analyses,” says Keating.
Kotelnikov is also planning to work with professors Tommi Jaakkola and Tess Smidt in MIT’s Department of Electrical Engineering and Computer Science to explore a field called geometric deep learning. In particular, he aims to integrate physical and geometric knowledge about biomolecules into neural network architectures and learning procedures. This approach can significantly reduce the amount of data needed for learning, and improve the generalizability of resulting models.
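To see what integrating geometric knowledge buys, consider a minimal sketch of one such prior: featurizing atoms by their pairwise distances, which rotations and translations cannot change, so a model built on them never spends training data learning those symmetries. The five-atom molecule and feature choice below are invented for illustration and are not from Kotelnikov’s work.

```python
# A minimal sketch of a geometric prior: pairwise distances are invariant
# under rotations and translations, so features built from them bake the
# symmetry into the model instead of making it learn the symmetry from data.
import numpy as np

def invariant_features(coords: np.ndarray) -> np.ndarray:
    """Sorted pairwise distances of an N x 3 coordinate array."""
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    upper = np.triu_indices(len(coords), k=1)
    return np.sort(dists[upper])

rng = np.random.default_rng(0)
molecule = rng.normal(size=(5, 3))  # a hypothetical 5-atom structure

# A random orthogonal matrix acts as a rigid rotation (or reflection);
# the features come out identical.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
assert np.allclose(invariant_features(molecule), invariant_features(molecule @ q.T))
print("rotation-invariant features:", np.round(invariant_features(molecule), 3))
```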
Beyond the two departments, Kotelnikov is also excited to see how the diversity and interdisciplinary mix of MIT’s community will help him come up with ideas.
“When you’re building a model, you’re entering this imaginary world of assumptions and simplifications and it might feel challenging because of this disconnect with reality,” Kotelnikov says. “Being able to efficiently communicate with experimentalists is of high value.”
What if a technology could reanimate parts of the body that have lost their connection to the brain — like a bladder that can no longer empty due to a spinal cord injury, or intestines that can’t push food forward due to Crohn’s disease? What if this technology could also send sensations such as hunger or touch back to the brain?
New MIT research offers a glimpse into this future. In an open-access study published today in Nature Communications, the researchers introduce a novel myoneural actuator (MNA) that reprograms living muscles into fatigue-resistant, computer-controlled motors that can be implanted inside the body to restore movement in organs.
“We’ve built an interface that leverages natural pathways used by the nervous system so that we can seamlessly control organs in the body, while also enabling the transmission of sensory feedback to the brain,” says Hugh Herr, senior author of the study, a professor of media arts and sciences at the MIT Media Lab, co-director of the K. Lisa Yang Center for Bionics, and an associate member of the McGovern Institute for Brain Research at MIT. The study was co-led by Herr’s postdoc Guillermo Herrera-Arcos and former postdoc Hyungeun Song.
By repurposing existing muscle in the body, the researchers have developed the first “living” implant that uses rewired sensory nerves to revive paralyzed organs — which may represent a new genre of medicine, where a person’s own tissue becomes the hardware.
Rewiring the brain-body interface
Many scientists have toiled to restore function in paralyzed organs, but it’s extremely challenging to design a technology that both communicates with the nervous system and doesn't fatigue over time. Some have tried to insert miniaturized actuators — small machines that can power bionic limbs — into the body. However, Herrera-Arcos says, “it’s hard to make actuators at the centimeter level, and they aren’t very efficient.” Others have focused on creating muscle tissue in the lab, but building muscles cell by cell is time-intensive and far from ready for human use.
Herr’s team tried something different.
“We engineered existing muscles to become an actuator, or motor, that reinstates motion in organs,” says Song.
To do this, the researchers had to navigate the delicate dynamics within the nervous system. The actuator would have to interface with the nervous system to work properly, but it must also somehow evade the brain’s control. “You don’t want the brain to consciously control the muscle actuator because you want the actuator to automatically control an organ, like the heart,” explains Herrera-Arcos. Establishing a computer-controlled muscle to move organs could ensure automatic function and also bypass damaged brain pathways.
Incorporating motor neurons into the actuator may help generate movement, but these neurons are directly controlled by the brain. “Sensory neurons, however, are wired to receive, not to command,” explains Song. “We thought we could leverage this dynamic and reroute motor signals through sensory fibers, making a computer — rather than the brain — the muscle’s new command center.”
To achieve this, sensory nerves would need to fuse fluidly with muscle, and scientists had not yet determined if this was possible. Remarkably, when the team replaced motor nerves in rodent muscle with sensory ones, “the sensory nerves re-innervated the muscles and formed functional synapses. It’s a tremendous discovery,” says Herrera-Arcos.
Sensory neurons not only enabled the use of a digital controller, but also helped curb muscle fatigue — increasing fatigue resistance in rodent muscle by 260 percent compared to native muscles. That’s because muscle fatigue depends largely on the diameter of the axons, or cable-like projections that innervate muscles. Motor neuron axons vary greatly in size, and when a motor nerve is electrically stimulated, the largest axons fire first — exhausting the muscle quickly. However, sensory axons are all nearly the same size, so the signal is broadcast more evenly across muscle fibers, avoiding fatigue, explains Herrera-Arcos.
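A toy simulation makes the recruitment logic easier to see. Everything below is an illustrative assumption rather than the study’s model: the diameter ranges are made up, and the firing threshold is simply taken to be inversely proportional to axon diameter, so larger axons fire at lower stimulation amplitudes.

```python
# Toy model: electrical thresholds fall as axon diameter grows, so a motor
# nerve's largest (most fatigable) units are recruited first, while nearly
# uniform sensory axons are recruited together, spreading the load evenly.
import numpy as np

rng = np.random.default_rng(1)
motor_diams = rng.uniform(2.0, 20.0, size=1000)    # wide spread of diameters
sensory_diams = rng.normal(10.0, 0.5, size=1000)   # nearly uniform diameters

def recruited(diams: np.ndarray, amplitude: float) -> np.ndarray:
    """Mask of axons whose (inverse-diameter) threshold is reached."""
    return (1.0 / diams) <= amplitude

for amp in (0.06, 0.10, 0.20):
    m, s = recruited(motor_diams, amp), recruited(sensory_diams, amp)
    mean_d = motor_diams[m].mean() if m.any() else 0.0
    print(f"amplitude {amp:.2f}: motor {m.mean():6.1%} recruited "
          f"(mean diameter {mean_d:4.1f}), sensory {s.mean():6.1%} recruited")
```

At low amplitudes the motor pool repeatedly drives only its biggest fibers, while the sensory pool switches on almost as one; that evenness is what the researchers credit for the fatigue resistance.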
Designing a biohybrid system
The team combined all of these elements into the MNA, a fatigue-resistant biohybrid motor. By wrapping their actuator around a paralyzed intestine in a rodent, the researchers reinstated the organ’s squeezing motion. They also successfully controlled rodent calf muscles in an experiment designed to mimic residual muscle in human lower-limb amputations. Importantly, the MNA system transmitted sensory signals to the brain. “This suggests that our technology could seamlessly link organs to the brain. For example, we might be able to make a paralyzed stomach relay hunger,” explains Song.
Bringing the MNA to the clinic will require further testing in larger animal models and, eventually, humans. But if it passes the regulatory gauntlet, the system could pave a smoother and safer path toward reviving static organs. Implanting MNAs would require a surgery that is already commonplace in the clinic, the researchers say, and their system might be simpler and safer to implement than mechanical devices or organ transplants that introduce foreign material into the body.
The team is hopeful that their new technology could improve the lives of millions living with organ dysfunctions. “Today’s solutions are mostly synthetic: pacemakers and other mechanical assist devices. A living muscle actuator implanted alongside a weakened organ would be part of the body itself. That is a category of medicine different from anything seen in clinic,” explains Herrera-Arcos.
Song says that skin is of special interest. “Hypothetically, we could wrap MNAs around skin grafts to relay tactile feedback, such as strain or tension, which is currently missing for users of prostheses.” Their technology could even augment virtual reality systems. “The idea is that, if we couple the MNA system to skin and muscles, a person could feel what their virtual avatar is touching even though their real body isn’t moving,” says Song.
“Our research is on the brink of giving new life to various parts and extensions of the body,” adds Herrera-Arcos. “It’s exciting to think that our system could enhance human potential in ways that once only belonged to the realm of science fiction.”
This research was funded, in part, by the Yang Tan Collective at MIT, K. Lisa Yang Center for Bionics at MIT, Nakos Family Bionics Research Fund at MIT, and the Carl and Ruth Shapiro Foundation.
Climate change may produce “fast-food” phytoplankton
With warmer ocean temperatures, the composition of marine plankton could shift from protein-rich to carb-heavy, a new study suggests.
We are what we eat. And in the ocean, most life-forms source their food from phytoplankton. These microscopic, plant-like algae are the primary food source for krill, sea snails, some small fish, and jellyfish, which in turn feed larger marine animals that are prey for the ocean’s top predators, including humans.
Now MIT scientists are finding that phytoplankton's composition, and the basic diet of the ocean, will shift significantly with climate change.
In an open-access study appearing today in the journal Nature Climate Change, the team reports that as sea surface temperatures rise over the next century, phytoplankton in polar regions will adapt to be less rich in proteins, heavier in carbohydrates, and lower in nutrients overall.
The conclusions are based on results from the team’s new model, which simulates the composition of phytoplankton in response to changes in ocean temperature, circulation, and sea ice coverage. In a scenario in which humans continue to emit greenhouse gases through the year 2100, the team found that changing ocean conditions, particularly in the polar regions, will shift phytoplankton’s balance of proteins to carbohydrates and lipids by approximately 20 percent. The researchers analyzed observations from the past several decades and have already found a signature of this change in the real world.
“We’re moving in the poles toward a sort of fast-food ocean,” says lead author and MIT postdoc Shlomit Sharoni. “Based on this prediction, the nutritional composition of the surface ocean will look very different by the end of the century.”
The study’s MIT co-authors are Mick Follows, Stephanie Dutkiewicz, and Oliver Jahn; along with Keisuke Inomura of the University of Rhode Island; Zoe Finkel, Andrew Irwin, and Mohammad Amirian of Dalhousie University in Halifax, Canada; and Erwan Monier of the University of California at Davis.
Nutritional information
Phytoplankton drift through the upper, sun-lit layers of the ocean. Like plants on land, the marine microalgae are photosynthetic. Their growth depends on light from the sun, carbon dioxide from the atmosphere, and nutrients such as nitrogen and iron that well up from the deep ocean.
When studying how phytoplankton will respond to climate change, scientists have primarily focused on how rising ocean temperatures will affect phytoplankton populations. Whether and how the plankton’s composition will change is less well-understood.
“There’s been an awareness that the nutritional value of phytoplankton can shift with climate change,” says Sharoni. “But there has been very little work directly addressing that question.”
She and her colleagues set out to understand how ocean conditions influence phytoplankton macromolecular composition. Macromolecules are large molecules that are essential for life. The main types of macromolecules include proteins, lipids, carbohydrates, and nucleic acids (the building blocks of DNA and RNA). Every form of life, including phytoplankton, is composed of a balance of macromolecules that helps it to survive in its particular environment.
“Nearly all the material in a living organism is in these broad molecular forms, each having a particular physiological function, depending on the circumstances that the organism finds itself in,” says Follows, a professor in the Department of Earth, Atmospheric and Planetary Sciences.
An unbalanced diet
In their new study, the researchers first looked at how today’s ocean conditions influence phytoplankton’s macromolecular composition. The team used data from lab experiments carried out by their collaborators at Dalhousie. These experiments revealed ways in which phytoplankton’s balance of macromolecules, such as proteins to carbohydrates, shifted in response to changes in water temperature and the availability of light and nutrients.
With these lab-based data, the group developed a quantitative model that simulates how plankton in the lab would readjust its balance of proteins to carbohydrates under different light and nutrient conditions. Sharoni and Inomura then paired this new model with an established model of ocean circulation and dynamics developed previously at MIT. With this modeling combination, they simulated how phytoplankton composition shifts in response to ocean conditions in different parts of the world and under different climate scenarios.
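As a flavor of what such a composition model encodes, here is a deliberately oversimplified allocation rule: protein fraction rises when light is scarce (more light-harvesting machinery is needed) and falls when nutrients are scarce. The functional form and every coefficient below are invented for illustration and are not taken from the study.

```python
# Toy allocation rule (not the study's model): protein fraction responds to
# normalized light and nutrient availability, with the remainder of the cell
# assigned to carbohydrates and lipids.
def composition(light: float, nutrients: float) -> dict:
    """Illustrative protein vs. carb/lipid fractions for one cell."""
    protein = 0.55 + 0.10 * (1.0 - light) - 0.15 * (1.0 - nutrients)
    protein = min(max(protein, 0.2), 0.8)  # keep fractions plausible
    return {"protein": round(protein, 2), "carbs_lipids": round(1.0 - protein, 2)}

# Ice-covered polar surface today vs. a warmer, ice-free, nutrient-poor ocean.
print("polar, today:  ", composition(light=0.3, nutrients=0.8))
print("polar, warmed: ", composition(light=0.8, nutrients=0.4))
```

Even this cartoon reproduces the qualitative behavior described below: more light and fewer nutrients push the simulated cell away from protein and toward carbohydrates and lipids.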
The team first modeled today’s climate conditions. Consistent with observations, their model predicts that a little more than half of the average phytoplankton cell today is composed of proteins. The rest is a mix of carbohydrates and lipids.
Interestingly, in polar regions, phytoplankton are slightly more protein-rich. At the poles, the cover of sea ice limits the amount of sunlight phytoplankton can absorb. The researchers surmise that phytoplankton may have adapted by making more light-harvesting proteins to help the organisms efficiently absorb the weak sunlight.
However, when they modeled a future climate change scenario, the team found a significant shift in phytoplankton composition. They simulated a scenario in which humans continue to emit greenhouse gases through the year 2100. In this scenario, sea surface temperatures will rise by 3 degrees Celsius, substantially reducing sea ice coverage. Warmer temperatures will also limit the ocean’s circulation, as well as the amount of nutrients that can circulate up from the deep ocean.
Under these conditions, the model predicts that phytoplankton populations in polar regions will grow significantly, consistent with earlier studies. Uniquely, this model also predicts that phytoplankton in polar regions will shift from a protein-rich to a carb- and lipid-heavy composition. The plankton will not need as much light-harvesting protein, since less sea ice will make sunlight more easily available for the organisms to absorb. Total protein levels in these polar phytoplankton will decline by up to 30 percent, with a corresponding increase in the contribution of carbs and lipids.
It’s unclear what impact a larger population of carb- and lipid-heavy phytoplankton may have on the rest of the marine food web. While some organisms may be stressed by a reduction in protein, others that make lipid stores to survive through the winter might thrive.
The team also simulated phytoplankton in subtropical regions, at higher latitudes than the tropics. In these ocean areas, phytoplankton populations are expected to decline by 50 percent. And the team’s modeling shows that their composition will also shift.
With warmer temperatures, the ocean’s circulation will slow down, limiting the amount of nutrients that can upwell from the deep ocean. In response, subtropical phytoplankton may have to find ways to live at greater depths, to strike a balance between getting enough sunlight and nutrients. Under these conditions, the organisms will likely shift to a slightly more protein-rich composition, making use of the same photosynthetic proteins that their polar counterparts will require less of.
On balance, given the projected changes in phytoplankton populations with climate change, their average composition around the world will become more carb-heavy and nutrient-poor.
The researchers went a step further and found that their modeling agrees with the small set of available phytoplankton field samples that other scientists previously collected from Arctic and Antarctic regions. These samples show that the composition of phytoplankton has become more carb- and lipid-heavy over the past few decades, as the team’s model predicts under climate warming.
“In these regions, you can already see climate change, because sea ice is already melting,” Sharoni explains. “And our model shows that proteins in polar plankton have been declining, while carbs and lipids are increasing.”
“It turns out that climate change is accelerated in the Arctic, and we have data showing that the composition of phytoplankton has already responded,” Follows adds. “The main message is: The caloric content at the base of the marine food web is already changing. And it’s not a clear story as to how this change will transmit through the food web.”
This work was supported, in part, by the Simons Foundation.
Leading with rigor, kindness, and care
“We cannot be effective scientists if we are unhappy or unhealthy outside of the lab,” says “Committed to Caring” honoree Sara Prescott.
Professor Sara Prescott embodies the kind of mentorship every graduate student hopes to find: grounded in scientific rigor, guided by kindness, and defined by a deep commitment to well-being. Her approach reflects a simple but powerful belief that transformative mentorship is not only about advancing research, but about cultivating confidence, belonging, and resilience in the next generation of scholars.
A member of the 2025–27 Committed to Caring cohort, Prescott exemplifies the program’s spirit, which honors faculty who go above and beyond in nurturing both the intellectual and personal development of MIT’s graduate students.
Prescott is the Pfizer Inc. - Gerald D. Laubach Career Development Professor in the MIT departments of Biology and Brain and Cognitive Sciences, and an investigator at the Picower Institute for Learning and Memory. Her research addresses fundamental questions in body-brain communication, with a focus on lung biology, early-life adversity, women’s health, and the impacts of climate change on respiratory health.
A culture of compassion
Prescott’s mentoring philosophy begins with a focus on professional sustainability. “We cannot be effective scientists if we are unhappy or unhealthy outside of the lab,” she says.
She pushes back against what she sees as an unhelpful narrative in academia. “There’s this idea that you must choose between a successful PhD or having a personal life. This is a false dichotomy, and a problematic attitude.” Instead, she reminds her mentees that “graduate school is a marathon, not a sprint,” encouraging them to place importance not only on their research, but also on their mental and physical well-being.
This set of values shines through within her lab climate as a whole. Students describe support for flexible scheduling and mental health leave, a willingness to reimburse meals during late-night lab sessions, and encouragement during stretches of experimental failure. Beyond these tangible supports, nominators also shared stories of Prescott attending to the smaller details: prioritizing connection among her students, celebrating their milestones, organizing lab retreats, and fostering a culture where people feel valued beyond their productivity.
Students recognize Prescott as a safe haven within the often complex and challenging world of research. Joining Prescott’s lab was a turning point for one student who was recovering from a damaging prior mentorship experience. They arrived uncertain, struggling to trust faculty and questioning whether they belonged in science at all. Prescott met them with empathy and professionalism, offering patience and trust not just in their work, but in them as a person. They describe steady support that, over time, helped them “fall back in love with science” and envision a future they had nearly abandoned.
Prescott draws inspiration from the mentorship she received early in her career. As a trainee, she had mentors who helped her believe that she could succeed. Now in a mentoring role herself, she does her best to pass this sense of confidence on to her advisees.
She is intentional about creating space where students can grow without fear. From their very first meetings, one nominator wrote, Prescott emphasized that “graduate school is a place for learning and curiosity.” They never felt judged for not knowing something; instead, they were encouraged to ask questions, share ideas, and take intellectual risks. That environment, the student explained, allowed them to grow into their scientific identity with confidence.
Prescott reinforces this message often. Success, she tells students, grows from effort, learning, and persistence, rather than from fixed traits. When working with students, she does her best to reframe failure as part of the process, emphasizing its importance within the scientific journey. Through these avenues, she cultivates a lab culture where students are challenged to think boldly while feeling genuinely supported, and where they are seen not only as researchers, but as whole people.
Advocacy beyond the bench
Prescott’s commitment to caring extends well beyond day-to-day lab work. Her nominators relate that she actively supports her students’ professional development, encouraging them to pursue writing projects, certificates, internships, leadership roles, and community engagement.
Nominators also highlight Prescott’s focus on supporting underserved communities within the field as a whole. Students highlight her involvement with Graduate Women in Biology (GwiBio), where she volunteered as a speaker for the “Glass Shards” series. Her talk “Failure as the Path to Success,” in which she candidly shared pivots and setbacks in her own career, was described as one of the organization’s most impactful sessions.
Her dedication to inclusion is equally evident in her mentorship of scholars whose role in her lab is more temporary. She welcomes international visiting scholars, temporary lab techs, and undergraduate interns in the MIT Summer Research Program. When one intern encountered barriers at their home institution, Prescott ensured they had a continued research home in her lab at MIT. That support allowed them to complete their undergraduate thesis and graduate on time from their university.
Prescott says that she views mentorship as an evolving practice, regularly soliciting feedback from her students. Effective leadership, in her view, grows from mutual trust and open communication.
For many nominators, Prescott’s impact extends beyond their careers. “She has taught me what positive and supportive mentoring relationships look like,” one student reflected. “When I think about the type of mentor I want to be, I hope I can emulate the ways in which she supports and guides her students to develop their scientific independence and confidence.”
In lifting up the people behind the science as thoughtfully as the science itself, Sara Prescott demonstrates that the most enduring legacy of a mentor is not only the discoveries from their lab, but the composure and courage their advisees carry forward.
“Near-misses” in particle accelerators can illuminate new physics, study finds
Physicists discovered new properties of the strong force by analyzing what happens when light-speed particles skim by each other.
Particle accelerators reveal the heart of nuclear matter by smashing together atoms at close to the speed of light. The high-energy collisions produce a shower of subatomic fragments that scientists can then study to reconstruct the core building blocks of matter.
An MIT-led team has now used the world’s most powerful particle accelerator to discover new properties of matter, through particles’ “near-misses.” The approach has turned the particle accelerator into a new kind of microscope — and led to the discovery of new behavior in the forces that hold matter together.
In a study appearing this week in the journal Physical Review Letters, the team reports results from the Large Hadron Collider (LHC) — a massive underground, ring-shaped accelerator in Geneva, Switzerland. Rather than focus on the accelerator’s particle collisions, the MIT team searched for instances when particles barely glanced by each other.
When particles travel at close to the speed of light, the electromagnetic fields surrounding them flatten into a pancake-like halo. When two such particles pass close by without colliding, these compressed fields act as a source of extremely high-energy photons. Occasionally, a photon from one particle can ping off another particle, like an intense, quantum-sized pinprick of light.
The MIT team was able to pick out such near-miss pinpricks, or what scientists call “photonuclear interactions,” from the LHC’s particle-collision data. They found that when some photons pinged off a particle, they kicked out a type of subatomic particle, known as a D0 meson, that the scientists could measure for the first time.
D0 mesons are subatomic particles that contain a charm quark, a rare type of quark not normally found in ordinary nuclear matter. Quarks are the fundamental building blocks of all matter; they are bound together by gluons, the massless carriers of the invisible glue, or “strong force,” that holds matter together. The rare charm quarks can only be created in high-energy interactions. As such, they provide an especially clean, unambiguous probe of quarks and gluons inside a nucleus.
Through their measurements of D0 mesons, the researchers could estimate how tightly gluons are packed and, essentially, how strong the strong force is within a particle’s nucleus.
“Our result gives an indication that when nuclear matter is squeezed together, then gluons start behaving in a funny way,” says lead author Gian Michele Innocenti, an assistant professor of physics at MIT. “We need to know how these gluons behave in these extreme conditions because gluons keep the universe together. And at this point, photonuclear interactions are the best way we have to study gluon behavior.”
The study’s co-authors include members of the CMS Collaboration — a global consortium of physicists who operate and maintain the Compact Muon Solenoid (CMS) experiment, one of the largest detectors at the LHC and the one used to collect the study’s data.
Bringing a “background” into focus
With each run, the Large Hadron Collider fires off needle-thin beams of particles in opposite directions around a 27-kilometer-long underground ring. When the beams cross paths, particles can collide. If the collisions happen to take place in a region of the ring where the CMS detector is set up, the detector can record the collisions, and scientists can then analyze the aftermath to reconstruct the fragments that make up the original particles.
Since the LHC began operations in 2008, the focus has been overwhelmingly on the detection and analysis of “head-on” collisions. Physicists have known that by accelerating particle beams, they would also produce photonuclear interactions — near-miss events in which a particle collides not with another particle, but with another particle’s cloud of photons. But such light-nucleus interactions were thought to be simply noise.
“These photonuclear events were considered a background that people wanted to cancel,” Innocenti says. “But now people want to use it as a signal because a collision between a photon and a nucleus can essentially be like a super-high-accuracy microscope for nuclear matter.”
When a photon pings off a particle, the abundance, direction, and energy of the produced D0 mesons relate directly to the energy and density of the gluons in the nucleus. If scientists could detect and measure these photon interactions, it would be like using an extremely small and powerful flashlight to illuminate nuclear structures. But until now, it was assumed that photonuclear interactions would be impossible to pick out amid the various physics processes that can occur in such collisions.
“People didn’t think it was possible to remove the huge mess of all these other collisions, to zoom in on single photons hitting single nuclei producing a D0 meson,” Innocenti says. “We had to devise a system to recognize those very rare photonuclear interactions while data was being taken of particle collisions.”
Illuminating charm
For their new study, Innocenti and his colleagues first simulated what a photonuclear interaction would look like amid a shower of other particle collisions. In particular, they simulated a scenario in which a photon pings off a nucleus and produces a D0 meson. Although these events are rare, D0 mesons are among the most abundant particles that contain a charm quark. The team reasoned that if they could detect signs of a charm quark in D0 mesons that are produced in a photonuclear interaction, it could give valuable information about the gluons that hold the nucleus together.
With their simulations in hand, the researchers then developed an algorithm to detect photonuclear interactions. They implemented the algorithm at the CMS detector to search for signals in real time during the LHC’s particle-colliding runs.
“We had to collect tens of billions of collisions in order to extract a few hundred of these rare instances where a photon hits a nucleus and produces one of these exotic D0 meson particles,” Innocenti explains.
From this enormous dataset, the team identified a clean sample of these rare events by exploiting CMS’s advanced detector capabilities to select near-miss events and reconstruct the properties of the D0 mesons.
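While the event selection relied on CMS’s specialized trigger and tracking capabilities, the final reconstruction step rests on a standard invariant-mass calculation: pair an oppositely charged kaon and pion (the commonly used D0 decay channel) and keep pairs whose mass lands near the known D0 value. The sketch below shows that calculation with invented momenta; it illustrates the standard technique, not the CMS analysis code, which also fits the resulting mass spectrum to separate signal from background.

```python
# A minimal sketch of D0 reconstruction: compute the invariant mass of a
# kaon/pion track pair and compare it to the known D0 mass of 1.865 GeV.
import math

M_K, M_PI, M_D0 = 0.4937, 0.1396, 1.8648  # particle masses in GeV

def invariant_mass(p_k, p_pi):
    """Invariant mass of a kaon/pion pair from their 3-momenta (GeV)."""
    e_k = math.sqrt(M_K**2 + sum(p * p for p in p_k))
    e_pi = math.sqrt(M_PI**2 + sum(p * p for p in p_pi))
    e_tot = e_k + e_pi
    p_tot = [a + b for a, b in zip(p_k, p_pi)]
    return math.sqrt(max(e_tot**2 - sum(p * p for p in p_tot), 0.0))

# A back-to-back pair, as from a D0 decaying at rest (momenta in GeV).
p_kaon, p_pion = (0.861, 0.0, 0.0), (-0.861, 0.0, 0.0)
mass = invariant_mass(p_kaon, p_pion)
print(f"candidate mass: {mass:.3f} GeV")  # ~1.865, consistent with a D0
```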
Through this process, the team detected instances of D0 meson production and then worked back to calculate properties of the particles’ charm quarks and the gluons that would have held them together in the original nucleus.
“We are constraining what happens to gluons when they are squeezed in very large ions that are traveling very fast,” Innocenti says. “So far, our data confirms what people expect in terms of high-density nuclear matter. In reality, this is the first time we’ve shown this kind of measurement is feasible.”
The team is working to improve the measurement’s accuracy in order to provide a clearer picture of how quarks and gluons are arranged inside a nucleus.
“Gluons are a very strong force that keeps the universe together,” Innocenti says. “The description of the strong force is at the basis of everything we see in nature. Now we have a way to either fully confirm, or show deviations from, that description.”
This work was supported, in part, by the U.S. Department of Energy, including support from a DOE Early Career Research Program award, and it builds on the contributions of a large MIT team of graduate students, undergraduate researchers, scientists, and postdocs.
QS World University Rankings rates MIT No. 1 in 12 subjects for 2026
The Institute also ranks second in seven subject areas.
QS World University Rankings has placed MIT in the No. 1 spot in 12 subject areas for 2026, the organization announced today.
The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Chemistry; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Engineering and Technology; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; and Physics and Astronomy.
MIT also placed second in seven subject areas: Architecture/Built Environment; History of Art; Biological Sciences; Economics and Econometrics; Marketing; Natural Sciences; and Statistics and Operational Research.
For 2026, universities were evaluated in 55 specific subjects and five broader subject areas.
Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.
MIT has been ranked as the No. 1 university in the world by QS World University Rankings for 14 straight years.
Alex Tang’s dream of becoming a physician started in grade school when he read Lisa Sanders’ “Diagnosis” column in The New York Times Magazine. Although he often encountered unfamiliar medical terms, Tang was captivated by the magic of medicine, as Sanders described how physicians turned puzzling sets of symptoms into concrete diagnoses and treatment plans for patients.
A decade later, Tang is one step closer to achieving his dream. The MIT senior has challenged himself academically, dual-majoring in chemistry and biology and minoring in biomedical engineering. “All of the courses have encouraged me to think about problems through different lenses,” he says.
Tang has also challenged himself as the editor-in-chief of MIT’s student newspaper, The Tech, and as a competitive triathlete. In the fall, he will begin medical school, where he hopes to develop clinical skills and continue honing his scientific abilities. Ultimately, he aspires to pursue a career as a physician-scientist, focusing on how cancers respond to and resist treatment. He wants to help convert those insights into novel therapies that can be tailored to individual cancer patients.
“I want to advance precision oncology, ensuring that each patient receives the most effective, personalized treatment possible,” he says.
Thriving in the lab
Originally from Massachusetts, Tang was eager to make the most of his MIT experience, especially because of its extensive research opportunities. “Both my parents worked in the Cambridge biotech space, and being able to contribute to innovative science here has been a priority,” he says.
Early on, Tang gravitated toward oncology after joining the Nir Hacohen Lab at the Broad Institute, an interest cemented after taking 7.45 (Cancer Biology), which was taught by professors Tyler Jacks and Michael Hemann. Fascinated by how new cancer therapies were changing patients’ lives, he joined a project with implications for patients with difficult prognoses: For the last three-and-a-half years, Tang has been studying the effects of combined immunotherapy and targeted molecular therapy on tumors in patients with metastatic colorectal cancer.
“I hope my work can provide clarity for patients and physicians, and empower them to be confident in their options for care,” Tang says.
Last year, Tang was awarded a prestigious Goldwater Scholarship, which supports undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields.
In addition to gaining technical skills, Tang has found working in the Hacohen Lab to be enriching in other important ways.
“What’s been great about research is learning from experts in the field who become your role models,” he says. “They are at the frontiers of investigating the most challenging questions in the field, and iterating through the scientific process with them is such a joy.”
Looking forward to medical school, he hopes to complement his basic science research with work that is more clinically involved.
“I want to bridge the gap between fundamental discoveries and tangible improvements in patient care,” Tang says. He has already set out on this mission, recently leading the development of a prognostic assay in lung cancer.
Breaking news
After stopping by the booth for MIT’s student newspaper, The Tech, during Campus Preview Weekend, Tang knew he wanted to join and contribute to a publication that has long chronicled MIT’s history and culture. Starting as a news writer and later serving as editor-in-chief, he learned how to write under pressure, reported on major campus events, and balanced leadership with collaboration.
“It’s been such an honor and pleasure to document people across the diverse MIT community who are all contributing to the character of the Institute in different ways,” he says.
It’s an activity he’ll drop everything for.
“When we have things come up and we have to do a breaking news story or we have some editorial thing that needs to be managed, I’ll just stop working to sort out whatever’s happening,” he says. “I think that’s what passion really is about.”
His journey with The Tech has not always been easy. In the summer between his first and second year, he found himself solely responsible for producing the paper’s news content amid a staff shortage, while the paper was also facing financial difficulties.
“Coming into sophomore fall, I focused on recruiting more staff and seeking out ways to get more funding,” Tang says. “The paper wouldn’t be here without the people, both students and faculty advisors alike, who bought into The Tech’s mission.”
Though he hopes to pursue a career in medicine, Tang has found journalism to be integral in shaping how he will connect and communicate with patients and colleagues.
“You are responsible for taking someone’s story, breaking it down, and retelling it in your own words in a way that you feel would resonate with the audience and serve the community,” he says.
An outlet through triathlon
Despite his busy schedule, Tang prioritizes staying active and maintaining fitness. A former competitive swimmer in high school and now a triathlete, he still finds himself drawn back to the water when everything around him feels fast-paced.
“Swimming, biking, and running are good ways to de-stress,” Tang says. “It’s therapeutic in the sense that you can just let go. The race is just that culmination of letting it go at a more elevated level.”
He credits MIT’s infrastructure for helping him stay committed to training. “My dorm is steps away from the pool and the track,” he says. “The convenience is superb.”
Tang has found success in competitions, most recently placing third in his age group at the 2025 Boston Triathlon. In fact, it is the feeling of accomplishment that pushes him every day.
“There are many days when you want to take it easy, but you have to remember the joy waiting for you at the end of the race when you’ve put in the work,” he says. “It motivates me to be conscious and aware of what I’m doing in practice.”
During the summer, Tang and his younger brother go out for long runs in the Boston suburbs. “It is great to have my brother push me every day,” Tang says. “There has been no one more supportive of me than my family.”
MIT hosts its first High School Regional Science Bowl
At a daylong science competition, high school students gathered from across New England to test their science knowledge for a shot at nationals in Washington.
“Guys, have the buzzers been tested?”
On Saturday, Feb. 21, volunteers for the 2026 MIT Science Bowl High School Regional hustled around the spacious auditorium, setting up chairs and buzzers and laying out sharpened pencils. The room slowly quieted as the high schoolers filed in, dressed in matching dark green Science Bowl T-shirts.
By late afternoon, after rounds and rounds of fast-paced questioning, the auditorium pulsed with tension and anxiously bouncing knees as the final seconds of the competition ticked down.
“Patients with Tay–Sachs disease —” began the moderator, Gideon Tzafriri, president of the Science Bowl and a senior at MIT.
A buzzer cut him off.
“Interrupt,” Tzafriri announced.
The entire audience seemed to hold their breath. A student from Lexington High School Team 1 offered their answer: “lysosome.”
“Correct.”
Moments later, the Lexington, Massachusetts, team sealed the match. The room erupted into cheers, with students vaulting from their seats and rushing down to hug and congratulate their teammates. The final score of the 2026 MIT Science Bowl was 148 to 52, with Lexington High School Team 1 winning against Phillips Exeter Team 1.
“I think I can speak for all of us when I say we feel ecstatic,” said Jerry Xu, one of the members of the winning team. “It’s been a long-term collaborative effort; we’ve been practicing for many years. We’ve worked together as a team for so long, it’s just such a great feeling to be here with my friends.”
Around Xu, the rest of his teammates proudly nodded.
The 2026 MIT High School Regional Science Bowl marked the Institute’s first time hosting a regional competition, expanding its long-running involvement with the national tournament. While MIT has hosted the national high school competition for eight years, this regional event created a new qualifying pathway for New England schools vying for a place at the National Science Bowl in Washington.
The competition involves round-robin-style rounds of complex biology, chemistry, and physics questions, and some topics lie well beyond the scope of regular high school classes. Over a long day of tough science questions and rapidly beeping buzzers, the event brought together 26 teams from 14 schools across Massachusetts, New Hampshire, and Rhode Island.
“The whole team put immense effort into learning about science, enjoying themselves, having fun, and trusting the process,” said Nicholas Gould, the Lexington High School team’s coach and their physics teacher. “It’s not about the win, it’s the process of getting there, the experiences they take with them and what they learn about themselves and each other.”
For many competitors, the draw wasn’t just the chance to win a medal, but to further their knowledge.
“I came here because I wanted to be on a science team just because I like science, and my experience has been pretty amazing,” said Vritti Mehra, a student at Portsmouth High School in New Hampshire.
Others spoke of the importance of representation.
“I’m proud to be a girl in this tournament because as you can see, there are not a lot of females here. But I’m very glad that I’m part of this community because of the friendliness, the competition, and this fostered a love for science for me,” said Katherine Wang, from Lexington High School Team 3, who has been competing since sixth grade. “My mom has a PhD, so she really inspires me to become the best.”
The regional marked a beginning for MIT, and an end for many graduating seniors, both competitors and volunteers.
“Most of us have been doing Science Bowl since middle school, so this feels like a culmination of everything we’ve done,” said William Jung, another member of the winning team.
For Tzafriri, the president of the bowl, the event carried a similar resonance: he competed in the Science Bowl himself when he was in high school.
“It’s nice to finally finish off something that I started in high school,” said Tzafriri.
As the event came to an end, the winning team lined up at the front of the auditorium, with proud grins and the golden medals around their necks glistening under fluorescent lights. Cameras flashed in quick succession as the event’s organizers and volunteers watched proudly from either side.
“I get to help kids have fun with science and actively participate in science,” said Jiaxing Wang, one of the event’s organizers. “The Science Bowl is something I discovered in my junior year of high school: It was very late in the cycle, so I want to be able to help kids like me to compete and have the experience they deserve and desire.”
For Lexington’s seniors, this event sends them to Washington. For MIT, it signals something larger: a continuing investment in young scientists, encouraging a future full of possibility.
A complicated future for a methane-cleansing molecule
A new model shows how levels of the “atmosphere’s detergent” may rise and fall in response to climate change.
Methane is a powerful greenhouse gas that is second only to carbon dioxide in driving up global temperatures. But it doesn’t linger in the atmosphere for long thanks to molecules called hydroxyl radicals, which are known as the “atmosphere’s detergent” for their ability to break down methane. As the planet warms, however, it’s unclear how the air-cleaning agents will respond.
MIT scientists are now shedding some light on this. The team has developed a new model to study different processes that control how levels of hydroxyl radical will shift with warming temperatures.
They find that the picture is complicated. As temperatures increase, so too will water vapor in the atmosphere, which will in turn boost the molecule’s concentrations. But rising temperatures will also increase “biogenic volatile organic compound emissions” — gases that are naturally released by some plants and trees. These natural emissions can reduce hydroxyl radical levels and dampen water vapor’s boosting effect.
Specifically, the team finds that if the planet’s average temperatures rise by 2 degrees Celsius, the accompanying rise in water vapor will increase hydroxyl radical levels by about 9 percent. But the corresponding increase in biogenic emissions would in turn bring down hydroxyl radical levels by 6 percent. The final accounting could mean a small boost, of about 3 percent, in the atmosphere’s ability to break down methane and other chemical compounds as the planet warms.
“Hydroxyl radicals are important in determining the lifetime of methane and other reactive greenhouse gases, as well as gases that affect public health, including ozone and certain other air pollutants,” says study author Qindan Zhu, who led the work as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).
“There’s a whole range of environmental reasons why we want to understand what’s going on with this molecule,” adds Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in EAPS. “We want to make sure it’s around to chemically remove all these gases and pollutants.”
Fiore and Zhu’s new study appears today in the Journal of Advances in Modeling Earth Systems (JAMES). The study’s MIT co-authors include Jian Guan and Paolo Giani, along with Robert Pincus, Nicole Neumann, George Milly, and Clare Singer of Lamont-Doherty Earth Observatory and the Columbia Climate School, and Brian Medeiros at the National Center for Atmospheric Research.
A natural neutralizer
The hydroxyl radical, known chemically as OH, is made up of one oxygen atom and one hydrogen atom, along with an unpaired electron. This configuration makes the molecule extremely reactive. Like a chemical vacuum cleaner, OH easily pulls an electron or hydrogen atom away from other molecules, breaking them down into weaker, more water-soluble forms. In this way, OH clears a vast range of chemicals, including some air pollutants, pathogens, and ozone. And changes in OH are a powerful lever on methane.
“For methane, the reaction with OH is considered the most important loss pathway,” Zhu says. “About 90 percent of the methane that’s removed from the atmosphere is due to the reaction with OH.”
Indeed, it’s thanks to reactions with hydroxyl radical that methane can only stick around in the atmosphere for about a decade — far shorter than carbon dioxide, which can linger for 1,000 years or longer. But even as OH breaks down methane already in the atmosphere, more methane continues to accumulate. Rising methane concentrations, in addition to human-derived emissions of carbon dioxide, are driving global warming, and it’s unclear how OH’s methane-clearing power will keep up.
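The chemistry behind both roles is standard textbook atmospheric science rather than a result of the new study. OH is generated when short-wavelength sunlight splits ozone into an excited oxygen atom, which then reacts with water vapor; OH in turn removes methane by abstracting a hydrogen atom:

```latex
\begin{align*}
\mathrm{O_3} + h\nu \;(\lambda \lesssim 320~\mathrm{nm}) &\longrightarrow \mathrm{O(^1D)} + \mathrm{O_2} \\
\mathrm{O(^1D)} + \mathrm{H_2O} &\longrightarrow 2\,\mathrm{OH} \\
\mathrm{CH_4} + \mathrm{OH} &\longrightarrow \mathrm{CH_3} + \mathrm{H_2O}
\end{align*}
```

The second reaction is why a warmer, wetter atmosphere tends to produce more OH, a link that is central to the study’s results.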
“The questions we’re exploring here are: What are the main processes that control OH concentrations? And how will OH respond to climate change?” Fiore says.
An aquaplanet’s air
For their study, the researchers developed a new model to simulate levels of OH in the atmosphere under a current global climate scenario, compared to a future warmer climate. Their model, dubbed “AquaChem,” is an expansion of a simplified model that is part of a suite of tools developed by the Community Earth System Model (CESM) project. The model that the team chose to build off is one that represents the Earth as a simplified “aquaplanet,” with an entirely ocean-covered surface.
Aquaplanet models allow scientists to study detailed interactions in the atmosphere in response to changes in surface temperatures, without having to also spend computing time and energy on simulating complex dynamics between the land, water, and polar ice caps.
To the aquaplanet model, Zhu added an atmospheric chemistry component that simulates detailed chemical reactions in the atmosphere consistent with the applied surface temperatures. The chemical reactions that she modeled represent those that are known to affect OH concentrations.
OH is primarily produced when ozone interacts with sunlight in the presence of water vapor. Scientists have also found that OH levels can vary depending on certain anthropogenic and natural emissions, all of which Zhu incorporated separately and together into the AquaChem model in order to isolate the impact of each process on OH.
The emissions in particular include carbon monoxide, methane, nitrogen oxides, and volatile organic compounds (VOCs), some of which are emitted through human activities, and others of which are given off by natural processes. One type of naturally derived VOCs is “biogenic” emissions — gases, such as isoprene, that some plants and trees emit through tiny pores called stomata during transpiration.
Into the AquaChem model, Zhu plugged in data that were available for each type of emissions from the year 2000 — a year that is generally considered to represent the current climate in a simplified form. She set the aquaplanet’s sea surface temperatures to the zonal annual mean of that year, and found that the model accurately reproduced the major sensitivities of OH chemistry to the underlying chemical processing as simulated in a more complex chemistry-climate model.
Then, Zhu ran the model under a second, globally warming scenario. She set the planet’s sea surface temperatures to warm by 2 degrees Celsius (a warming that is likely to occur unless global anthropogenic carbon emissions are mitigated). The team looked at how this warming would affect the various types of emissions and chemical processes, and how these changes would ultimately affect levels of OH in the atmosphere.
In the end, they found the two biggest drivers of OH levels were rising water vapor and biogenic emissions. Global warming would increase the amount of water vapor in the atmosphere, which in turn would boost production of OH by 9 percent. However, this same degree of warming would also increase biogenic emissions such as isoprene, which reacts with and breaks down OH, bringing its levels down by 6 percent.
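Putting the two competing effects together recovers the roughly 3 percent net boost quoted earlier; the answer is essentially the same whether the percentages are combined additively or multiplicatively:

```latex
\Delta \mathrm{OH} \approx +9\% - 6\% = +3\%,
\qquad (1 + 0.09)(1 - 0.06) - 1 \approx +2.5\%
```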
The team recognizes that there are many other factors that affect the response of isoprene emissions to surface warming. Rising CO2, not considered in this study, may dampen this temperature-driven response. Of all the factors that can shift OH levels under global warming, the researchers caution that biogenic emissions are the most uncertain, even though they appear to have a large influence. Going forward, the scientists plan to update AquaChem to continue studying how biogenic emissions, as well as other processes and climate scenarios, could sway OH concentrations.
“We know that changes in atmospheric OH, even of a few percent, can actually matter for interpreting how methane might accumulate in the atmosphere,” Zhu says. “Understanding future trends of OH will allow us to determine future trends of methane.”
This work was supported, in part, by Spark Climate Solutions and the National Oceanic and Atmospheric Administration.
CryoPRISM: A new tool for observing cellular machinery in a more natural environment
The method allows researchers to observe biomolecular complexes in a quick, accurate, and budget-friendly way, providing new insights into bacterial protein synthesis.
The blobfish, once considered the ugliest animal in the world, has since had quite the redemption arc. Years after it was first discovered, scientists realized that the deep-sea creature appeared so unnervingly blobby only because it went through an extreme change in pressure when it was brought up to the surface. In its natural environment, 4,000 feet underwater, the fish looks perfectly handsome.
Structural biologists, whose goal is to deduce a molecule’s structure and function within a cell, face the risk of making a similar mistake. If biomolecular complexes are extracted from the cell, better-quality images can be obtained, but the molecules may not look natural. On the other hand, studying molecules without disrupting their environment at all is technically challenging, like filming deep underwater.
A new method, called purification-free ribosome imaging from subcellular mixtures (cryoPRISM), offers an appealing compromise. Developed by graduate students Mira May and Gabriela López-Pérez in the Davis lab in the MIT Department of Biology and recently published in PNAS, the technique allows biologists to visualize molecular complexes without taking them too far out of their natural context.
CryoPRISM captures molecular structures in cells that have just been broken open. This comes as close to preserving the natural interactions between molecules as possible, short of the extremely resource-intensive in-cell structural imaging, according to associate professor of biology Joey Davis, the faculty lead of the study.
“We think that the cryoPRISM method is a sweet spot where we preserve much of the native cellular contacts, but still have the resolution that lets us actually see molecular details,” Davis says. “Even in the extremely well-trodden system of translation in E. coli, which people have worked on for over 50 years, we are still finding new states that had just escaped people’s attention.”
A negative control that was not so negative
The development of cryoPRISM, like many discoveries in science, resulted from an unexpected observation that Mira May, the co-first author of the study, made while working on a different project.
Like all living organisms, bacteria rely on a process called translation to manufacture the proteins that carry out essential functions within the cell, from copying DNA to digesting nutrients. A key machine involved in translation is the ribosome — a biomolecular complex that assembles proteins based on instructions encoded by another molecule called mRNA. To regulate its activity, cells employ additional proteins that can change the shape of the ribosome, thus guiding its function.
May sought to identify new players in ribosomal regulation using cryoEM, a technique in which many purified molecules are rapidly frozen and imaged in thousands of 2D snapshots that are then used to reconstruct their 3D structures. She was trying to pull ribosomes out of cells to visualize them together with their regulators. For her experiments, she designed a negative control containing unpurified bacterial lysate — a mixture of everything spilled from burst cells.
May expected to get noisy, low-quality images from this sample. Instead, to her surprise, she saw intact ribosomes together with their natural interacting partners.
In just a few days, this technique experimentally validated data that would have taken months to acquire using other approaches.
“As I found more and more ribosomal states, this project became a method, not just a one-off finding,” May recalls.
Discovering new biology in a saturated field
Once May and her colleagues were confident that cryoPRISM could detect known ribosomal states, they began searching for ones that had previously escaped detection.
“It’s not just that we can recapitulate things that have been previously observed, but we can actually also discover novel ribosomal biology,” May says.
One of the novel states May identified has important implications for our understanding of the evolution of translation regulation.
During active translation, bacterial ribosomes are accompanied by a group of helper proteins called elongation factors. These factors bring in the materials for protein synthesis, like tRNAs and amino acids.
When cells encounter unfavorable conditions, such as colder temperatures, they reduce translation, which means that many ribosomes are out of work. These idle, hibernating ribosomes stop decoding mRNA, and the interface where they usually interact with helper molecules gets blocked by a hibernation factor called RaiA. This protein helps idle ribosomes avoid reactivation, like a sleeping mask that prevents a person from being woken up by light.
May observed the idle ribosomal state in her data, which on its own did not surprise her – this state had been described before. What surprised her was that some inactive ribosomes were interacting not only with RaiA, but also with an elongation factor called EF-G, which in bacteria was previously believed to only interact with active ribosomes.
A similar phenomenon has been seen before in more complex organisms, but observing it in a microbe suggests that its evolutionary origin may be older than previously thought.
“It fits an emerging model in the field, that elongation factors might bind to hibernating ribosomes to protect both the ribosome and themselves from degradation during periods of stress,” May explains. “Think of it like short-term storage.”
An unstressed cell might quickly eliminate unneeded inactive ribosomes, but because any stressor that puts ribosomes to sleep could be temporary, the cell may prefer to hold off on destroying them. That way, the ribosomes can be quickly reactivated if conditions improve.
The future of cryoPRISM
May has already teamed up with other MIT researchers to use cryoPRISM to visualize ribosomes in cells that are notoriously difficult to work with, including pathogenic organisms, which can be challenging to culture at the scale required for particle purification, and red blood cells isolated from patients, which cannot be cultured at all.
Besides its immediate application for translation research, cryoPRISM is a stepping stone toward the broader goal of structural biology: studying biomolecules in their natural environment.
To truly learn about deep-sea fish, scientists need to look at them in the deep sea; and to learn about cellular machines, scientists need to look at them in cells. According to Davis, cryoPRISM perfectly fits into the “theme of structural biology moving closer and closer to cellular context.”
This work was carried out, in part, with the use of MIT.nano facilities.
After 16 years leading Picower Institute, Li-Huei Tsai will sharpen focus on research, teaching

Tsai, who has grown the MIT neuroscience institute, will increase focus on research including Alzheimer’s disease and Down syndrome.

MIT Picower Professor Li-Huei Tsai, who has led The Picower Institute for Learning and Memory since 2009, will step down from the role of director at the end of the academic year in May. Her decision frees her to focus exclusively on her academic work, including her continued leadership of MIT’s Aging Brain Initiative and the Alana Down Syndrome Center. Meanwhile, the search for the Picower Institute’s next director has begun.
“During her exceptional 16-year tenure in the role of director, Li-Huei has led substantial growth at the Picower Institute,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “She has markedly expanded the faculty — eight of the current 16 labs joined Picower under her directorship — through successful recruitment of highly talented neuroscientists. She has done this, and more, all while leading one of our most productive and influential labs, working on a quintessentially grand challenge in human health: combating Alzheimer’s disease.”
To conduct the search for a new Picower Institute director, Mavalvala has appointed a committee led by Sherman Fairchild Professor Matthew Wilson, associate director of the institute. Serving with Wilson are Picower Professor and former institute director Mark Bear, Menicon Professor Troy Littleton, Assistant Professor Sara Prescott, and Professor Fan Wang. They will identify and interview candidates, producing a report to Mavalvala later this spring.
Growing an institute
Tsai, a professor in MIT’s Department of Brain and Cognitive Sciences and a member of The Broad Institute of MIT and Harvard, says she is grateful to have had the opportunity to build the Picower Institute into a preeminent center for neuroscience research.
“I’m immensely proud of what our institute represents: world-renowned neuroscience research that is creative, rigorous, novel, and impactful,” Tsai says. “Our labs produce innovations, discoveries, and often translational strategies that have broken new ground and pushed science, medicine, and technology forward. We also provide excellent training that has enabled us to launch the careers of many of the field’s new and future leaders. It has been a tremendous honor to be able to build on the incredible foundation and inspiration provided by my predecessors Susumu Tonegawa and Mark Bear to enable the institute’s growth and success.”
Founded by Tonegawa as the Center for Learning and Memory in 1994, and then renamed The Picower Institute for Learning and Memory after a transformative gift by Barbara and Jeffry Picower in 2002, the institute now comprises about 400 scientists, students, and staff across 16 labs in MIT’s buildings 46 and 68.
But when Tsai became director in July 2009, just three years after coming to MIT from Harvard Medical School, the Picower Institute was a smaller enterprise of 11 labs, and a community closer to 200 members. Over the ensuing years, Tsai worked closely with the Picowers’ foundation, formerly the JPB Foundation and now the Freedom Together Foundation, to develop several strategic initiatives to accelerate growth and enhance research productivity. These have included programs specifically designed to support junior faculty, to catalyze applications for private grant funding, and to sustain fellowships for more than 18 postdocs and graduate students. Working with the foundation, she has also expanded the scope of research support provided by the Picower Institute Innovation Fund begun under Bear.
Eager to galvanize colleagues across MIT in fighting neurodegenerative diseases and neurological disorders affecting cognition, Tsai also built and launched two campus-wide initiatives: The Aging Brain Initiative, founded in 2015 and sustained by a broad coalition of donors, and the Alana Down Syndrome Center, established in 2019 with a gift from The Alana Foundation.
Research focus
As the Picower Institute has grown, Tsai’s research has, too. In work spanning molecular, cellular, circuit, and network scales in the brain, Tsai has led numerous highly cited discoveries about the neurobiology of Alzheimer’s disease and has translated several of these insights into specific therapeutic strategies, including one now undergoing a national phase III clinical trial. In all, she has published more than 230 peer-reviewed neuroscience studies, generated numerous patents, and helped launch several startups. She has been named a fellow of the National Academy of Medicine, the American Academy of Arts and Sciences, and the National Academy of Inventors, and received awards including the Society for Neuroscience Mika Salpeter Lifetime Achievement Award and the Hans Wigzell Prize.
Tsai’s earliest discoveries identified key roles in neurodegeneration for the enzyme CDK5. She has pioneered understanding of how epigenetic changes in brain cells affect Alzheimer’s pathology and memory. Her work has also highlighted a critical role for DNA double-strand breaks in disease.
In more recent work, Tsai’s lab has conducted several studies using innovative human stem-cell-based cultures to advance understanding of how the biggest genetic risk factor for Alzheimer’s (a gene variant called APOE4) contributes to pathology, and how some existing medications and supplements might help. In collaboration with MIT professor of computer science Manolis Kellis, she has also published several sweeping atlases documenting how gene expression and epigenetics differ in Alzheimer’s disease. These studies have provided the field with troves of new data and have yielded new insights into what makes the brain vulnerable to disease, and what helps some people remain resilient.
Tsai has also led a collaboration with professors Emery N. Brown and Edward S. Boyden that has discovered a potential noninvasive, device-based treatment for Alzheimer’s and possibly other neurological disorders. Called “Gamma Entrainment Using Sensory Stimuli” (GENUS), the technology stimulates the senses (vision, hearing, or touch) to increase the power and synchrony of 40 Hz “gamma” waves in the brain. Numerous studies by her group and others, involving either lab animals or human volunteers, have shown that the approach can preserve brain volume, protect learning and memory, and reduce signs of Alzheimer’s pathology. Via an MIT spinoff company, the technology has now advanced to a pivotal clinical trial enrolling hundreds of people around the country.
“After 16 years leading the Picower Institute, I’m now eager to sharpen my focus on advancing human health through the work in my lab, the Aging Brain Initiative, and the Alana Center,” Tsai says.
New model predicts how mosquitoes will fly

Their flight patterns change in response to different sensory cues, a new study finds. The work could lead to more effective traps and mosquito control strategies.

A mosquito finds its target with the help of certain cues in its environment, such as a person’s silhouette and the carbon dioxide they exhale.
Now researchers at MIT and Georgia Tech have found that these visual and chemical cues help determine the insects’ flight paths. The team has developed the first three-dimensional model of mosquito flight, based on experiments with mosquitoes flying in the presence of different sensory cues.
Their model, reported today in the journal Science Advances, identifies three flight patterns that mosquitoes exhibit in response to sensory stimuli.
When they can only see a potential target, mosquitoes take a “fly-by” approach, quickly diving in toward the target, then flying back out if they do not detect any other host-confirming cues.
When they can’t see a target but can smell a chemical cue such as carbon dioxide, mosquitoes will do “double-takes,” slowing down and flitting back and forth to keep close to the source.
Interestingly, when mosquitoes receive both visual and chemical cues, such as seeing a silhouette and smelling carbon dioxide, they switch to an “orbiting” pattern, flying around a target at a steady speed as they prepare to land, much like a shark circling its prey.
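Schematically, the three reported patterns can be summarized as a simple lookup from cue combination to behavior (a hypothetical sketch for illustration; the names and structure are ours, not the study's):

```python
# Hypothetical summary of the three flight modes reported in the study,
# keyed by which sensory cues are available to the mosquito.
FLIGHT_MODES = {
    # (sees_target, smells_co2): behavior
    (True, False): "fly-by",       # dive in quickly, then fly back out
    (False, True): "double-take",  # slow down, flit back and forth near the source
    (True, True): "orbit",         # circle the target at steady speed before landing
}

def predicted_mode(sees_target: bool, smells_co2: bool) -> str:
    """Return the reported flight pattern for a given cue combination."""
    return FLIGHT_MODES.get((sees_target, smells_co2), "undirected flight")

print(predicted_mode(True, True))  # -> "orbit"
```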
The researchers say the new model can be used to predict how mosquitoes will fly in response to other cues, such as heat, humidity, and certain odors. Such predictions could help to design more effective traps and mosquito control strategies.
“Our work suggests that mosquito traps need specifically calibrated, multisensory lures to keep mosquitoes engaged long enough to be captured,” says study author Jörn Dunkel, MathWorks Professor of Mathematics at MIT. “We hope this establishes a new paradigm for studying pest behavior by using 3D tracking and data-driven modeling to decode their movement and solve major public health challenges.”
The study’s MIT co-authors are Chenyi Fei, a postdoc in MIT’s Department of Mathematics, and Alexander Cohen PhD ’26, a recent MIT chemical engineering PhD student advised by Dunkel and Professor Martin Bazant, along with Christopher Zuo, Soohwan Kim, and David L. Hu ’01, PhD ’06 of Georgia Tech, and Ring Carde of the University of California at Riverside.
Flight by numbers
Mosquitoes are considered to be the most dangerous animals in the world, given their collective impact on human health. The blood-sucking insects transmit malaria, dengue fever, West Nile virus, and other deadly diseases that together cause over 770,000 deaths each year.
Of the 3,500 known species of mosquitoes, around 100 have evolved to specifically target humans, including Aedes aegypti, a species that uses a variety of cues to seek out human hosts. Scientists have studied how certain cues attract mosquitoes, mainly by setting up experiments in wind tunnels, where they can waft cues such as carbon dioxide and study how mosquitoes respond. Such experiments have mainly recorded data such as where and when the insects land. The researchers say no study has explored how mosquitoes fly as they hunt for a host.
“The big question was: How do mosquitoes find a human target?” says Fei. “There were previous experimental studies on what kind of cues might be important. But nothing has been especially quantitative.”
At MIT, Dunkel’s group develops mathematical models to describe and predict the behavior of complex living systems, such as how worms untangle, how starfish embryos develop and swim, and how microbes evolve their community structure over time.
Dunkel looked to apply similar quantitative techniques to predict flight patterns of mosquitoes after giving a talk at Georgia Tech. David Hu, a former MIT graduate student who is now a professor of mechanical engineering at Georgia Tech, proposed a collaboration; Hu’s lab was carrying out experiments with mosquitoes at a facility at the Centers for Disease Control and Prevention in Atlanta, where they were studying the insects’ behavior in response to sensory cues. Could Dunkel’s group use the collected data to identify significant flight behavior that could ultimately help scientists control mosquito populations?
“One of the original motivations was designing better traps for mosquitoes,” says Cohen. “Figuring out how they fly around a human gives insights on how we can avoid them.”
Taking cues
For their new study, Hu and his colleagues at Georgia Tech carried out experiments with 50 to 100 mosquitoes of the Aedes aegypti species. The insects flew around inside a long, white, slightly angled rectangular room while cameras captured detailed three-dimensional trajectories of each one. In the center of the room, the researchers placed an object to represent a certain visual or chemical cue.
In some trials, they placed a black Styrofoam sphere on a stand to represent a simple visual cue. (Mosquitoes would be able to see the black sphere against the room’s white background.) In other trials, they set up a white sphere with a tube running through it to pump out carbon dioxide at rates similar to what humans breathe out. These trials represented the presence of a chemical cue, but not a visual cue.
The researchers also studied the mosquitoes’ response to both visual and chemical cues, using a black sphere that emitted carbon dioxide. Finally, they observed how mosquitoes behaved around a human volunteer who wore protective clothing that was black on one side and white on the other.
Across 20 experiments, the team generated more than 53 million data points and over 477,220 mosquito flight paths. Hu shared the data with Dunkel, whose group used the measurements to develop a model for mosquito flight behavior.
“We are proposing a very broad range of dynamical equations, and when you start out, the equation to predict a mosquito’s flight path is very complicated, with a lot of terms, including the relative importance of a visual versus a chemical cue,” Dunkel explains. “Then through iteration against data, we reduce the complexity of that equation until we get the simplest model that still agrees with the data.”
In the end, the group whittled the equations down to a simple model that accurately predicts how a mosquito will fly, given the presence of a visual cue, a chemical cue, or both. The flight paths in response to one or the other cue are markedly different. And interestingly, when both cues are present, the researchers noted that the resulting path is not “additive.” In other words, a mosquito does not simply combine the paths that it would separately take when it can both see and smell a target. Instead, the insects take a distinct path, circling rather than diving or darting around their target.
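The iterative reduction Dunkel describes can be sketched as a sequentially thresholded least-squares fit, in the spirit of sparse system identification (this is our illustration under assumed variable names, not the authors' code):

```python
import numpy as np

def prune_model(library, accel, tol=1e-2, n_iter=10):
    """Sequentially thresholded least squares: fit the measured
    accelerations as a linear combination of candidate terms, then
    repeatedly zero out small coefficients and refit, so only the
    dominant terms survive.

    library: (n_samples, n_terms) matrix of candidate terms evaluated
             along the tracked trajectories (e.g., distance to the
             target, CO2 concentration, velocity components).
    accel:   (n_samples,) measured accelerations along one axis.
    """
    coeffs, *_ = np.linalg.lstsq(library, accel, rcond=None)
    for _ in range(n_iter):
        small = np.abs(coeffs) < tol
        coeffs[small] = 0.0
        active = ~small
        if not active.any():
            break  # everything was pruned; no candidate term fits the data
        refit, *_ = np.linalg.lstsq(library[:, active], accel, rcond=None)
        coeffs[active] = refit
    return coeffs  # nonzero entries define the reduced model
```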
“Obviously there are additional cues that humans emit, like odor, heat, and humidity,” Cohen notes. “For the species we study, visual and carbon dioxide cues are the most important. But we can apply this model to study different species and how they respond to other sensory cues.”
The researchers have developed an interactive app that incorporates the new mosquito flight model. Users can experiment with different objects and set parameters such as the number of mosquitoes around the object and the type of sensory cue that is present. The model then visualizes how the mosquitoes would fly in response.
“The original hope was to have a quantitative model that can simulate mosquito behavior around various trap designs,” Cohen says. “Now that we have a model, we can start to design more intelligent traps.”
This work was supported, in part, by the National Science Foundation, Schmidt Sciences, LLC, the NDSEG Fellowship Program, and the MIT MathWorks Professorship Fund.
Pursuing a passion for public health

MIT senior Srihitha Dasari reflects on the power of experiential learning through the PKG Center for Social Impact.

MIT senior Srihitha Dasari never imagined she would be speaking in front of the United Nations about health care, technology, and the power of co-designing public health interventions in collaboration with impacted communities.
But when she stepped up to the podium to speak about digital well-being and community-centered health care design, she carried with her more than research findings. She brought several years of experiential learning in public health settings, ranging from the exam rooms of New England’s largest safety-net hospital to collaborations with nurses in rural Argentina and maternal health work in India and Nepal.
Dasari arrived at MIT intending to major in brain and cognitive sciences and follow a pre-med track. Like many aspiring physicians, she pictured her MIT years filled with lab work, shadowing doctors, and preparing for medical school. Instead, during her first Independent Activities Period (IAP), she enrolled in the PKG Center for Social Impact’s IAP Health Program and began to broaden her understanding of practicing medicine.
“What was really incredible about IAP Health,” says Dasari, is that “I did it so early in not only my academic career, but just in the beginning of when I was actually formulating a lot of my career aspirations, [and] it really immersed me into what public health looks like.”
Through IAP Health, Dasari worked as an intern at the Boston Medical Center Autism Program. There, she provided in-clinic support to children with autism and their families, helping guide them through appointments and collaborating with physicians to adapt exam techniques to meet patients’ needs.
“When you think about how medicine is delivered, it can feel very systematic — like there are boxes you have to check,” she says. “But working in that clinic showed me … you can modify the experience to truly care for the whole person.”
The program exposed her not only to clinical care, but to the broader forces that shape health outcomes. “I didn’t envision myself doing public health when I entered college,” Dasari says. “But looking back, public health is the through line of everything I’ve done.”
She remained at Boston Medical Center as an intern for over a year with continued support and funding from the PKG Center’s Federal Work-Study and Social Impact Internship programs. The sustained engagement deepened her understanding of how health-care systems can either reinforce or reduce disparities — a systems-level perspective that carried into her global work.
During her second-year IAP, Dasari received a PKG Fellowship to develop an electronic health record system for a maternal ward in a rural hospital in Argentina. The project grew out of a relationship she developed through the student group MIT Global Health Alliance, which supports co-designing public health interventions with impacted communities.
Dasari’s collaboration with the hospital evolved into a social enterprise that she co-founded: PuntoSalud, an AI-powered chatbot designed to bridge health information gaps in rural Argentina. Dasari and her co-founders received a $5,000 award and seed funding to prototype and develop PuntoSalud through the PKG IDEAS Social Innovation Incubator, MIT’s only entrepreneurship program focused solely on social impact.
Speaking at the United Nations underscored a lesson she absorbed throughout her varied experience: Meaningful health innovation begins with relationships.
“I’ve been able to meet people from so many different facets of the health-care pipeline that I didn’t envision myself meeting,” Dasari says.
The mindset she developed through PKG programming has informed her experience beyond the center. Through MIT D-Lab, Dasari conducted maternal and neonatal health needs assessments in rural Nepal, interviewing community members to better understand gaps in care. The findings informed efforts to retrofit birthing centers with improved heating systems in cold climates. Later, supported by the MIT International Science and Technology Initiatives, she traveled to India to interview health-care providers about strategies to reduce non-medical cesarean section rates, with the goal of developing policy recommendations for other health systems.
“I came in thinking I would practice medicine one-on-one,” Dasari says. “Now I want to increase my impact in the health care field. I see that as clinical medicine intersected with public health, relieving health disparities for a wider population.”
As Dasari prepares to leave MIT for a year in clinical research, she does so with a systems lens on science and health care, and a commitment to social impact.
“The path I’ve taken in health care as an undergrad student has given me both a sense of purpose and fulfillment as I prepare to leave MIT,” she says. “It’s shown me that meaningful impact can begin long before medical school, and that I want to carry forward the values these experiences instilled in me.”
For Dasari, experiential learning didn’t redirect her ambitions, but enhanced them.
“I feel like the PKG Center … it’s not changing your goals,” she says. “It’s shaping them into their fullest potential.”
Brain circuit needed to incorporate new information may be linked to schizophrenia

Impairments of this circuit may help to explain why some people with schizophrenia lose touch with reality.

One of the symptoms of schizophrenia is difficulty incorporating new information about the world. This can lead people with schizophrenia to struggle with making decisions and, eventually, to lose touch with reality.
MIT neuroscientists have now identified a gene mutation that appears to give rise to this type of difficulty. In a study of mice, the researchers found that the mutated gene impairs the function of a brain circuit that is responsible for updating beliefs based on new input.
This mutation, in a gene called grin2a, was originally identified in a large-scale screen of patients with schizophrenia. The new study suggests that drugs targeting this brain circuit could help with some of the cognitive impairments seen in people with schizophrenia.
“If this circuit doesn’t work well, you cannot quickly integrate information,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, and the associate director of the McGovern Institute for Brain Research at MIT. “We are quite confident this circuit is one of the mechanisms that contributes to the cognitive impairment that is a major part of the pathology of schizophrenia.”
Feng and Michael Halassa, a professor of psychiatry and neuroscience and director of translational research at Tufts University School of Medicine, are the senior authors of the new study, which appears today in Nature Neuroscience. Tingting Zhou, a research scientist at the McGovern Institute, and Yi-Yun Ho, a former MIT postdoc, are the lead authors of the paper.
Adapting to new information
Schizophrenia is known to have a strong genetic component. For the general population, the risk of developing the disease is about 1 percent, but that goes up to 10 percent for those who have a parent or sibling with the disease, and 50 percent for people who have an identical twin with the disease.
Researchers at the Stanley Center for Psychiatric Research at the Broad Institute have identified more than 100 gene variants linked to schizophrenia, using genome-wide association studies. However, many of those variants are located in non-coding regions of the genome, making it difficult to figure out how they might influence development of the disease.
More recently, researchers at the Stanley Center used a different strategy, known as whole-exome sequencing, to reveal gene mutations linked to schizophrenia. This technique sequences only the protein-coding regions of the genome, so it can reveal mutations that are located in known genes.
Using this approach on about 25,000 sequences from people with schizophrenia and 100,000 sequences from control subjects, the researchers identified 10 genes in which mutations significantly increase the risk of developing schizophrenia.
In the new Nature Neuroscience study, Feng and his students created a mouse model with a mutation in one of those genes, grin2a. This gene encodes a protein that forms part of the NMDA receptor — a receptor that is activated by the neurotransmitter glutamate and is often found on the surface of neurons.
Zhou then investigated whether these mice displayed any of the characteristic behaviors seen in people with schizophrenia. These individuals show many complex symptoms, including psychoses such as hallucinations and delusions (loss of contact with reality). Those are difficult to study in mice, but it is possible to study related symptoms such as difficulty in interpreting new sensory input.
Over the past two decades, schizophrenia researchers have hypothesized that psychosis may stem from an impaired ability to update beliefs based on new information.
“Our brain can form a prior belief of reality, and when sensory input comes into the brain, a neurotypical brain can use this new input to update the prior belief. This allows us to generate a new belief that’s close to what the reality is,” Zhou says. “What happens in schizophrenia patients is that they weigh too heavily on the prior belief. They don’t use as much current input to update what they believed before, so the new belief is detached from reality.”
To study this, Zhou designed an experiment that required mice to choose between two levers to press to earn a food reward. One lever was low-reward — mice had to push it six times to get one drop of milk. A high-reward lever dispensed three drops per push.
At the beginning of the study, all of the mice learned to prefer the high-reward lever. However, as the experiment went on, the number of presses required to dispense the higher reward gradually went up, while there were no changes to the low-reward lever.
As the effort required went up, healthy mice started to switch back and forth between the two levers. Once they had to press the high-reward lever around 18 times for three drops of milk, making the effort per drop about the same for each lever, they eventually switched permanently to the low-reward lever. However, mice with a mutation in grin2a showed a different behavior pattern. They spent more time switching back and forth between the two levers, and they made the switch to the low-reward side much later.
“We find that neurotypical animals make adaptive decisions in this changing environment,” Zhou says. “They can switch from the high-reward side to the low-reward side around the equal value point, while for the animals with the mutation, the switch happens much later. Their adaptive decision-making is much slower compared to the wild-type animals.”
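The “equal value point” follows from simple arithmetic. The low-reward lever always costs 6 presses per drop, while the high-reward lever’s cost per drop rises as the press requirement grows, reaching parity at

$$\frac{18 \text{ presses}}{3 \text{ drops}} = \frac{6 \text{ presses}}{1 \text{ drop}},$$

which is why about 18 presses marks the point where neurotypical mice settled permanently on the low-reward lever.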
An impaired circuit
Using functional ultrasound imaging and electrical recordings, the researchers found that the brain region affected most by the grin2a mutation was the mediodorsal thalamus. This part of the brain connects with the prefrontal cortex to form a thalamocortical circuit that is responsible for regulating cognitive functions such as executive control and decision-making.
The researchers found that neuronal activity in the mediodorsal thalamus appears to keep track of the changes in value of the two reward options. Additionally, the mice showed different patterns of neural activity depending on which state they were in — either an exploratory state or committed to one side.
The researchers also showed that they could use optogenetics to reverse the behavioral symptoms of the mice with mutated grin2a. They engineered the neurons of the mediodorsal thalamus so that they could be activated by light, and when these neurons were activated, the mice began behaving similarly to mice without the grin2a mutation.
While only a very small percentage of schizophrenia patients have mutations in the grin2a gene, it’s possible that this circuit dysfunction is a converging mechanism of cognitive impairment for a subset of schizophrenia patients with different causes.
Targeting this circuit could offer a way to overcome some of the cognitive impairments seen in people with schizophrenia, the researchers say. To do that, they are now working on identifying targets within the circuit that could be potentially druggable.
The research was funded by the National Institutes of Mental Health, the Poitras Center for Psychiatric Disorders Research at MIT, the Yang Tan Collective at MIT, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Stelling Family Research Fund at MIT, the Stanley Center for Psychiatric Research, and the Brain and Behavior Research Foundation.
Three anesthesia drugs all have the same effect in the brain, MIT researchers find

Discovering this common mechanism could lead to a universal anesthesia-delivery system to monitor patients more effectively.

When patients undergo general anesthesia, doctors can choose among several drugs. Although each of these drugs acts on neurons in different ways, they all lead to the same result: a disruption of the brain’s balance between stability and excitability, according to a new MIT study.
This disruption causes neural activity to become increasingly unstable, until the brain loses consciousness, the researchers found. The discovery of this common mechanism could make it easier to develop new technologies for monitoring patients while they are undergoing anesthesia.
“What’s exciting about that is the possibility of a universal anesthesia-delivery system that can measure this one signal and tell how unconscious you are, regardless of which drugs they’re using in the operating room,” says Earl Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.
Miller, Emery Brown, who is the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, and their colleagues are now working on an automated control system for delivery of anesthesia drugs, which would measure the brain’s stability using EEG and then automatically adjust the drug dose. This could help doctors ensure that patients stay unconscious throughout surgery without becoming too deeply unconscious, which can have negative side effects following the procedure.
Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study, which appears today in Cell Reports. MIT graduate student Adam Eisen is the paper’s lead author.
Destabilizing the brain
Exactly how anesthesia drugs cause the brain to lose consciousness has been a longstanding question in neuroscience. In 2024, a study from Miller’s and Fiete’s labs suggested that, for propofol at least, anesthesia works by disrupting the balance between stability and excitability in the brain.
When someone is awake, their brain is able to maintain this delicate balance, responding to sensory information or other input and then returning to a stable baseline.
“The nervous system has to operate on a knife’s edge in this narrow range of excitability,” Miller says. “It has to be excitable enough so different parts can influence one another, but if it gets too excited it goes off into chaotic activity.”
In that 2024 study, the researchers found that propofol knocks the brain out of this state, known as “dynamic stability.” As doses of the drug increased, the brain took longer and longer to return to its baseline state after responding to new input. This effect became increasingly pronounced until consciousness was lost.
For that study, the researchers devised a computational model that analyzes neural activity recorded from the brain. This technique allowed them to determine how the brain responds to perturbations such as an auditory tone or other sensory input, and how long it takes to return to its baseline stability.
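As a loose illustration of how such a stability measure might work (a minimal sketch under our own assumptions, not the authors’ published pipeline), one can fit a linear model to successive timepoints of neural activity and check how close its dominant eigenvalue sits to the edge of instability:

```python
import numpy as np

def stability_index(activity):
    """Estimate dynamic stability from neural recordings.

    activity: array of shape (n_timepoints, n_channels).
    Fits the one-step linear dynamics x[t+1] ~ x[t] @ M by least
    squares and returns the magnitude of M's dominant eigenvalue:
    values near 1.0 mean perturbations decay slowly (the system sits
    close to instability), while values well below 1.0 mean activity
    snaps back quickly to baseline.
    """
    X, Y = activity[:-1], activity[1:]
    M, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least-squares dynamics fit
    return float(np.abs(np.linalg.eigvals(M)).max())
```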
In their new study, the researchers used the same technique to measure how the brain responds not only to propofol but also to two additional anesthesia drugs — ketamine and dexmedetomidine. Animals were given one of the three drugs while their brain activity, including their response to auditory tones, was analyzed.
This study showed that the same destabilization induced by propofol also appears during administration of the other two drugs. This “universal signature” appears even though the three drugs have different molecular mechanisms: propofol binds to GABA receptors, inhibiting neurons that have those receptors; dexmedetomidine blocks the release of norepinephrine; and ketamine blocks NMDA receptors, suppressing neurons with those receptors.
Each of these pathways, the researchers hypothesize, affects the brain’s balance of stability and excitability in a different way, yet each leads to an overall destabilization of this balance.
“All three of these drugs appear to do the exact same thing,” Miller says. “In fact, you could look at the destabilization measure we use and you can’t tell which drug is being applied.”
The researchers now plan to further investigate how each of these drugs may give rise to the same patterns of brain destabilization.
“The molecular mechanisms of ketamine and dexmedetomidine are a bit more involved than propofol mechanisms,” Eisen says. “A future direction is to do a meaningful model of what the biophysical effects of those are and see how that could lead to destabilization.”
Monitoring anesthesia
Now that the researchers have shown that three different anesthesia drugs produce similar destabilization patterns in the brain, they believe that measuring those patterns could offer a valuable way to monitor patients during anesthesia. While anesthesia is overall a very safe procedure, it does carry some risks, especially for very young children and for people over 65.
For adults suffering from dementia, anesthesia can make the condition worse, and it can also exacerbate neuropsychiatric disorders such as depression. These risks are higher if patients go into a deeper state of unconsciousness known as burst suppression.
To help reduce those risks, Miller and Brown, who is also an anesthesiologist at Massachusetts General Hospital, are developing a prototype device that can measure patients’ EEG readings while under anesthesia and adjust their dose accordingly. Currently, doctors monitor patients’ heart rate, blood pressure, and other vital signs during surgery, but these don’t give as accurate a reading of how deeply unconscious the patient is.
“If you can limit people’s exposure to anesthesia, if you give just enough and no more, you can reduce risks across the board,” Miller says.
Working with researchers at Brown University, the MIT team is now planning to run a small clinical trial of their monitoring device with patients undergoing surgery.
The research was funded by the U.S. Office of Naval Research, the National Institute of Mental Health, the Simons Center for the Social Brain, the Freedom Together Foundation, the Picower Institute, the National Science Foundation Computer and Information Science and Engineering Directorate, the Simons Collaboration on the Global Brain, the McGovern Institute, and the National Institutes of Health.
Scientists discover genetics behind leaky brain blood vessels in Rett syndrome

By showing the problem derives from genetic mutations that lead to overexpression of a microRNA, MIT researchers’ study points to potential treatment.

MIT researchers have discovered that two common genetic mutations that cause Rett syndrome each set off a molecular chain of events that compromises the structural integrity of developing brain blood vessels, making them leaky. The study traces the problem to overexpression of a particular microRNA (miRNA-126-3p), and shows that tamping down the miRNA’s levels helps to rescue the vascular defect.
Rett syndrome is a severe developmental disorder affecting both the brain and body. It is caused by various mutations in the widely expressed MECP2 gene, but the first symptoms don’t become apparent until affected children (mostly girls) reach 2-3 years of age. Because that’s a critical time in development for the brain’s blood vessels, neuroscientists in The Picower Institute for Learning and Memory at MIT embarked on a study to model how two common but distinct MeCP2 mutations may affect vascular development and contribute to the disease’s profound neurological pathology.
To conduct the research published recently in Molecular Psychiatry, lead author Tatsuya Osaki and senior author Mriganka Sur developed advanced human tissue cultures to model vessel development, with and without the MeCP2 mutations. The cultures not only enabled them to model and closely observe how the mutations affected the vessels, but also allowed them to molecularly dissect the problems they observed and then to test an intervention that helped.
“A role for microRNAs in Rett syndrome has been shown, but now demonstrating that miRNA-126-3p is actually downstream of MeCP2 and directly implicated in the endothelial cell dysfunction is an important piece of the Rett syndrome puzzle,” says Sur, the Newton Professor of Neuroscience in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences.
Building vessels and spotting leaks
Building on years of tissue engineering experience, including time as a postdoc in the lab of co-author Roger D. Kamm, an MIT professor of mechanical engineering and biological engineering, Osaki built “3-dimensional microvascular networks” using induced pluripotent stem cells (iPS cells) derived from cells donated by patients with Rett syndrome. The donated cells were reprogrammed into stem cells and then induced to become endothelial cells, the cells that line blood vessels. Embedded in a gel and mixed with fibroblast cells, the endothelial cells self-assembled into networks of tubes, which Osaki then hooked up to microfluidics to provide circulation.
One set of the cultures harbored the mutation R306C; using CRISPR, Osaki created a control microvasculature that was genetically identical except that it lacked the mutation. Another set of the cultures had the R168X mutation, and again Osaki paired it with a CRISPR-edited control culture that was identical except for the mutation.
The research team chose these two mutations because they are each relatively common but affect the MeCP2 gene differently, Sur says. The finding that each of these distinct Rett-causing mutations ultimately led to upregulating miRNA-126-3p and undermining blood vessel integrity suggests that vascular problems are indeed a central feature of the disease.
“There is something common across these mutations,” Sur says.
In particular, lab tests showed that the vessels harboring either mutation showed reduced expression of a protein called ZO-1, which is critical for ensuring that the junctions among endothelial cells in blood vessels form a tight seal (like the grout in a tile floor). ZO-1 also didn’t localize to those junctions as well. Sure enough, further tests showed that the Rett-mutation vessel cultures were relatively leaky compared to the controls.
Similar deficiencies were evident in another cell culture the team created, in which they added astrocyte cells to even more closely simulate the blood-brain barrier (BBB), which tightly regulates what can pass from blood vessels into the brain. BBB problems are widely suspected of contributing to neurodegenerative diseases such as Alzheimer’s, Huntington’s, ALS, and frontotemporal dementia.
To gain some insight into how the vascular problems might undermine neural function in Rett syndrome, the researchers exposed neurons to medium from their Rett vasculature cultures. Those nerve cells showed reduced electrical activity, a possible sign that secretions from the Rett endothelial cells disrupted the neurons.
Catching a culprit
Generally speaking, the role of MeCP2 is to repress the expression of other genes. The scientists’ expectation, therefore, was that when MeCP2 is compromised by mutations the result would be overexpression of many genes. Yet ZO-1 was downregulated. Something had to account for that and miRNAs were a suspect, Osaki says, because they function as regulators of gene expression.
“That’s why we hypothesized that we should have some mediator between the MeCP2 mutation and ZO-1 downregulation and the BBB permeability increase,” Osaki says. “We focused on the microRNAs.”
Indeed, by profiling miRNAs in the Rett cultures and the controls, the scientists found that miRNA-126-3p was overexpressed. And by sequencing RNA, the team identified more molecular pathways needed to support vascular integrity that were dysregulated in the Rett cultures.
While the sequencing and profile associated miRNA-126-3p upregulation with the altered molecular chain of events, Osaki and Sur sought more definitive proof. To obtain it, they treated the Rett-mutation cultures with an “antisense” — a molecule that reduces miRNA-126-3p levels. Doing that resulted in an increase in ZO-1 expression and a partial restoration of endothelial cell barrier function — meaning less leakiness — in the vessel cultures. Knocking down the miRNA’s expression also restored the molecular pathways the scientists were tracking to more healthy states.
It turns out that there is a drug that inhibits miR-126 called miRisten that is undergoing clinical testing for leukemia. Osaki and Sur say they are planning on administering it to mice modeling Rett syndrome to see if it helps them.
In addition to Osaki, Sur, and Kamm, the paper’s co-authors are Zhengpeng Wan, Koji Haratani, Ylliah Jin, Marco Campisi, and David Barbie.
Funding for the study came from sources including the National Institutes of Health, a MURI grant, The Freedom Together Foundation, and the Simons Center for the Social Brain.
How the brain handles the “cocktail party problem”

Using a computational model, neuroscientists showed how the brain can selectively focus attention on one voice among others in a noisy environment.

MIT neuroscientists have figured out how the brain is able to focus on a single voice among a cacophony of many voices, shedding light on a longstanding neuroscientific phenomenon known as the cocktail party problem.
This attentional focus becomes necessary when you’re in any crowded environment, such as a cocktail party, with many conversations going on at once. Somehow, your brain is able to follow the voice of the person you’re talking to, despite all the other voices that you’re hearing in the background.
Using a computational model of the auditory system, the MIT team found that amplifying the activity of the neural processing units that respond to features of a target voice, such as its pitch, allows that voice to be boosted to the forefront of attention.
“That simple motif is enough to cause much of the phenotype of human auditory attention to emerge, and the model ends up reproducing a very wide range of human attentional behaviors for sound,” says Josh McDermott, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.
The findings are consistent with previous studies showing that when people or animals focus on a specific auditory input, neurons in the auditory cortex that respond to features of the target stimulus amplify their activity. This is the first study to show that this extra boost is enough to explain how the brain solves the cocktail party problem.
Ian Griffith, a graduate student in the Harvard Program in Speech and Hearing Biosciences and Technology, who is advised by McDermott, is the lead author of the paper. MIT graduate student R. Preston Hess is also an author of the paper, which appears today in Nature Human Behaviour.
Modeling attention
Neuroscientists have been studying the phenomenon of selective attention for decades. Many studies in people and animals have shown that when focusing on a particular stimulus like the sound of someone’s voice, neurons that are tuned to features of that voice — for example, high pitch — amplify their activity.
When this amplification occurs, neurons’ firing rates are scaled upward, as though multiplied by a number greater than one. It has been proposed that these “multiplicative gains” allow the brain to focus its attention on certain stimuli. Neurons that aren’t tuned to the target feature exhibit a corresponding reduction in activity.
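In its simplest schematic form (our notation, not necessarily the paper’s), the gain model scales each neuron’s response $r_i$ by a factor $g_i$:

$$r_i' = g_i \, r_i, \qquad g_i > 1 \text{ for neurons tuned to the attended feature}, \quad g_i < 1 \text{ otherwise.}$$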
“The responses of neurons tuned to features that are in the target of attention get scaled up,” Griffith says. “Those effects have been known for a very long time, but what’s been unclear is whether that effect is sufficient to explain what happens when you’re trying to pay attention to a voice or selectively attend to one object.”
This question has remained unanswered because computational models of perception haven’t been able to perform attentional tasks such as picking one voice out of many. Such models can readily perform auditory tasks when there is an unambiguous target sound to identify, but they haven’t been able to perform those tasks when other stimuli are competing for their attention.
“None of our models has had the ability that humans have, to be cued to a particular object or a particular sound and then to base their response on that object or that sound. That’s been a real limitation,” McDermott says.
In this study, the MIT team wanted to see if they could train models to perform those types of tasks by enabling the model to produce neuronal activity boosts like those seen in the human brain.
To do that, they began with a neural network that they and other researchers have used to model audition, and then modified the model to allow each of its stages to implement multiplicative gains. Under this architecture, the activation of processing units within the model can be boosted up or down depending on the specific features they represent, such as pitch.
To train the model, on each trial the researchers first fed it a “cue”: an audio clip of the voice that they wanted the model to pay attention to. The unit activations produced by the cue then determined the multiplicative gains that were applied when the model heard a subsequent stimulus.
“Imagine the cue is an excerpt of a voice that has a low pitch. Then, the units in the model that represent low pitch would get multiplied by a large gain, whereas the units that represent high pitch would get attenuated,” Griffith says.
Then, the model was given clips featuring a mix of voices, including the target voice, and asked to identify the second word said by the target voice. The model activations to this mixture were multiplied by the gains that resulted from the previous cue stimulus. This was expected to cause the target voice to be “amplified” within the model, but it was not clear whether this effect would be enough to yield human-like attentional behavior.
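A toy version of this cue-then-gain mechanism might look like the following (hypothetical names and a drastically simplified model; the study’s actual system is a deep auditory network):

```python
import numpy as np

def cue_gains(cue_activations, strength=1.0):
    """Turn the unit activations evoked by the cue into multiplicative
    gains: units the cue drove strongly get gains above 1, units it
    drove weakly get gains below 1."""
    normalized = cue_activations / (cue_activations.mean() + 1e-8)
    return normalized ** strength

def attend(mixture_activations, gains):
    """Apply feature-based attention: scale each unit's response to
    the sound mixture by its cue-derived gain, boosting units that
    carry the target voice."""
    return mixture_activations * gains

# Toy usage with 4 feature units (say, pitch channels):
cue = np.array([0.1, 0.2, 1.5, 0.2])  # the cued voice mostly drives unit 2
mix = np.array([0.8, 0.9, 0.7, 1.0])  # a mixture of voices drives many units
print(attend(mix, cue_gains(cue)))    # unit 2's response is amplified
```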
The researchers found that under a variety of conditions, the model performed very similarly to humans, and it tended to make errors similar to those that humans make. For example, like humans, it sometimes made mistakes when trying to focus on one of two male voices or one of two female voices, which are more likely to have similar pitches.
“We did experiments measuring how well people can select voices across a pretty wide range of conditions, and the model reproduces the pattern of behavior pretty well,” Griffith says.
Effects of location
Previous research has shown that in addition to pitch, spatial location is a key factor that helps people focus on a particular voice or sound. The MIT team found that the model also learned to use spatial location for attentional selection, performing better when the target voice was at a different location from distractor voices.
The researchers then used the model to discover new properties of human spatial attention. Using their computational model, the researchers were able to test all possible combinations of target locations and distractor locations, an undertaking that would be hugely time-consuming with human subjects.
“You can use the model as a way to screen large numbers of conditions to look for interesting patterns, and then once you find something interesting, you can go and do the experiment in humans,” McDermott says.
These experiments revealed that the model was much better at correctly selecting the target voice when the target and distractor were at different locations in the horizontal plane. When the sounds were instead separated in the vertical plane, this task became much more difficult. When the researchers ran a similar experiment with human subjects, they observed the same result.
“That was just one example where we were able to use the model as an engine for discovery, which I think is an exciting application for this kind of model,” McDermott says.
Another application the researchers are pursuing is using this kind of model to simulate listening through a cochlear implant. These studies, they hope, could lead to improvements in cochlear implants that could help people with such implants focus their attention more successfully in noisy environments.
The research was funded by the National Institutes of Health.
3 Questions: Fortifying our planetary defenses

MIT astronomers are developing a new way to detect, monitor, and mitigate the threats posed by smaller asteroids to our critical space infrastructure.

When people think of asteroids, they tend to picture rare, civilization-ending impacts like those depicted in movies such as “Armageddon.” In reality, the asteroids most likely to affect modern society are much smaller. While kilometer-scale impacts occur only every tens of millions of years, decameter-scale (building-sized) objects strike Earth far more frequently: roughly every couple of decades. As astronomers develop new ways to detect and track these smaller asteroids, planetary defense becomes increasingly relevant for protecting the space-based infrastructure that underpins modern life, from GPS navigation to global communications.
The good news for us earthlings is that a team of MIT researchers is on this space-case. Associate Professor Julien de Wit, Research Scientist Artem Burdanov, and their colleagues recently developed a new asteroid-detection method that could be used to track potential asteroid impactors and help protect our planet. They have now applied this new technique to the James Webb Space Telescope (JWST), demonstrating that JWST can be used to detect and characterize decameter-scale asteroids all the way out to the main belt, a crucial step in fortifying our planetary safety and security. De Wit and his colleagues, together with Andrew Rivkin PhD ’91, recently co-led new observations of an asteroid called 2024 YR4, which made headlines last year when it was first discovered. They were able to determine that the asteroid will not collide with the Moon, a collision that could have affected Earth’s critical satellite systems.
De Wit, Burdanov, Assistant Professor Richard Teague, and Research Scientist Saverio Cambioni spoke to MIT News about the importance of planetary defense and how MIT astronomers are helping to lead the charge to ensure our planet’s safety.
Q: What is planetary defense and how is the field changing?
Burdanov: Planetary defense is a field of science and engineering that’s focused on preventing asteroids and comets from hitting the Earth. While traditionally the field has been focused on much larger asteroids, thanks to new observational capabilities the field is growing to include monitoring much smaller asteroids that could also have an impact.
De Wit: When people think about asteroids they tend to think of impacts along the lines of these rare, civilization-ending “dinosaur killer” asteroids — objects that are scientifically fascinating but, happily, statistically unlikely on human timescales. But as soon as you move to smaller asteroids, there are so many of them that you’re looking at impacts happening every few decades or less. That becomes much more relevant on human timescales.
Now that our society has become increasingly reliant on space-based infrastructure for communication, navigation technologies like GPS and satellite-based security systems, we can be affected by different populations of smaller asteroids. These smaller asteroids will probably lead to zero direct human casualties but would have very different consequences on our space infrastructure. At the same time, because they are smaller, they require different technologies to monitor and understand them, both for the detection and for the characterization. At MIT, we are working to redefine planetary defense in a way that is far more pertinent, personable, and practical — focusing on these much smaller asteroids that could have real consequences. In other words, planetary defense is no longer just about avoiding extinction-level events. It is about protecting the systems we depend on in the near term.
Q: Why are observations with telescopes like the James Webb Space Telescope (JWST) so important to keeping our planet safe?
Teague: We’re entering a time now where we have these large-scale sky surveys that are going to be producing an incredible amount of data. We’re trying to develop the framework here at MIT where we can sift through that data as quickly and efficiently as possible, and then use the resources that we have available, such as the optical and radio observatories that we run like the MIT Haystack and Wallace Observatories, to follow up on those potential threats as quickly as possible and determine whether they could be problematic.
We’ve been doing trial observations to try and piece together how fast we can do this. The challenging thing is that the smaller objects that we’ve been talking about, the decameter ones, are really hard to detect from the ground. They’re just so small, and so that’s why we really need to use space-based facilities like JWST to help keep our planet safe. JWST is just incomparable, really, for detecting these very small, faint objects. A lot of our work at the moment at MIT is trying to understand how we build that entire pipeline — from detection to risk assessment to mitigation — under one roof to make it as efficient as possible. And I think this is a really MIT-type of problem to solve. There are not many places that have the same range of experts in astronomy and engineering and technology to really tackle this properly. It’s really exciting that MIT hosts all these sorts of experts that we’re bringing together to solve this problem and keep our planet safer.
Cambioni: There is going to be what I like to call an asteroid revolution coming up because in addition to JWST’s observational capabilities, there is a new observatory in Chile called the Vera Rubin Observatory that could increase the number of known small objects in space by a factor of 10. The most important thing to keep in mind, though, is that this observatory will detect the objects but may lose track of a lot of them. This is where a part of our work is coming in, to basically follow that object and map it as soon as possible. Additionally, Vera Rubin only looks at the reflected light, and it doesn’t get a precise estimate of an asteroid’s size. This gap between detection and characterization is a fundamental problem of asteroid science, between how many objects we discover and how fast we can characterize them. At MIT, we are using our in-house capabilities to help characterize these objects. That includes the MIT Wallace Observatory and the MIT Haystack Observatory.
Q: What role can MIT play in this new era of planetary defense?
De Wit: The reality is that, given the occurrence rate of these smaller asteroids and the new observational capabilities now coming online — from the Rubin Observatory to space-based facilities like JWST — we expect that within the next decade we will identify a handful of decameter-scale objects whose trajectories place them on course to impact the Earth-Moon system within this century. At that point, society will face a very practical question: whether, and how, to respond. Because these are much smaller objects than the dinosaur-killing asteroids, the types of mitigation strategies that we may envision are different. This is also where I think MIT might have an important role to play in the development, design, and potentially even construction of cost-effective, rapid-response asteroid-mitigation strategies. To help organize that effort, we have begun bringing together researchers across the Institute through the Planetary Defense at MIT project, working closely with colleagues on the engineering side.
Teague: What I’m particularly excited about is the way we’ve managed to engage students at MIT in this research as well. We’ve really focused on the impactful research and the way we’re bridging departments and labs within MIT, and this has been a fantastic way to engage students with practical astronomy and research. Saverio has run an IAP [Independent Activities Period] course, and we’re also running a student observing lab with the Wallace Observatory, where we hire a cohort of students every semester, and they’re taught how to use these observatories remotely. They take the data, do the analysis, and this semester, we’ve got on the order of 10 undergraduate students that are going to be working throughout the semester to take these observations and help us build this observation pipeline.
It’s great that here at MIT we’re not only pushing the forefront of the research, but we’re also training the next generation of astronomers that is going to come in and carry this project through and into the future.
Curiosity-driven research has long sparked technological transformations. A century ago, curiosity about atoms led to quantum mechanics, and eventually the transistor at the heart of modern computing. Conversely, the steam engine was a practical breakthrough, but it took fundamental research in thermodynamics to fully harness its power.
Today, artificial intelligence and science find themselves at a similar inflection point. The current AI revolution has been fueled by decades of research in the mathematical and physical sciences (MPS), which provided the challenging problems, datasets, and insights that made modern AI possible. The 2024 Nobel Prizes in physics and chemistry, recognizing foundational AI methods rooted in physics and AI applications for protein design, made this connection impossible to miss.
In 2025, MIT hosted a Workshop on the Future of AI+MPS, funded by the National Science Foundation with support from the MIT School of Science and the MIT departments of Physics, Chemistry, and Mathematics. The workshop brought together leading AI and science researchers to chart how the MPS domains can best capitalize on — and contribute to — the future of AI. Now a white paper, with recommendations for funding agencies, institutions, and researchers, has been published in Machine Learning: Science and Technology. In this interview, Jesse Thaler, MIT professor of physics and chair of the workshop, describes key themes and how MIT is positioning itself to lead in AI and science.
Q: What are the report’s key themes regarding last year’s gathering of leaders across the mathematical and physical sciences?
A: Gathering so many researchers at the forefront of AI and science in one room was illuminating. Though the workshop participants came from five distinct scientific communities — astronomy, chemistry, materials science, mathematics, and physics — we found many similarities in how we are each engaging with AI. A real consensus emerged from our animated discussions: Coordinated investment in computing and data infrastructures, cross-disciplinary research techniques, and rigorous training can meaningfully advance both AI and science.
One of the central insights was that this has to be a two-way street. It’s not just about using AI to do better science; science can also make AI better. Scientists excel at distilling insights from complex systems, including neural networks, by uncovering underlying principles and emergent behaviors. We call this the “science of AI,” and it comes in three flavors: science driving AI, where scientific reasoning informs foundational AI approaches; science inspiring AI, where scientific challenges push the development of new algorithms; and science explaining AI, where scientific tools help illuminate how machine intelligence actually works.
In my own field of particle physics, for instance, researchers are developing real-time AI algorithms to handle the data deluge from collider experiments. This work has direct implications for discovering new physics, but the algorithms themselves turn out to be valuable well beyond our field. The workshop made clear that the science of AI should be a community priority — it has the potential to transform how we understand, develop, and control AI systems.
Of course, bridging science and AI requires people who can work across both worlds. Attendees consistently emphasized the need for “centaur scientists” — researchers with genuine interdisciplinary expertise. Supporting these polymaths at every career stage, from integrated undergraduate courses to interdisciplinary PhD programs to joint faculty hires, emerged as essential.
Q: How do MIT’s AI and science efforts align with the workshop recommendations?
A: The workshop framed its recommendations around three pillars: research, talent, and community. As director of the NSF Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) — a collaborative AI and physics effort among MIT and Harvard, Northeastern, and Tufts universities — I’ve seen firsthand how effective this framework can be. Scaling this up to MIT, we can see where progress is being made and where opportunities lie.
On the research front, MIT is already enabling AI-and-science work in both directions. Even a quick scroll through MIT News shows how individual researchers across the School of Science are pursuing AI-driven projects, building a pipeline of knowledge and surfacing new opportunities. At the same time, collaborative efforts like IAIFI and the Accelerated AI Algorithms for Data-Driven Discovery (A3D3) Institute concentrate interdisciplinary energy for greater impact. The MIT Generative AI Impact Consortium is also supporting application-driven AI work at the university scale.
To foster early-career AI-and-science talent, several initiatives are training the next generation of centaur scientists. The MIT Schwarzman College of Computing’s Common Ground for Computing Education program helps students become “bilingual” in computing and their home discipline. Interdisciplinary PhD pathways are also gaining traction; IAIFI worked with the MIT Institute for Data, Systems, and Society to create one in physics, statistics, and data science, and about 10 percent of physics PhD students now opt for it — a number that’s likely to grow. Dedicated postdoctoral roles like the IAIFI Fellowship and Tayebati Fellowship give early-career researchers the freedom to pursue interdisciplinary work. Funding centaur scientists and giving them space to build connections across domains, universities, and career stages has been transformative.
Finally, community-building ties it all together. From focused workshops to large symposia, organizing interdisciplinary events signals that AI and science isn’t siloed work — it’s an emerging field. MIT has the talent and resources to make a significant impact, and hosting these gatherings at multiple scales helps establish that leadership.
Q: What lessons can MIT draw about further advancing its AI-and-science efforts?
A: The workshop crystallized something important: The institutions that lead in AI and science will be the ones that think systematically, not piecemeal. Resources are finite, so priorities matter. Workshop attendees were clear about what becomes possible when an institution coordinates hires, research, and training around a cohesive strategy.
MIT is well positioned to build on what’s already underway with more structural initiatives — joint faculty lines across computing and scientific domains, expanded interdisciplinary degree pathways, and deliberate “science of AI” funding. We’re already seeing moves in this direction; this year, the MIT Schwarzman College of Computing and the Department of Physics are conducting their first-ever joint faculty search, which is exciting to see.
The virtuous cycle of AI and science has the potential to be truly transformative — offering deeper insight into AI, accelerating scientific discovery, and producing robust tools for both. By developing an intentional strategy, MIT will be well positioned to lead in, and benefit from, the coming waves of AI.
Finding a nanoscale solution to safer spaceflight

Using boron nitride nanotubes, mechanical engineering doctoral student Palak Patel develops materials for space that block dangerous ionizing radiation.

“I’ve loved space for as long as I can remember,” says Palak Patel, a sixth-year doctoral student in MIT’s Department of Mechanical Engineering (MechE). As a girl, she “devoured” books about planets in the solar system, and her parents nurtured her growing interest in space through visits to observatories, air and space museums, and NASA centers. Spending time with her grandfather, who oversaw the radiation protection division of India’s Bhabha Atomic Research Center, also made a big impression on her.
Now, Patel specializes in developing advanced materials that could transform the future of human spaceflight. “My research fundamentally tries to figure out how to keep astronauts safe in space,” she says. From designing radiation-shielding nanocomposites to training as an analog astronaut, she’s at the forefront of work that bridges the nanoscale and interplanetary scale.
Born in the United States, she moved to India at age 13. As an undergraduate in mechanical engineering there, she became heavily focused on research. Patel interned at the Indian Space Research Organization (ISRO) during her senior year, where she was drawn to the challenges of space-grade manufacturing. “It’s one of the few areas where you need things to be really precise and clean and perfect,” she says.
After graduation, she joined a company that built components for ISRO missions, working as a project engineer. She was in charge of setting up a facility and standard operating procedure for manufacturing rectangular aluminum waveguide bends and twists for satellites — a process that she had helped ISRO develop and optimize as an intern. The experience cemented her interest in space research — and prompted her application to MIT. “I wanted something a bit more technical, a bit more research-focused,” she says.
Harnessing the power of nanotubes
At MIT, Patel joined the lab of Brian Wardle in the Department of Aeronautics and Astronautics (AeroAstro). She specializes in synthesizing nanotubes (tiny cylindrical structures with hollow cores, known for their remarkable strength and versatility) and manufacturing multifunctional nanocomposites.
For her master’s degree, she used her mechanical engineering expertise to integrate nanotubes into existing aerospace materials. “Modern-day airplanes are more than 50 percent composite materials — glass fiber and carbon fiber composites,” she explains. “Putting carbon nanotubes into existing composites can improve their mechanical properties and add multifunctionalities.”
Beyond structural enhancement, the nanotubes provide additional functionalities. For instance, integrating nanotubes into composite materials allows airplane wings to resist ice formation, extending flight durations. The materials can also help detect cracks before catastrophic failures occur.
After finishing her master’s studies, Patel decided she wanted to focus explicitly on space applications, so Wardle connected her with colleagues at NASA. One of them, Valerie Wiesner — a NASA scientist who would later become her research mentor — introduced Patel to boron nitride nanotubes, which have a different superpower: radiation shielding.
Developing safer materials for spaceflight
Ionizing radiation is one of the biggest obstacles to space travel. When space radiation hits the aluminum used in most spacecraft, it can create dangerous secondary neutrons — a serious risk for humans on board. “You can’t safely travel to Mars with the current state-of-the-art materials,” Patel says.
Boron nitride nanotubes offer a lightweight, high-performance way to block that radiation without compromising mechanical integrity. And thanks to a breakthrough process developed in Wardle’s lab, Patel can synthesize them at concentrations far beyond NASA’s previous limits — up to 50 percent by weight, compared to 5-10 percent in earlier composites.
This kind of work requires an unusual blend of disciplines, and Patel credits her coursework at MIT for helping her build a strong foundation. “When you think about manufacturing on a large scale, you’re like, I could just figure out how to cut this. But then, on a micro and nano scale, you can’t physically take a knife and cut anything. You have to think about chemical methods and atomic scale synthesis and processes.”
Patel’s research earned her a prestigious NASA Space Technology Graduate Research Opportunities fellowship, which allows her to regularly test her materials at multiple NASA sites. “MIT is the only place where you can synthesize these nanotubes the way we do,” she says. “We’ve got some results that look great.”
In May 2025, Patel took part in a microgravity flight to assess the feasibility of manufacturing these materials in space. The mission was successful: The nanotubes she manufactured have since made it to the International Space Station (ISS).
In addition to her primary research on boron nitride nanotubes, Patel also participates in NASA competitions aimed at solving practical space exploration challenges. Her first project involved developing a system to drill into lunar and Martian surfaces to extract water, tapping her hands-on engineering skills. These competitions have not only provided her with practical experience, but have also led to additional collaborations with NASA scientists.
Patel also participated in a Swiss-based analog mission called Asclepios III, serving as the CAPCOM (capsule communicator) for the analog astronaut team. The 14-day mission involved extreme environment training. “We did mini-parabolic flights, where you can experience microgravity in a plane, which is really nice. And it was in Italy, over the Alps, so that made it twice as nice!” she says.
“The best part of MIT”
When she’s not at NASA, Patel splits her time between the AeroAstro and MechE departments — and between her lab work and her hobbies. Most of her extracurricular activities involve her friends, whether it’s paint nights (painting planets in abstract form is one of her favorite subjects), playing soccer, or exploring the outdoors, especially skiing, hiking, kayaking, and camping. “My time with friends here at MIT has been really important to me. I’ve made so many important friendships along the way,” she says.
Now in the home stretch of her PhD, Patel is focusing on developing novel materials for spaceflight applications, from improving thermal protection systems to safeguard astronauts during atmospheric re-entry to mitigating the impact of lunar dust — a significant problem during the Apollo missions, she notes. “The dust, sharp and electrostatic, stuck to everything and cut through spacesuits.”
After graduating, she plans to continue working on technologies that support human spaceflight. “The space industry is at a really exciting stage with the return to the moon and the focus on getting humans to Mars. I think it would be really fun to enter the industry at the moment and work closer to where all the action is happening. I imagine it being very similar to how people felt working on the Apollo, space shuttle, and ISS missions years ago.”
No matter where her career leads her next, Patel feels well prepared.
“There are amazing opportunities at MIT, and I’ve gotten to work on some really cool projects,” she says. “But it’s only cool because I get to work with other people. The students, the staff, the professors — they’re the best part of MIT.”
3 Questions: Building predictive models to characterize tumor progression

Assistant Professor Matthew Jones is working to decode molecular processes on the genetic, epigenetic, and microenvironment levels to anticipate how and when tumors evolve to resist treatment.

Just as Darwin’s finches evolved in response to natural selection in order to endure, the cells that make up a cancerous tumor similarly counter selective pressures in order to survive, evolve, and spread. Tumors are, in fact, complex sets of cells with their own unique structure and ability to change.
Today, artificial intelligence and machine learning tools offer an unparalleled opportunity to illuminate the generalizable rules governing tumor progression on the genetic, epigenetic, metabolic, and microenvironmental levels.
Matthew G. Jones, an assistant professor in the MIT Department of Biology, the Koch Institute for Integrative Cancer Research, and the Institute for Medical Engineering and Science, hopes to use computational approaches to build predictive models — to play a game of chess with cancer, making sense of a tumor’s ability to evolve and resist treatment with the ultimate goal of improving patient outcomes. In this interview, he describes his current work.
Q: What aspect of tumor progression are you working to explore and characterize?
A: A very common story with cancer is that patients will respond to a therapy at first, and then eventually that treatment will stop working. The reason this largely happens is that tumors have an incredible, and very challenging, ability to evolve: the ability to change their genetic makeup, protein signaling composition, and cellular dynamics. The tumor as a system also evolves at a structural level. Oftentimes, the reason why a patient succumbs to a tumor is because either the tumor has evolved to a state we can no longer control, or it evolves in an unpredictable manner.
In many ways, cancers can be thought of as, on the one hand, incredibly dysregulated and disorganized, and on the other hand, as having their own internal logic, which is constantly changing. The central thesis of my lab is that tumors follow stereotypical patterns in space and time, and we’re hoping to use computation and experimental technology to decode the molecular processes underlying these transformations.
We’re focused on one specific way tumors evolve: through a form of DNA amplification called extrachromosomal DNA, or ecDNA. Excised from the chromosome, these ecDNAs are circularized and exist as their own separate pool of DNA particles in the nucleus.
Initially discovered in the 1960s, ecDNAs were thought to be a rare event in cancer. However, as researchers began applying next-generation sequencing to large patient cohorts in the 2010s, it became clear not only that these ecDNA amplifications were allowing tumors to adapt to stresses, and therapies, faster, but that they were far more prevalent than initially thought.
We now know these ecDNA amplifications appear in about 25 percent of cancers, including the most aggressive ones: brain, lung, and ovarian cancers. We have found that, for a variety of reasons, ecDNA amplifications are able to change the rule book by which tumors evolve, allowing them to progress to more aggressive disease in very surprising ways.
Q: How are you using machine learning and artificial intelligence to study ecDNA amplifications and tumor evolution?
A: There’s a mandate to translate what I’m doing in the lab to improve patients’ lives. I want to start with patient data to discover how various evolutionary pressures are driving disease and the mutations we observe.
One of the tools we use to study tumor evolution is single-cell lineage tracing technologies. Broadly, they allow us to study the lineages of individual cells. When we sample a particular cell, not only do we know what that cell looks like, but we can (ideally) pinpoint exactly when aggressive mutations appeared in the tumor’s history. That evolutionary history gives us a way of studying these dynamic processes that we otherwise wouldn’t be able to observe in real time, and helps us make sense of how we might be able to intercept that evolution.
I hope we’re going to get better at stratifying patients who will respond to certain drugs, to anticipate and overcome drug resistance, and to identify new therapeutic targets.
Q: What excited you about joining the MIT community?
A: One of the things that I was really attracted to was the integration of excellence in both engineering and biological sciences. At the Koch Institute, every floor is structured to promote this interface between engineers and basic scientists, and beyond campus, we can connect with all the biomedical research enterprises in the greater Boston area.
Another thing that drew me to MIT was the fact that it places such a strong emphasis on education, training, and investing in student success. I’m a personal believer that what distinguishes academic research from industry research is that academic research is fundamentally a service job, in that we are training the next generation of scientists.
It was always a mission of mine to bring excellence to both computational and experimental technology disciplines. The types of trainees I’m hoping to recruit are those who are eager to collaborate and solve big problems that require both disciplines. The KI [Koch Institute] is uniquely set up for this type of hybrid lab: my dry lab is right next to my wet lab, and it’s a source of collaboration and connection, and that reflects the KI’s general vision.
How Joseph Paradiso’s sensing innovations bridge the arts, medicine, and ecology

From early motion-sensing platforms to environmental monitoring, the professor and head of the Program in Media Arts and Sciences has turned decades of cross-disciplinary research into real-world impact.

Joseph Paradiso thinks that the most engaging research questions usually span disciplines.
Paradiso was trained as a physicist and completed his PhD in experimental high-energy physics at MIT in 1981. His father was a photographer and filmmaker working at MIT, MIT Lincoln Laboratory, and the MITRE Corporation, so he grew up in a house where artists, scientists, and engineers regularly gathered and interesting music was always playing.
That mix of influences led him to the MIT Media Lab, where he is the Alexander W. Dreyfoos Professor, academic head of the Program in Media Arts and Sciences, and director of the Responsive Environments research group.
At the Media Lab, Paradiso conducts research that engages sensing of different kinds and applies it across diverse and often extreme applications. He works on developing technologies that can efficiently capture and process multiple sensing modalities, and leverages this capability in application domains like the internet of things, medicine, environmental sensing, space exploration, and artistic expression. These efforts use that information to help people better understand the world, express themselves, and connect with one another.
Early in his career, Paradiso helped pioneer the field of wireless wearable sensing. He built many systems with multiple embedded sensors that could send information from the human body in real time. One of his early flagship projects in this area, fielded in 1997 for real-time augmented dance performance, embedded 16 sensors in each shoe, allowing wearers’ movements to directly generate music through algorithmic mapping. Ever since, Paradiso’s research at the Media Lab has consistently focused on sensing and on using that information in new ways.
“When I would list all the sensors … people would laugh. But now, my watch is measuring most of these things,” Paradiso notes. “The world has moved.”
That progression from early prototypes to everyday technology helped lay the groundwork for devices people now use regularly to track activity, health, and performance.
As sensing systems improved, Paradiso expanded his work from individuals to groups. He developed platforms that allowed dance ensembles to create music together through their collective motion. Achieving this required Paradiso and his team to develop new ways for compact wearable devices to communicate wirelessly at high speed, as well as new approaches to real-time data processing and extending the range of available microelectromechanical systems (MEMS) sensors.
Those same sensing platforms were later adapted for sports medicine in 2006. Working with doctors who support elite athletes, Paradiso used arrays of compact, wearable sensors to capture large amounts of high-speed motion data from multiple points on the body, aimed at helping clinicians assess injury risk, performance, and recovery on the go, without the complex equipment typically associated with biomechanical monitoring and clinical settings.
More recently, Paradiso’s research has extended beyond humans. Through collaborations with National Geographic Explorers, his team has deployed sensors in remote environments to study animal behavior, including low-power compact wearable devices that track animals (currently lions and hyenas in Botswana and goats in Chile) while sensing the environmental conditions around them, and acoustic sensors with onboard AI to detect and monitor populations of endangered honeybees in Patagonia. This work provides new ways to understand how ecosystems function and how the planet is changing.
Paradiso was named an IEEE Fellow in January, recognizing his achievement in wireless wearable sensing and mobile energy harvesting. This is the highest grade of membership in IEEE, the world’s leading professional association dedicated to advancing technology for the benefit of humanity.
Across art, health, and the natural world, Paradiso’s work reflects how foundational research at MIT can seed technologies that ripple outward over time, shaping new applications and opening new fields. As advances in wearable technologies drive the rush toward the ever-more-connected human, a persistent existential question lurks.
“Where do I stop, versus others begin?” Paradiso asks.
For him, the aim is not novelty for its own sake, but amplification: using technology to help people become more perceptive, better connected, and more aware of their place in a larger system.
Understanding how “marine snow” acts as a carbon sink

A new study finds hitchhiking bacteria dissolve essential ballast in ubiquitous “snow” particles, which could counteract the ocean’s ability to sequester carbon.

In some parts of the deep ocean, it can look like it’s snowing. This “marine snow” is the dust and detritus that organisms slough off as they die and decompose. Marine snow can fall several kilometers to the deepest parts of the ocean, where the particles are buried in the seafloor for millennia.
Now, researchers at MIT and their collaborators have found that as marine snow falls, tiny hitchhikers may limit how deep the particles can sink before dissolving away. The team shows that when bacteria hitch a ride on marine snow particles, the microbes can eat away at calcium carbonate, which is an essential ballast that helps particles sink.
The findings, which appear this week in the Proceedings of the National Academy of Sciences, could explain how calcium carbonate dissolves in shallow layers of the ocean, where scientists had assumed it should remain intact. The results could also change scientists’ understanding of how quickly the ocean can sequester carbon from the atmosphere.
Marine snow is a main vehicle by which the ocean stores carbon. At the ocean’s surface, phytoplankton absorb carbon dioxide from the atmosphere and convert the gas into other forms of carbon, including calcium carbonate — the same stuff that’s found in shells and corals. When they die, bits of phytoplankton drift down through the ocean as marine snow, carrying the carbon with them. If the particles make it to the deep ocean, the carbon they carry can be buried and locked away for hundreds to thousands of years.
But the new study suggests bacteria may be working against the ocean’s ability to sequester carbon. By eroding the particles’ calcium carbonate, bacteria can significantly slow the sinking of marine snow. The more they linger, the more likely the particles are to be respired quickly, releasing carbon dioxide into the shallow ocean, and possibly back into the atmosphere.
“What we’ve shown is that carbon may not sink as deep or as fast as one may expect,” says study co-author Andrew Babbin, an associate professor in the Department of Earth, Atmospheric and Planetary Sciences and a mission director at the Climate Project at MIT. “As humanity tries to design our way out of the problem of having so much CO2 in the atmosphere, we have to take into account these natural microbial mechanisms and feedbacks.”
The study’s primary author is Benedict Borer, a former MIT postdoc who is now an assistant professor of marine and coastal sciences at the Rutgers School of Environmental and Biological Sciences; co-authors include Adam Subhas and Matthew Hayden at the Woods Hole Oceanographic Institution and Ryan Woosley, a principal research scientist at MIT’s Center for Sustainability Science and Strategy.
Losing weight
Marine snow acts as the ocean’s main “biological pump,” the process by which the ocean pulls carbon from the surface down into the deep ocean. Scientists estimate that marine snow is responsible for drawing down billions of tons of carbon each year. Marine snow’s ability to sink comes mainly from minerals such as calcium carbonate embedded within the particles. The mineral is a dense ballast that weighs down the particle. The more calcium carbonate a particle has, the faster it sinks.
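To get a feel for why this ballast matters, consider a back-of-the-envelope Stokes’ law estimate. This is our illustration with rough textbook values, not a calculation from the study: the settling speed of a small sphere scales with its density contrast against seawater, so adding dense calcite to nearly neutrally buoyant organic matter speeds descent dramatically.

```python
# Back-of-the-envelope illustration (rough textbook values, not numbers from
# the study): Stokes' law for a small sphere shows how calcium carbonate
# ballast raises the density contrast with seawater, and hence sinking speed.
G = 9.81              # gravitational acceleration, m/s^2
MU = 1.0e-3           # approximate seawater viscosity, Pa*s
RHO_WATER = 1025.0    # seawater density, kg/m^3
RHO_ORGANIC = 1060.0  # nearly neutrally buoyant organic matter, kg/m^3
RHO_CALCITE = 2710.0  # dense calcium carbonate ballast, kg/m^3

def sinking_speed(radius_m, calcite_vol_fraction):
    """Stokes settling velocity for a sphere whose density is a volume-
    weighted mix of organic matter and calcite."""
    rho_p = ((1 - calcite_vol_fraction) * RHO_ORGANIC
             + calcite_vol_fraction * RHO_CALCITE)
    return (2 / 9) * (rho_p - RHO_WATER) * G * radius_m**2 / MU

for frac in (0.0, 0.05, 0.20):
    v = sinking_speed(50e-6, frac)   # a 50-micron-radius particle
    print(f"{frac:4.0%} calcite -> {v * 86400:6.1f} m/day")
```

Real marine snow is porous and irregular, so the absolute numbers should not be taken literally; the point is the steep dependence on ballast, which is why dissolving even a modest amount of calcium carbonate can appreciably slow a particle.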
Scientists had assumed based on thermodynamics that calcium carbonate should not dissolve within the ocean’s upper layers, given the general temperature and pH conditions in the surface ocean. Any calcium carbonate that is bound up in marine snow should then safely sink to depths greater than 1,000 meters without dissolving along the way.
But oceanographers have long observed signs of dissolved calcium carbonate in the upper layers of the ocean, suggesting that something other than the ocean’s macroscale conditions was dissolving the mineral and slowing down the ocean’s biological pump.
And indeed, the MIT team has found that what is dissolving calcium carbonate in shallow waters is a microscale process that occurs within the immediate environment of an individual particle.
“Most oceanographers think about the macroscale, and in this instance what’s happening in microscopic particles is what is actually controlling bulk seawater chemistry,” Borer says. “Consequences abound for the ocean’s carbon dioxide sequestration capacity.”
A sinking sweet spot
In their new study, the researchers set up an experiment to simulate a sinking particle of marine snow and its interactions at the microscale. The team synthesized particles similar to marine snow that they made from varying concentrations of calcium carbonate and bacteria — organisms that are often found feasting on the particles in the ocean.
“The ocean is a fairly dilute medium with respect to organic matter,” Babbin says. “So organisms like bacteria have to search for food. And particles of marine snow are like cheeseburgers for bacteria.”
The team designed a small microfluidic chip to contain the particles, and flowed seawater through the chip at various rates to simulate different sinking speeds in the ocean. Their experiments revealed that whenever particles hosted any bacteria, they also rapidly lost some calcium carbonate, which dissolved into the surrounding seawater. As bacteria feed on the particles’ organic material, the microbes excrete acidic waste products that act to dissolve the particles’ inorganic, ballasting calcium carbonate.
The researchers also found that the amount of calcium carbonate that dissolves depends on how fast the particles sink. They flowed seawater around the particles at slow, intermediate, and fast speeds and found that both slow and fast sinking limit the amount of calcium carbonate that’s dissolved. With slow sinking, particles don’t receive as much oxygen from their surroundings, which essentially suffocates any hitchhiking bacteria. When particles sink quickly, bacteria may be sufficiently oxygenated, but any waste products that they produce can be easily flushed away before they can dissolve the particles’ calcium carbonate.
At intermediate speeds, there is a sweet spot: Bacteria are sufficiently oxygenated and can also build up enough waste, enabling the microbes to efficiently dissolve calcium carbonate.
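One way to see why a sweet spot emerges is with a toy model of the two competing effects. This caricature is ours, not the study’s analysis: oxygen supply to the bacteria rises with flow speed while retention of their acidic waste falls, and dissolution requires both.

```python
# Toy caricature of the trade-off (ours, not the study's analysis): dissolution
# needs both oxygen, which increases with flow speed, and retained acidic
# waste, which decreases with flow speed. Their product peaks in between.
def oxygen_supply(v, v_half=20.0):
    return v / (v + v_half)            # saturating rise with speed

def waste_retention(v, v_flush=40.0):
    return v_flush / (v + v_flush)     # flushed away at high speed

for v in (1, 5, 10, 20, 40, 80, 160):  # sinking speed, arbitrary units
    rate = oxygen_supply(v) * waste_retention(v)
    print(f"speed {v:3d}: relative dissolution {rate:.2f}")
# The output rises, peaks at intermediate speeds, and falls again,
# mirroring the experimental sweet spot.
```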
Overall, the work shows that bacteria can have a significant effect on marine snow’s ability to sink and sequester carbon in the deep ocean. Bacteria can be found everywhere, and particularly in the shallower ocean regions. Even if macroscale conditions in these upper layers should not dissolve calcium carbonate, the study finds bacteria working at the microscale most likely do.
The findings could explain oceanographers’ observations of dissolved calcium carbonate in shallow ocean regions. They also illustrate that bacteria and other microbes may be working against the ocean’s natural ability to sequester carbon, by dissolving marine snow’s ballast and slowing its descent into the deep ocean. As humans consider climate solutions that involve enhancing the ocean’s biological pump, the researchers emphasize that bacteria’s role must be taken into account.
“Insights from this work are vital to predict how ecosystems will respond to marine carbon dioxide removal attempts, and overall how the oceans will change in response to future climate scenarios,” says Borer, who carried out the study’s experiments as a postdoc at MIT.
This work was supported, in part, by the Simons Foundation, the National Science Foundation, and the Climate Project at MIT.
Neurons receive precisely tailored teaching signals as we learn

New work suggests the brain can deliver neuron-specific feedback during learning — resembling the error signals that drive machine learning.

When we learn a new skill, the brain has to decide — cell by cell — what to change. New research from MIT suggests it can do that with surprising precision, sending targeted feedback to individual neurons so each one can adjust its activity in the right direction.
The finding echoes a key idea from modern artificial intelligence. Many AI systems learn by comparing their output to a target, computing an “error” signal, and using it to fine-tune connections within the network. A long-standing question has been whether the brain also uses that kind of individualized feedback. In an open-access study published in the Feb. 25 issue of the journal Nature, MIT researchers report evidence that it does.
A research team led by Mark Harnett, a McGovern Institute for Brain Research investigator and associate professor in the Department of Brain and Cognitive Sciences at MIT, discovered these instructive signals in mice by training animals to control the activity of specific neurons using a brain-computer interface (BCI). Their approach, the researchers say, can be used to further study the relationships between artificial neural networks and real brains, in ways that are expected to both improve understanding of biological learning and enable better brain-inspired artificial intelligence.
The changing brain
Our brains are constantly changing as we interact with the world, modifying their circuitry as we learn and adapt. “We know a lot from 50 years of studies that there are many ways to change the strength of connections between neurons,” Harnett says. “What the field really lacks is a way of understanding how those changes are orchestrated to actually produce efficient learning.”
Some actions — and the neural connections that enable them — are reinforced with the release of neuromodulators like dopamine or norepinephrine in the brain. But those signals are broadcast to large groups of neurons, without discriminating between cells’ individual contributions to a failure or a success. “Reinforcement learning via neuromodulators works, but it’s inefficient, because all the neurons and all the synapses basically get only one signal,” Harnett says.
Machine learning uses an alternative, and extremely powerful, way to learn from mistakes. Using a method called backpropagation, artificial neural networks compute an error signal and use it to adjust their individual connections. They do this over and over, learning from experience how to fine-tune their networks for success. “It works really well and it’s computationally very effective,” Harnett says.
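To make the contrast with broadcast neuromodulators concrete, here is a minimal backpropagation example, standard textbook material rather than code from the study: a tiny two-layer network in which every individual connection receives its own tailored error signal.

```python
# Minimal backpropagation sketch (standard textbook material, not code from
# the study): unlike a single broadcast reward, every weight receives its own
# error signal. One hidden layer, squared-error loss, plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
x = rng.normal(size=(1, 3))               # one input example
target = np.array([[1.0]])
lr = 0.1

for step in range(200):
    h = np.tanh(x @ W1)                   # hidden activity
    y = h @ W2                            # network output
    err = y - target                      # output error

    # Backpropagation routes a tailored correction to each connection --
    # the artificial analog of a "vectorized instructive signal".
    dW2 = h.T @ err
    dh = (err @ W2.T) * (1 - h**2)        # per-hidden-unit error
    dW1 = x.T @ dh

    W2 -= lr * dW2
    W1 -= lr * dW1

print(y.item())  # converges toward the target of 1.0
```

A single broadcast reward would nudge all weights with the same signal; here, dW1 and dW2 assign each connection its own correction, which is the efficiency a brain would gain if it could do something similar.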
It seemed likely that brains might use similar error signals for learning. But neuroscientists were skeptical that brains would have the precision to send tailored signals to individual neurons, due to the constraints imposed by using living cells and circuits instead of software and equations. A major problem for testing this idea was how to find the signals that provide personalized instructions to neurons, which are called vectorized instructive signals. The challenge, explains Valerio Francioni, first author of the Nature paper and a former postdoc in Harnett’s lab, is that scientists don’t know how individual neurons contribute to specific behaviors.
“If I was recording your brain activity while you were learning to play piano,” Francioni explains, “I would learn that there is a correlation between the changes happening in your brain and you learning piano. But if you asked me to make you a better piano player by manipulating your brain activity, I would not be able to do that, because we don’t know how the activity of individual neurons maps to that ultimate performance.”
Without knowing which neurons need to become more active and which ones should be reined in, it is impossible to look for signals directing those changes.
Understanding neuron function
To get around this problem, Harnett’s team developed a brain-computer interface task to directly link neural activity and reward outcome — akin to linking the keys of the piano directly to the activity of single neurons. To succeed at the task, certain neurons needed to increase their activity, whereas others were required to decrease their activity.
They set up a BCI to directly link activity in those neurons — just eight to 10 of the millions of neurons in a mouse’s brain — to a visual readout, providing sensory feedback to the mice about their performance. Success was accompanied by delivery of a sugary reward.
“Now if you ask me, ‘How does the mouse get more rewards? Which neuron do you have to activate and which neuron do you have to inhibit?’ I know exactly what the answer to that question is,” says Francioni, whose work was supported by a Y. Eva Tan Fellowship from the Yang Tan Collective at MIT.
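As a rough illustration of how such a task ties neural activity to reward, consider the sketch below. The study’s exact readout transform is not described here, so the signed-sum rule, neuron counts, and threshold are our assumptions, chosen only to show the logic.

```python
# Illustrative sketch of a BCI-style readout (the study's exact mapping is not
# given here; the signed-sum rule and the numbers below are our assumptions).
# One group of neurons pushes the readout up, the other pushes it down, and
# reward is delivered when the readout crosses a threshold.
import numpy as np

rng = np.random.default_rng(1)
n_up, n_down = 5, 5            # e.g., 10 BCI-linked neurons in total

def readout(up_activity, down_activity):
    """Signed sum over the two neuron groups."""
    return up_activity.mean() - down_activity.mean()

threshold, rewards, n_trials = 1.0, 0, 1000
for _ in range(n_trials):
    up = rng.normal(loc=0.8, size=n_up)        # group the mouse drives up
    down = rng.normal(loc=-0.3, size=n_down)   # group the mouse keeps quiet
    if readout(up, down) > threshold:
        rewards += 1                           # sugary reward delivered

print(f"reward rate: {rewards / n_trials:.0%}")
```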
The scientists didn’t know the exact function of the particular neurons they linked to the BCI, but the cells were active enough that mice received occasional rewards whenever the signals happened to be right. Within a week, mice learned to switch on the right neurons while leaving the other set of neurons inactive, earning themselves more rewards.
Francioni monitored the target neurons daily during this learning process using a powerful microscope to visualize fluorescent indicators of neural activity. He zeroed in on the neurons’ branching dendrites, where the appropriate feedback signals have long been suspected to arrive. At the same time, he tracked activity in the parent cell bodies of those neurons. The team used these data to examine the relationship between signals received at a neuron’s dendrites and its activity, as well as how these changed when mice were rewarded for activating the right neurons or when they failed at their task.
Vectorized neural signals
They concluded that the two groups of neurons whose activity controlled the BCI in opposite ways also received opposing error signals at their dendrites as the mice learned. Some were told to ramp up their activity during the task, while others were instructed to dial it down. What’s more, when the team manipulated the dendrites to inhibit these instructive signals, mice failed to learn the task. “This is the first biological evidence that vectorized [neuron-specific] signal-based instructive learning is taking place in the cortex,” Harnett says.
The discovery of vectorized signals in the brain — and the team’s ability to find them — should promote more back-and-forth between neuroscientists and machine learning researchers, says postdoc Vincent Tang. “It provides further incentive for the machine learning community to keep developing models and proposing new hypotheses along this direction,” he says. “Then we can come back and test them.”
The researchers say they are just as excited about applying their approach to future experiments as they are about their current discovery.
“Machine learning offers a robust, mathematically tractable way to really study learning. The fact that we can now translate at least some of this directly into the brain is very powerful,” Francioni says.
Harnett says the approach opens new opportunities to investigate possible parallels between the brain and machine learning. “Now we can go after figuring out, how does cortex learn? How do other brain regions learn? How similar or how different is it to this particular algorithm? Can we figure out how to build better, more brain-inspired models from what we learn from the biology?” he says. “This feels like a really big new beginning.”
Studying the genetic basis of disease to explore fundamental biological questions

Eliezer Calo’s studies of craniofacial malformations have yielded insight into protein synthesis and embryonic development.

When Associate Professor Eliezer Calo PhD ’11 was applying for faculty positions, he was drawn to MIT not only because it’s his alma mater, but also because the Department of Biology places high value on exploring fundamental questions in biology.
In his own lab, Calo studies how craniofacial malformations arise. One motivation is to seek new treatments for those conditions, but another is to learn more about fundamental biological processes such as protein synthesis and embryonic development.
“We use genes that are mutated in disease to uncover fundamental biology,” Calo says. “Mutations that happen in disease are an experiment of nature, telling us that those are the important genes, and then we follow them up not only to understand the disease, but to fundamentally understand what the genes are doing.”
Calo’s work has led to new insights into how ribosomes form and how they control protein synthesis, as well as how the nucleolus, the birthplace of ribosomes in eukaryotic cells, has evolved over hundreds of millions of years.
In addition to earning his PhD at MIT, Calo is also an alumnus of MIT’s Summer Research Program (MSRP), which helps to prepare undergraduate students to pursue graduate education. Since starting his lab at MIT, Calo has made a point to serve as a research mentor for the program every summer.
“I feel that it’s important to pay back to the program that helped me realize what I wanted to do,” he says.
A nontraditional path
Growing up in a mountainous region of Puerto Rico, Calo was the first person from his family to finish high school. While attending the University of Puerto Rico at Rio Piedras, the largest university in Puerto Rico, he explored a few different majors before settling on chemistry.
One of Calo’s chemistry professors invited him to work in her lab, where he did a research project studying the pharmacokinetics of cell receptors found on the surface of astrocytes, a type of brain cell.
“It was a good mix of biology and chemistry,” he says. “I think that that was the catalyst to my pursuit of a career in the sciences.”
He learned about MSRP from Mandana Sassanfar, a senior lecturer in biology at MIT and director of outreach for several MIT departments, at an event hosted by the University of Puerto Rico for students interested in careers in science. He was accepted into the program, and during the summer after his junior year, he worked in the lab of Stephen Bell, an MIT professor of biology. That experience, he says, was transformative.
“Without that experience, I would have probably chosen another career,” Calo says. In Puerto Rico, “science was fun, but it was a struggle. We had to make everything from scratch, and then you spend more time making reagents than doing the experiments. When I came to MIT, I was always doing experiments.”
During that time, he realized he liked working in biology labs more than chemistry labs, so when he applied to graduate school, he decided to move into biology. He applied to five schools, including MIT. “Once MIT sent me the acceptance, I just had to say yes. There was no saying no.”
At MIT, Calo thought he might study biochemistry, but he ended up focusing on cancer biology instead, working with Jacqueline Lees, an MIT biology professor, to study the role of the tumor suppressor protein Rb.
After finishing his PhD, Calo felt burnt out and wasn’t sure if he wanted to continue along the academic track. His thesis committee advisors encouraged him to do a postdoc just to try it out, and he ended up going to Stanford University, where he fell in love with California and switched to a new research focus. Working with Joanna Wysocka, a professor of developmental biology at Stanford, he began investigating how development is affected by the regulation of proteins that make up cellular ribosomes — a topic his lab still studies today.
Returning to MIT
When searching for faculty jobs, Calo focused mainly on schools in California, but also sent an application to MIT. As he was deciding between offers from MIT and the University of California at Berkeley, a phone call from Angelika Amon, the late MIT professor of biology, convinced him to take the cross-country leap back to MIT.
“She had me on the phone for more than one hour telling me why I should come to MIT,” he recalls. “And that was so heartwarming that I could not say no.”
Since starting his lab in 2017, Calo has been studying how defects in the production of ribosomes give rise to diseases, in particular craniofacial malformations such as cleft palate.
Ribosomes, the organelles where protein synthesis occurs, consist of two subunits made of about 80 proteins. A longstanding question in biology has been why mutations that affect ribosome formation appear to primarily affect the development of the face, but not the rest of the body.
In a 2018 study, Calo discovered that this is because the mutations that affect ribosomes can have secondary effects that influence craniofacial development. In embryonic cells that form the face, a mutation in a gene called TCOF1 activates p53 at a higher level than in other embryonic cells. High levels of p53 cause some of those cells to undergo programmed cell death, leading to Treacher Collins syndrome, a disorder that produces underdeveloped bones in the jaw and cheek.
His lab has shown that p53 overactivation is also responsible for craniofacial disorders caused by mutations in RNA splicing factors.
Calo’s work on ribosome formation also led him to explore another cell organelle known as the nucleolus, whose role is to help build ribosomes. In 2023, he found that TCOF1, the same gene implicated in craniofacial malformations, is critical for forming the three compartments that make up the nucleolus.
That finding, he says, could help to explain a major evolutionary shift that occurred around 300 million years ago, when the nucleolus transitioned from two to three compartments. This “tripartite” nucleolus is found in all reptiles, birds, and mammals.
“That was quite surprising,” Calo says. “Studying disease-related genes allowed us to understand a very fundamental biological process of how the nucleolus evolved, which has been a question in the field that nobody could figure out the answer for.”
X-raying rocks reveals their carbon-storing capacity

New research by MIT geophysicists could assist efforts to remove carbon from the atmosphere and store it underground.

To avoid the worst effects of climate change, many billions of metric tons of industrially generated carbon dioxide will have to be captured and stored away by the end of this century. One place to store such an enormous amount of greenhouse gas is in the Earth itself. If carbon dioxide were pumped into the cracks and crevices of certain underground rocks, the fluid would react with the rocks and solidify carbon into minerals. In this way, carbon dioxide could potentially be locked in the rocks in stable form for millions of years without escaping back into the atmosphere.
Some pilot projects are already underway to demonstrate such “carbon mineralization.” These efforts have shown promising results in terms of successfully mineralizing a large fraction of injected CO2. However, it’s less clear how the rocks will evolve in response. As carbonate minerals build up, could they clog up cracks and crevices, and ultimately limit the amount of CO2 that can be stored there?
In a new study appearing today in the journal AGU Advances, MIT geophysicists explored this question by injecting fluid into rocks and using X-ray imaging to reveal how the rocks’ pores and cracks changed as the fluid mineralized over time.
Their experiments showed that as fluid was pumped into a rock, the rock’s permeability (the ability of fluid to flow through the rock) dropped sharply. Meanwhile, the rock’s porosity (its total amount of empty space, in the form of pores, cracks, and crevices) remained relatively unchanged.
The researchers found that the minerals were precipitating out of the fluid in the narrower tunnels connecting larger pores, preventing the fluid from flowing into larger pore spaces. Even so, the fluid did keep flowing through the rock, albeit at a lower rate, and minerals continued to form in some cracks and crevices.
“This study gives you information about what the rock does during this complex mineralization process, which could give you ideas of how to engineer it in your favor,” says study co-author Matěj Peč, an associate professor of geophysics at MIT.
“If you were injecting CO2 into the Earth and saw a massive drop in permeability, some operators might think they clogged up the well,” adds co-author Jonathan Simpson, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But as this study shows, in some cases, it might not matter that much. As long as you maintain some flow rate, you could still form minerals and sequester carbon.”
The study’s co-authors include EAPS Research Scientist Hoagy O’Ghaffari as well as Sharath Mahavadi and Jean Elkhoury of the Schlumberger-Doll Research Center.
Drilling down
Basalt is a type of volcanic rock that is found in places such as Hawaii and Iceland. When fresh, it’s highly porous, with many pores, cracks, and fractures running through the rock. The material is also rich in iron, calcium, and magnesium. When these elements come in contact with fluid that is rich in carbon dioxide, they can dissolve and mix with CO2, and eventually form a new carbon-based mineral such as calcite or dolomite.
A project based in Iceland and piloted by the company CarbFix is currently injecting CO2-rich water into the region’s underground basalt to see how much of the gas can be converted and stored as minerals in the rock. The company’s runs have shown that more than 95 percent of the CO2 injected into the ground turns into minerals within two years. The project is proving that the chemistry works: CO2 can be stored as stone.
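As a rough consistency check on those numbers, and assuming first-order kinetics (our assumption, not a claim by CarbFix), a 95 percent conversion within two years implies a mineralization half-life of under six months:

```python
# Rough consistency check (first-order kinetics is our assumption, not a
# claim from CarbFix): 95% conversion in two years implies the rate below.
import math

fraction_remaining = 0.05                       # 95 percent mineralized
t_years = 2.0
k = -math.log(fraction_remaining) / t_years     # ~1.5 per year
half_life_months = 12 * math.log(2) / k         # ~5.6 months
print(f"k = {k:.2f}/yr, half-life = {half_life_months:.1f} months")
```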
But the MIT team wondered how this mineralization process would change the basalt itself and its capacity to store carbon over time.
“Most studies investigating carbon mineralization have focused on optimizing the geochemistry, but we wanted to know how mineralization would affect real reservoir rocks,” Peč says.
Rocky X-rays
The team set out to study how the permeability and porosity of basalt changes as carbonate-rich fluid is pumped into and mineralized throughout the rock.
“Porosity refers to the total amount of open space in the rock, which could be in the form of vesicles, or fractures that connect vesicles, or even areas between sand grains,” Simpson explains. “Because there is so much variability in porosity patterns, there is no one-to-one relationship between porosity and permeability. You could have a lot of pores that are not necessarily connected. So, even if 20 percent of the rock is porous, if they’re not connected, then permeability would be zero.”
“The details of that are important to understand for all these problems of injecting fluids into the subsurface,” Peč emphasizes.
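To make the porosity-permeability distinction concrete, here is a toy sketch in Python (our illustration, not the researchers’ code): two model rocks with identical porosity, one whose pores form connected channels and one whose pores are isolated, where only the first lets fluid cross. The grid sizes, pore patterns, and the connected_porosity_check helper are assumptions made for the example.

import numpy as np
from scipy.ndimage import label

def connected_porosity_check(pores):
    """pores: 2D boolean array, True = open space.
    Returns (porosity, whether any pore cluster spans inlet to outlet)."""
    porosity = pores.mean()
    clusters, _ = label(pores)  # group touching open cells into clusters
    spanning = bool((set(clusters[:, 0]) & set(clusters[:, -1])) - {0})
    return porosity, spanning

# Two model "rocks" with identical porosity (50 percent open space)
channels = np.zeros((10, 10), dtype=bool)
channels[1::2, :] = True         # open rows running from inlet to outlet
checkerboard = np.zeros((10, 10), dtype=bool)
checkerboard[::2, ::2] = True    # isolated pores that never touch
checkerboard[1::2, 1::2] = True

for name, rock in [("channels", channels), ("isolated pores", checkerboard)]:
    phi, flows = connected_porosity_check(rock)
    print(f"{name}: porosity = {phi:.0%}, fluid can cross = {flows}")
# channels: porosity = 50%, fluid can cross = True
# isolated pores: porosity = 50%, fluid can cross = False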
For their experiments, the team used samples of basalt that Peč and others collected during a trip to Iceland in 2023. They placed small samples of basalt in a custom-built holder connected to two tubes, through which they flowed two different fluids; when the fluids mixed within the rock, they quickly formed carbonate minerals. The team chose this combination of fluids in order to speed up the mineralization process.
In the actual process of injecting CO2 into the ground, the CO2 is mixed with water. When this fluid is pumped through rock, it first goes through a “dissolution” phase, drawing elements such as iron, calcium, and magnesium out of the basalt and into the CO2-rich fluid. This dissolution can take some time before mineralization, in which the CO2 combines with the drawn-out elements, can proceed.
The researchers used two different fluids that quickly mineralize when combined, in order to skip over the dissolution phase and efficiently study the effects of mineralization. By running their experiments inside an X-ray CT scanner (similar to the ones used for medical imaging in hospitals), the team could watch the mineralization occurring within the rock at an unprecedented level of detail, taking frequent, high-resolution, three-dimensional snapshots of the basalt over several days to weeks as the fluids flowed through.
Their imaging revealed how the pores, cracks, and crevices in the rock evolved, and filled in with minerals as the fluid flowed through over time. Over multiple experiments, they found that the rock’s permeability quickly dropped within a day, by an order of magnitude. The rock’s porosity, however, decreased at a much slower rate. At the end of the longest-duration experiments, only about 5 percent of the original pore space was filled with new minerals.
“Our findings tell us that the minerals are initially forming in really small microcracks that connect the bigger pore spaces, and clogging up those spaces,” Simpson says. “You don’t need much to clog up the tiny microfractures. But when you do clog them up, that really drops the permeability.”
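A back-of-envelope scaling argument (ours, not from the paper) shows why tiny throats matter so much: for laminar flow through a cylindrical channel, the Hagen-Poiseuille law says flow scales with the fourth power of the radius, so a little mineral growth on the walls of a narrow throat costs a disproportionate amount of flow.

def relative_flow(radius_fraction):
    """Flow through a throat narrowed to radius_fraction of its original
    radius, relative to the original flow (Q proportional to r**4 at a
    fixed pressure drop)."""
    return radius_fraction ** 4

for frac in (1.0, 0.75, 0.5, 0.25):
    print(f"radius at {frac:.0%} -> flow at {relative_flow(frac):.1%}")
# Halving a throat's radius leaves about 6 percent of the flow, a 16x drop,
# even though the lost pore volume is small.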
Even after the initial drop in permeability, however, the team could continue to flow fluid through, and minerals continued to form in tight spaces within the rock. This suggests that even when it seems like an underground reservoir is full, it might still be able to store more carbon.
The researchers also monitored the rock with ultrasonic sensors during each experiment and found that the sensors could track even small changes in the rock’s porosity. The less porous the rock was, meaning the more it was filled in with minerals, the faster sound waves traveled through the material. These results suggest that seismic waves could be a reliable way to monitor the porosity of underground rocks, and ultimately their capacity to store carbon.
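One classic empirical relation (our illustration; the study’s own velocity-porosity calibration is not given here) makes that link quantitative: the Wyllie time-average equation treats a sound wave’s travel time as a porosity-weighted average over the fluid and the solid matrix. The fluid and matrix velocities below are assumed values.

def wyllie_velocity(porosity, v_fluid=1500.0, v_matrix=6000.0):
    """Sound speed in m/s, from 1/v = phi/v_fluid + (1 - phi)/v_matrix."""
    return 1.0 / (porosity / v_fluid + (1.0 - porosity) / v_matrix)

for phi in (0.20, 0.15, 0.10):  # mineralization fills pores, lowering porosity
    print(f"porosity {phi:.2f} -> v = {wyllie_velocity(phi):.0f} m/s")

Lower porosity means faster waves, which is why ultrasonic measurements in the lab, and seismic surveys at field scale, can track how filled-in a reservoir is.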
“Overall, we think that carbon mineralization seems like a promising avenue to permanently store large volumes of CO2,” Peč concludes. “There are plenty of reservoirs and they should be injectable over extended periods of time if our results can be extrapolated.”
This work was supported by MIT’s Advanced Carbon Mineralization Initiative funded by Beth Siegelman SM ’84 and Russ Siegelman ’84, with additional funding from the Chan Zuckerberg Initiative.
New insights into a hidden process that protects cells from harmful mutations
Research reveals how cells may activate a compensation system that can reduce the effects of harmful genetic mutations. This could inform gene therapy development.
Some genetic mutations that are expected to completely stop a gene from working surprisingly cause only mild or even no symptoms. Researchers in previous studies have discovered one reason why: Cells can ramp up the activity of other genes that perform similar functions to make up for the loss of an important gene’s function.
A new study published Feb. 12 in the journal Science by researchers in the lab of Jonathan Weissman, an MIT professor of biology and Whitehead Institute for Biomedical Research member, now reveals insights into how cells can coordinate this compensation response.
Cells are constantly reading instructions stored in DNA. These instructions, called genes, tell them how to make the many proteins that carry out complex processes needed to sustain life. But first, cells need to make a temporary copy of these genetic instructions, called messenger RNA, or mRNA.
As part of normal maintenance, cells routinely break down these temporary messages. This process helps control gene activity — or how much protein is made from a given gene — and ensures that old or unnecessary messages don’t accumulate. Cells also destroy faulty mRNAs that contain errors. These messages, if used, could produce damaged proteins that clump together and interfere with normal cellular processes.
In 2019, studies from other groups suggested that this cleanup could be serving as more than just a quality-control check. Researchers showed that when faulty mRNAs are broken down, the breakdown can signal cells to activate the compensation response. Those studies also suggested that cells decide which backup genes to turn up based on how closely these genes resemble the mRNA that’s being degraded.
But mRNA decay happens in the cytoplasm, outside the nucleus where DNA, and thus genes, are stored. So, Mohamed El-Brolosy, a postdoc in the Weissman Lab and lead author of the study, and colleagues wondered how those two processes in different compartments of the cell could be connected. Understanding this mechanism in greater depth could enable the development of therapeutics that trigger it in a targeted fashion.
The researchers started by investigating a specific gene that scientists know triggers a compensation response when its mRNA is destroyed by causing a closely related gene to become more active. To find out which molecules within the cell aid this process, the researchers systematically switched other genes off, one at a time.
That’s when they found a protein called ILF3. When the gene encoding this protein was turned off, cells could no longer ramp up the activity of the backup gene following mRNA decay.
Upon further investigation, the researchers identified small RNA fragments — left behind when faulty mRNAs are destroyed — underlying this response. These fragments contain a special sequence that acts like an “address.” The team proposed that this address guides ILF3 to related backup genes that share the same sequence as the faulty mRNA.
In fact, when they introduced mutations in this sequence, the cells’ compensation response dropped, suggesting that the system relies on precise sequence matching to target the correct backup genes.
“That was very exciting for us,” says Weissman, who is also an investigator at the Howard Hughes Medical Institute. “It showed us that this isn’t a generic stress response. It’s a regulated system.”
The researchers’ findings point toward new therapeutic possibilities, where boosting the activity of a related gene could mitigate symptoms of certain genetic diseases. More broadly, their work characterizes a mysterious layer of gene regulation.
When the densest objects in the universe collide and merge, the violence sets off ripples, in the form of gravitational waves, that reverberate across space and time, over hundreds of millions and even billions of years. By the time they pass through Earth, such cosmic ripples are barely discernible.
And yet, scientists are able to detect them, thanks to a global network of gravitational-wave observatories: the U.S.-based National Science Foundation Laser Interferometer Gravitational-Wave Observatory (NSF LIGO), the Virgo interferometer in Italy, and the Kamioka Gravitational Wave Detector (KAGRA) in Japan. Together, the observatories “listen” for faint wobbles in the gravitational field that could have come from far-off astrophysical smash-ups.
Now the LIGO-Virgo-KAGRA (LVK) Collaboration is publishing its latest compilation of gravitational-wave detections, presented in a forthcoming special issue of Astrophysical Journal Letters. From the findings, it appears that the universe is echoing all over with a kaleidoscope of cosmic collisions.
The LVK’s Gravitational-Wave Transient Catalog-4.0 (GWTC-4) comprises detections of gravitational waves from a portion of the observatories’ fourth and most recent observing run, which occurred between May 2023 and January 2024. During this nine-month period, the observatories detected 128 new gravitational-wave “candidates,” meaning that the signals are likely from extreme, far-off astrophysical sources. (The LVK has detected about 300 mergers so far in the fourth run, but not all of these appear yet in the catalog.)
This newest crop more than doubles the size of the gravitational-wave catalog, which previously contained 90 candidates compiled from all three previous observing runs.
“The beautiful science that we are able to do with this catalog is enabled by significant improvements in the sensitivity of the gravitational-wave detectors as well as more powerful analysis techniques,” says LVK member Nergis Mavalvala, who is dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics.
“In the past decade, gravitational wave astronomy has progressed from the first detection to the observation of hundreds of black hole mergers,” says Stephen Fairhurst, a professor at Cardiff University and LIGO Scientific Collaboration spokesperson. “These observations enable us to better understand how black holes form from the collapse of massive stars, probe the cosmological evolution of the universe and provide increasingly rigorous confirmations of the theory of general relativity.”
“Pushing the edges”
Black holes are created when all the matter in a dying star collapses into a single point, making them among the densest objects in the universe. They often form in pairs, bound together by mutual gravitational attraction. As the two spiral toward each other, they emit enormous amounts of energy in the form of gravitational waves, before merging into a single, more massive black hole.
A binary black hole was the source of the very first gravitational-wave detection, made by NSF’s LIGO observatories in 2015, and colliding black holes are the source of many of the gravitational waves detected since then. Such “bread-and-butter” binaries typically consist of two black holes of similar size (usually several tens of times more massive than the sun) that merge into one larger black hole.
Gravitational waves can also be produced by the collision of a black hole with a neutron star, which is an extremely dense remnant core of a massive star. While the collision of two black holes only produces gravitational waves, a smash-up involving a neutron star can also generate light, which provides more information about the event that scientists can probe. In its first three observing runs, the LVK observatories detected signals from a handful of collisions involving a black hole and neutron star, as well as two collisions between two neutron stars.
The newest detections published today reveal a greater variety of binaries that produce gravitational waves. In addition to many bread-and-butter black hole binaries, the updated catalog includes the heaviest black hole binary detected to date; a binary with black holes of asymmetric, lopsided masses; and a binary in which both black holes have exceptionally high spins. The catalog also holds two black hole-neutron star binaries.
“The message from this catalog is: We are expanding into new parts of what we call ‘parameter space’ and a whole new variety of black holes,” says co-author Daniel Williams, a research fellow at the University of Glasgow and a member of the LVK. “We are really pushing the edges, and are seeing things that are more massive, spinning faster, and are more astrophysically interesting and unusual.”
Unusual signals
The LIGO, Virgo, and KAGRA observatories detect gravitational waves using L-shaped, kilometer-scale instruments, called interferometers. Scientists send laser light down the length of each tunnel and precisely measure the time it takes each beam to return to its source. Any slight difference in their timing can mean that a gravitational wave passed through and minutely wobbled the laser’s light.
For the first segment of the LVK’s fourth observing run, gravitational-wave detections were made using only LIGO’s identical interferometers — one located in Hanford, Washington, and the other in Livingston, Louisiana. Recent upgrades to LIGO’s detectors enabled them to search for signals from binary neutron stars as far out as 360 megaparsecs, or about 1 billion light-years away, and for signals from binaries including black holes tens of times farther away.
“You can’t ever predict when a gravitational wave is going to come into your detector,” says co-author and LVK member Amanda Baylor, a graduate student at the University of Wisconsin at Milwaukee who was involved in the signal search process. “We could have five detections in one day, or one detection every 20 days. The universe is just so random.”
Among the more unusual signals that LIGO detected in the first phase of the O4 observing run was GW231123_135430, which is the heaviest black hole binary detected to date. Scientists estimate that the signal arose from the collision of two heavier-than-normal black holes, each roughly 130 times as massive as the sun. (Most of the detected merging black holes are around 30 solar masses.) The much heavier black holes of GW231123_135430 suggest that each may be a product of a prior collision of lighter “progenitor” black holes.
Another standout is GW231028_153006, a black hole binary with the highest inspiral spin observed to date, meaning that both black holes appear to be spinning very fast, at about 40 percent the speed of light. Scientists suspect that these black holes, too, were products of previous mergers that spun them up as they formed from smaller, inspiraling black holes.
The O4 run also detected GW231118_005626 — an unusually lopsided pair, with one black hole twice as massive as the other.
“One of the striking things about our collection of black holes is their broad range of properties,” says co-author LVK member Jack Heinzel, an MIT graduate student who contributed to the catalog’s analysis. “Some of them are over 100 times the mass of our sun, others are as small as only a few times the mass of the sun. Some black holes are rapidly spinning, others have no measurable spin. We still don’t completely understand how black holes form in the universe, but our observations offer a crucial insight into these questions.”
Cosmic connections
From the newest gravitational-wave detections, scientists have begun to make connections about the properties of black holes as a population.
“For instance, this dataset has increased our belief that black holes that collided earlier in the history of the universe could more easily have had larger spins than the ones that collided later,” says LVK member Salvatore Vitale, associate professor of physics at MIT and member of the MIT LIGO Lab.
This idea raises interesting questions about what sort of conditions could have spun up black holes in the early universe.
The new detections have also allowed scientists to test Albert Einstein’s general theory of relativity, which describes gravity as a geometric property of space and time.
“Black holes are one of the most iconic and mind-bending predictions of general relativity,” says co-author and LVK member Aaron Zimmerman, associate professor of physics at the University of Texas at Austin, adding that when black holes collide, they “shake up space and time more intensely than almost any other process we can imagine observing. When testing our physical theories, it’s good to look at the most extreme situations we can, since this is where our theories are most likely to break down, and where we have the best chance of discovery.”
Scientists put Einstein’s theory to the test using GW230814_230901, which is one of the “loudest” gravitational-wave signals observed to date. The surprisingly clear signal gave scientists a chance to probe it in detail, to see if any aspects of the signal might deviate from what Einstein’s theory predicts. This signal pushed the limits of their tests of general relativity, passing most with flying colors but illustrating how environmental noise can challenge others in such an extreme scenario.
“So far, the theory is passing all our tests,” Zimmerman says. “But we’re also learning that we have to make even more accurate predictions to keep up with all the data the universe is giving us.”
The updated catalog is also helping scientists to nail down a key mystery in cosmology: How fast is the universe expanding today? Scientists have tried to answer this by measuring a rate known as the Hubble constant. Various methods, using different astrophysical sources, have given conflicting answers.
Gravitational waves offer an alternative way to measure the Hubble constant, since scientists are able to work out, in relatively straightforward fashion, how far these waves traveled from their source.
“Merging black holes have a really unique property: We can tell how far away they are from Earth just from analyzing their signals,” says co-author and LVK member Rachel Gray, a lecturer at the University of Glasgow who was involved in the cosmological interpretations of the catalog’s data. “So, every merging black hole gives us a measurement of the Hubble constant, and by combining all of the gravitational wave sources together, we can vastly improve how accurate this measurement is.”
By analyzing all the gravitational-wave detections in the LVK’s entire catalog, scientists have come up with a new, independent estimate of the Hubble constant, which suggests the universe is expanding at a rate of 76 kilometers per second per megaparsec (a megaparsec is a distance of about 3.26 million light-years).
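The arithmetic at the heart of this “standard siren” method is the Hubble relation v = H0 × d: the gravitational waveform yields the source’s distance directly, and pairing it with a redshift estimate gives the expansion rate. A minimal sketch, with hypothetical numbers chosen only to land near the LVK estimate:

C_KM_S = 299_792.458  # speed of light in km/s

def hubble_constant(distance_mpc, redshift):
    """H0 = v / d, approximating recession velocity as v = c * z
    (valid for small redshifts)."""
    return C_KM_S * redshift / distance_mpc

# Hypothetical merger: 430 Mpc away, with an inferred redshift z = 0.109
print(f"H0 = {hubble_constant(430.0, 0.109):.0f} km/s/Mpc")  # -> 76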
“It’s still early days for this method, and we expect to significantly improve our precision as we detect more gravitational wave sources,” Gray says.
“Each new gravitational-wave detection allows us to unlock another piece of the universe’s puzzle in ways we couldn’t just a decade ago,” says Lucy Thomas, who led part of the catalog’s analysis, and is a postdoc in the Caltech LIGO Lab. “It’s incredibly exciting to think about what astrophysical mysteries and surprises we can uncover with future observing runs.”
W.M. Keck Foundation to support research on healthy aging at MIT
Assistant Professor Alison Ringel will investigate the intersection of immunology and aging biology, aiming to define mechanisms that underlie aging-related decline, thanks to a grant from the foundation.
A prestigious grant from the W.M. Keck Foundation to Alison E. Ringel, an MIT assistant professor of biology, will support groundbreaking healthy aging research at the Institute.
Ringel, who is also a core member of the Ragon Institute of Mass General Brigham, MIT, and Harvard, will draw on her background in cancer immunology to create a more comprehensive biomedical understanding of the cause and possible treatments for aging-related decline.
“It is such an honor to receive this grant,” Ringel says. “This support will enable us to draw new connections between immunology and aging biology. As the U.S. population grows older, advancing this research is increasingly important, and this line of inquiry is only possible because of the W.M. Keck Foundation.”
Understanding how to extend healthy years of life is a fundamental question of biomedical research with wide-ranging societal implications. Although modern science and medicine have greatly expanded global life expectancy, it remains unclear why everyone ages differently; some maintain physical and cognitive fitness well into old age, while others become debilitatingly frail later in life.
Our immune systems are adaptable, but they do naturally decline as we get older. One critical component of our immune system is CD8+ T cells, which are known to target and destroy cancerous or damaged cells. As we age, our tissues accumulate cells that can no longer divide. These senescent cells are present throughout our lives, but reach seemingly harmful levels as a normal part of aging, causing tissue damage and diminished resilience under stress.
There is now compelling evidence that the immune system plays a more active role in aging than previously thought.
“Decades of research have revealed that T cells can eliminate cancer cells, and studies of how they do so have led directly to the development of cancer immunotherapy,” Ringel says. “Building on these discoveries, we can now ask what roles T cells play in normal aging, where the accumulation of senescent cells, which are remarkably similar to cancer cells in some respects, may cause health problems later in life.”
In animal models, reconstituting elements of a young immune system has been shown to improve age-related decline, potentially because CD8+ T cells selectively eliminate senescent cells. A progressive loss of the T cells’ ability to cull senescent cells could, in turn, explain some age-related pathology.
Ringel aims to build models for the express purpose of tracking and manipulating T cells in the context of aging and to evaluate how T cell behavior changes over a lifespan.
“By defining the protective processes that slow aging when we are young and healthy, and defining how these go awry in older adults, our goal is to generate knowledge that can be applied to extend healthy years of life,” Ringel says. “I’m really excited about where this research can take us.”
The W.M. Keck Foundation was established in 1954 in Los Angeles by William Myron Keck, founder of The Superior Oil Co. One of the nation’s largest philanthropic organizations, the W.M. Keck Foundation supports outstanding science, engineering, and medical research. The foundation also supports undergraduate education and maintains a program within Southern California to support arts and culture, education, health, and community service projects.
Study reveals climatic fingerprints of wildfires and volcanic eruptions
In research that could help elucidate humans’ role in global warming, scientists showed how three major natural events impacted global atmospheric temperatures.
Volcanoes and wildfires can inject millions of tons of gases and aerosol particles into the air, affecting temperatures on a global scale. But picking out the specific impact of individual events against a background of many contributing factors is like listening for one person’s voice from across a crowded concourse.
MIT scientists now have a way to quiet the noise and identify the specific signal of wildfires and volcanic eruptions, including their effects on Earth’s global atmospheric temperatures.
In a study appearing this week in the Proceedings of the National Academy of Sciences, the researchers report that they detected statistically significant changes in global atmospheric temperatures in response to three major natural events: the eruption of Mount Pinatubo in 1991, the Australian wildfires in 2019-2020, and the eruption of the underwater volcano Hunga Tonga in the South Pacific in 2022.
While the specifics of each event differed, all three events appeared to significantly affect temperatures in the stratosphere. The stratosphere lies above the troposphere, which is the lowest layer of the atmosphere, closest to the surface, where global warming has accelerated in recent years. In the new study, Pinatubo showed the classic pattern of stratospheric warming paired with tropospheric cooling. The Australian wildfires and the Hunga Tonga eruption produced significant stratospheric warming and cooling, respectively, but neither produced a robust, globally detectable tropospheric signal over the first two years following the event. This new understanding will help scientists further pin down the effect of human-related emissions on global temperature change.
“Understanding the climate responses to natural forcings is essential for us to interpret anthropogenic climate change,” says study author Yaowei Li, a former postdoc and currently a visiting scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Unlike the global tropospheric and surface cooling caused by Pinatubo, our results also indicate that the Australian wildfires and Hunga Tonga eruption may not have played a role in the acceleration of global surface warming in recent years. So, there must be some other factors.”
The study’s co-authors include Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry at MIT, along with Benjamin Santer of the University of East Anglia, David Thompson of the University of East Anglia and Colorado State University, and Qiang Fu of the University of Washington.
Extraordinary events
The past several years have set back-to-back records for global average surface temperatures. The World Meteorological Organization recently confirmed that the years 2023 to 2025 were the three warmest years on record, while the past 11 years have been the 11 warmest years ever recorded. The world is warming, due mainly to human activities that have emitted huge amounts of greenhouse gases into the atmosphere over centuries.
In addition to greenhouse gases, the atmosphere has been on the receiving end of other large-scale emissions, including sulfur gases and water vapor from volcanic eruptions and smoke particles from wildfires. Li and his colleagues have wondered whether such natural events could have any global impact on temperatures, and whether such an effect would be detectable.
“These events are extraordinary and very unique in terms of the different materials they inject into different altitudes,” Li says. “So we asked the question: Do these events actually perturb the global temperature to a degree that could be identifiable from natural, meteorological noise, and could they contribute to some of the exceptional global surface warming we’ve seen in the last few years?”
In particular, the team looked for signals of global temperature change in response to three large-scale natural events. The Pinatubo eruption resulted in around 20 million tons of volcanic aerosols in the stratosphere, which was the largest volume ever recorded by modern satellite instruments. The Australian fires injected around 1 million tons of smoke particles into the upper troposphere and stratosphere. And the Hunga Tonga eruption produced the largest atmospheric explosion on satellite record, launching nearly 150 million tons of water vapor into the stratosphere.
If any natural event could measurably shift global temperatures, the team reasoned, it would be one of these three.
Natural signals
For their new study, the team took a signal-to-noise approach. They looked to minimize “noise” from other known influences on global temperatures in order to isolate the “signal”: a change in temperature associated specifically with one of the three natural events.
To do so, they looked first through satellite measurements taken by the Stratospheric Sounding Unit (SSU) and the Microwave and Advanced Microwave Sounding Units (MSU), which have been measuring global temperatures at different altitudes throughout the atmosphere since 1979. The team compiled SSU and MSU measurements from 1986 to the present day. From these measurements, the researchers could see long-term trends of steady tropospheric warming and stratospheric cooling. Those long-term trends are largely associated with anthropogenic greenhouse gases, which the team subtracted from the dataset.
What was left over was more of a level baseline, which still contained some confounding noise, in the form of natural variability. Global temperature changes can also be affected by phenomena such as El Niño and La Niña, which naturally warm and cool the Earth every few years. The sun also swings global temperatures on a roughly 11-year cycle. The team took this natural variability into account, and subtracted out the effects of these influences.
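Schematically, this subtraction step can be pictured as a regression: fit the temperature record against the known influences and keep the residual, in which event-driven changes stand out. The sketch below is our illustration of that general idea, not the study’s code, and the synthetic data are for demonstration only.

import numpy as np

def residual_signal(temps, time, enso_index, solar_index):
    """Least-squares fit of temperatures onto known covariates; the residual
    is the part of the record left to attribute to events like eruptions."""
    design = np.column_stack([
        np.ones_like(time),  # baseline
        time,                # long-term (greenhouse-gas) trend
        enso_index,          # El Nino / La Nina variability
        solar_index,         # roughly 11-year solar cycle
    ])
    coeffs, *_ = np.linalg.lstsq(design, temps, rcond=None)
    return temps - design @ coeffs

# Synthetic 40-year monthly record: trend + cycles + an abrupt two-year
# cooling "event" of 0.7 degrees starting at year 20
t = np.arange(480) / 12.0
enso = np.sin(2 * np.pi * t / 4.0)
solar = np.sin(2 * np.pi * t / 11.0)
temps = 0.02 * t + 0.1 * enso + 0.05 * solar - 0.7 * ((t > 20) & (t < 22))
residual = residual_signal(temps, t, enso, solar)
print(f"residual during the event: {residual[252]:.2f} degrees")  # near -0.7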
After minimizing such noise from their dataset, the team reasoned that whatever temperature changes remained could be more easily traced to the three large-scale natural events and quantified. And indeed, when they pinned the events to the temperature measurements, at the times that they occurred, they could plainly see how each event influenced temperatures around the world.
The team found that Pinatubo decreased global tropospheric temperatures by up to about 0.7 degree Celsius, for more than two years following the eruption. The volcanic sulfate aerosols essentially acted as many tiny reflectors, cooling the troposphere and surface by scattering sunlight back into space. At the same time, the aerosols, which remained in the stratosphere, also absorbed heat that was emitted from the surface, subsequently warming the stratosphere.
This finding agreed with many other studies of the event, confirming that the team’s approach was sound. They then applied the same method to the 2019-2020 Australian wildfires and the 2022 underwater eruption, events whose influence on global temperatures is less clear.
For the Australian wildfires, they found that the smoke particles caused the global stratosphere to warm up, by up to about 0.77 degree Celsius, which persisted for about five months but did not produce a clear global tropospheric signal.
“In the end we found that the wildfire smoke caused a very strong warming in the stratosphere, because these materials are very different chemically from sulfate,” Li explains. “They are particles that are dark colored, meaning they are efficient at absorbing solar radiation. So, a relatively small amount of smoke particles can cause a dramatic warming.”
In the case of Hunga Tonga, the underwater eruption triggered a global cooling effect in the middle-to-upper stratosphere of up to about half a degree Celsius, lasting for several years.
“The Australian fires and the Hunga Tonga really packed a punch at stratospheric altitudes, and this study shows for the first time how to quantify how strong that punch was,” says Solomon. “I find their impact up high quite remarkable, but the ongoing issue is why the last several years have been so warm lower down, in the troposphere — ruling out those natural events points even more strongly at human influences.”
3 Questions: Exploring the mechanisms underlying changes during infection
Zuri Sullivan, a new assistant professor of biology and Whitehead Institute member, studies why we get sick, and whether aspects of illness, such as disrupted appetite, contribute to host defense.
With respiratory illness season in full swing, a bad night’s sleep, sore throat, and desire to cancel dinner plans could all be considered hallmark symptoms of the flu, Covid-19, or other illnesses. Although everyone has, at some point, experienced illness and these stereotypical symptoms, the mechanisms that generate them are not well understood.
Zuri Sullivan, a new assistant professor in the MIT Department of Biology and core member of the Whitehead Institute for Biomedical Research, works at the interface of neuroscience, microbiology, physiology, and immunology to study the biological workings underlying illness. In this interview, she describes her work on immunity thus far as well as research avenues — and professional collaborations — she’s excited to explore at MIT.
Q: What is immunity, and why do we get sick in the first place?
A: We can think of immunity in two ways: the antimicrobial programs that defend against a pathogen directly, and sickness, the altered organismal state that happens when we get an infection.
Sickness itself arises from brain-immune system interaction. The immune system is talking to the brain, and then the brain has a system-wide impact on host defense via its ability to have top-down control of physiologic systems and behavior. People might assume that sickness is an unintended consequence of infection, that it happens because your immune system is active, but we hypothesize that it’s likely an adaptive process that contributes to host defense.
If we consider sickness as immunity at the organismal scale, I think of my work as bridging the dynamic immunological processes that occur at the cellular scale, the tissue scale, and the organismal scale. I’m interested in the molecular and cellular mechanisms by which the immune system communicates with the brain to generate changes in behavior and physiology, such as fever, loss of appetite, and changes in social interaction.
Q: What sickness behaviors fascinate you?
A: During my thesis work at Yale University, I studied how the gut processes different nutrients and the role of the immune system in regulating gut homeostasis in response to different kinds of food. I’m especially interested in the interaction between food, the immune system, and the brain. One of the things I’m most excited about is the reduction in appetite, or changes in food choice, because we have what I would consider pretty strong evidence that these may be adaptive.
Sleep is another area we’re interested in exploring. From their own subjective experience, everyone knows that sleep is often altered during infection.
I also don’t just want to examine snapshots in time. I want to characterize changes over the course of an infection. There’s probably going to be individual variability, which I think may be in part because pathogens are also changing over the course of an illness — we’re studying two different biological systems interacting with each other.
Q: What sorts of expertise are you hoping to recruit to your lab, and what collaborations are you excited about pursuing?
A: I really want to bring together different areas of biology to think about organism-wide questions. The thing that’s most important to me is people who are creative — I’d rather trainees come in with an interesting idea than a perfectly formed question within the bounds of what we already believe to be true. I’m also interested in people who would complement my expertise; I’m fascinated by microbiology, but I don’t have any formal training.
The Whitehead Institute is really invested in interdisciplinary work, and there’s a natural synergy between my work and the other labs in this small community at the Whitehead Institute.
I’ve been collaborating with Sebastian Lourido’s lab for a few years, looking at how Toxoplasma gondii influences social behavior, and I’m excited to invest more time in that project. I’m also interested in molecular neuroscience, which is a focus of Siniša Hrvatin’s lab. That lab is interested in the hypothalamus, and trying to understand the mechanisms that generate torpor. My work also focuses on the hypothalamus because it regulates homeostatic behaviors that change during sickness, such as appetite, sleep, social behavior, and body temperature.
By studying different sickness states generated by different kinds of pathogens — parasites, viruses, bacteria — we can ask really interesting questions about how and why we get sick.
Fragile X study uncovers brain wave biomarker bridging humans and mice
Researchers find mice modeling the autism spectrum disorder fragile X syndrome exhibit the same pattern of differences in low-frequency waves as humans — a new marker for treatment studies.
Numerous potential treatments for neurological conditions, including autism spectrum disorders, have worked well in mice but then disappointed in humans. What would help is a non-invasive, objective readout of treatment efficacy that is shared in both species.
In a new open-access study in Nature Communications, a team of MIT researchers, backed by collaborators across the United States and in the United Kingdom, identifies such a biomarker in fragile X syndrome, the most common inherited form of autism.
Led by postdoc Sara Kornfeld-Sylla and Picower Professor Mark Bear, the team measured the brain waves of human boys and men, with or without fragile X syndrome, and comparably aged male mice, with or without the genetic alteration that models the disorder. The novel approach Kornfeld-Sylla used for analysis enabled her to uncover specific and robust patterns of differences in low-frequency brain waves between typical and fragile X brains shared between species at each age range. In further experiments, the researchers related the brain waves to specific inhibitory neural activity in the mice and showed that the biomarker was able to indicate the effects of even single doses of a candidate treatment for fragile X called arbaclofen, which enhances inhibition in the brain.
Both Kornfeld-Sylla and Bear praised and thanked colleagues at Boston Children’s Hospital, the Phelan-McDermid Syndrome Foundation, Cincinnati Children’s Hospital, the University of Oklahoma, and King’s College London for gathering and sharing data for the study.
“This research weaves together these different datasets and finds the connection between the brain wave activity that’s happening in fragile X humans that is different from typically developed humans, and in the fragile X mouse model that is different than the ‘wild-type’ mice,” says Kornfeld-Sylla, who earned her PhD in Bear’s lab in 2024 and continued the research as a FRAXA postdoc. “The cross-species connection and the collaboration really makes this paper exciting.”
Bear, a faculty member in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT, says having a way to directly compare brain waves can advance treatment studies.
“Because that is something we can measure in mice and humans minimally invasively, you can pose the question: If drug treatment X affects this signature in the mouse, at what dose does that same drug treatment change that same signature in a human?” Bear says. “Then you have a mapping of physiological effects onto measures of behavior. And the mapping can go both ways.”
Peaks and powers
In the study, the researchers measured EEG over the occipital lobe of humans and on the surface of the visual cortex of the mice. They measured power across the frequency spectrum, replicating previous reports of altered low-frequency brain waves in adult humans with fragile X and showing for the first time how these disruptions differ in children with fragile X.
To enable comparisons with mice, Kornfeld-Sylla subtracted out background activity to specifically isolate only “periodic” fluctuations in power (i.e., the brain waves) at each frequency. She also disregarded the typical way brain waves are grouped by frequency (into distinct bands with Greek letter designations delta, theta, alpha, beta, and gamma) so that she could simply juxtapose the periodic power spectra of the humans and mice without trying to match them band by band (e.g., trying to compare the mouse “alpha” band to the human one). This turned out to be crucial because the significant, similar patterns exhibited by the mice actually occurred in a different low-frequency band than in the humans (theta vs. alpha). Both species also had alterations in higher-frequency bands in fragile X, but Kornfeld-Sylla noted that the differences in the low-frequency brainwaves are easier to measure and more reliable in humans, making them a more promising biomarker.
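Conceptually, isolating the periodic power means fitting the aperiodic 1/f background that EEG spectra ride on and subtracting it, leaving only the oscillatory peaks. A minimal sketch of that idea follows; the study’s actual pipeline is more sophisticated, and the synthetic spectrum below is ours.

import numpy as np

def periodic_power(freqs, power):
    """Fit log(power) as a line in log(frequency) and subtract the fitted
    background, leaving the oscillatory peaks that ride on top of it."""
    logf, logp = np.log10(freqs), np.log10(power)
    slope, intercept = np.polyfit(logf, logp, 1)  # line in log-log space
    background = 10 ** (intercept + slope * logf)
    return power - background

# Synthetic spectrum: 1/f background plus a low-frequency (theta-like) peak
freqs = np.linspace(1.0, 50.0, 200)
spectrum = 10.0 / freqs + 0.5 * np.exp(-((freqs - 6.0) ** 2) / 2.0)
peak = freqs[np.argmax(periodic_power(freqs, spectrum))]
print(f"isolated periodic peak near {peak:.1f} Hz")  # close to 6 Hz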
So what patterns constitute the biomarker? In adult men and mice alike, a peak in the power of low-frequency waves is shifted to a significantly slower frequency in fragile X cases compared with neurotypical cases. Meanwhile, in fragile X boys and juvenile mice, the peak is somewhat shifted to a slower frequency, but what is really significant is a reduced power in that same peak.
The researchers were also able to discern that the peak in question is actually made of two distinct subpeaks, and that the lower-frequency subpeak is the one that varies specifically with fragile X syndrome.
Curious about the neural activity underlying the measurements, the researchers engaged in experiments in which they turned off activity of two different kinds of inhibitory neurons that are known to help produce and shape brain wave patterns: somatostatin-expressing and parvalbumin-expressing interneurons. Manipulating the somatostatin neurons specifically affected the lower-frequency subpeak that contained the newly discovered biomarker in fragile X model mice.
Drug testing
Somatostatin interneurons exert their effects on the neurons they connect to via the neurotransmitter chemical GABA, and evidence from prior studies suggests that GABA receptivity is reduced in fragile X syndrome. A therapeutic approach pioneered by Bear and others has been to give the drug arbaclofen, which enhances GABA activity. In the new study, the researchers treated both control and fragile X model mice with arbaclofen to see how it affected the low-frequency biomarker.
Even the lowest administered single dose made a significant difference in the neurotypical mice, which is consistent with those mice having normal GABA responsiveness. Fragile X mice needed a higher dose, but after one was administered, there was a notable increase in the power of the key subpeak, reducing the deficit exhibited by juvenile mice.
The arbaclofen experiments therefore demonstrated that the biomarker provides a significant readout of an underlying pathophysiology of fragile X: the reduced GABA responsiveness. Bear also noted that it helped to identify a dose at which arbaclofen exerted a corrective effect, even though the drug was only administered acutely, rather than chronically. An arbaclofen therapy would, of course, be given over a long time frame, not just once.
“This is a proof of concept that a drug treatment could move this phenotype acutely in a direction that makes it closer to wild-type,” Bear says. “This effort reveals that we have readouts that can be sensitive to drug treatments.”
Meanwhile, Kornfeld-Sylla notes, there is a broad spectrum of brain disorders in which human patients exhibit significant differences in low-frequency (alpha) brain waves compared to neurotypical peers.
“Disruptions akin to the biomarker we found in this fragile X study might prove to be evident in mouse models of those other disorders, too,” she says. “Identifying this biomarker could broadly impact future translational neuroscience research.”
The paper’s other authors are Cigdem Gelegen, Jordan Norris, Francesca Chaloner, Maia Lee, Michael Khela, Maxwell Heinrich, Peter Finnie, Lauren Ethridge, Craig Erickson, Lauren Schmitt, Sam Cooke, and Carol Wilkinson.
The National Institutes of Health, the National Science Foundation, the FRAXA Foundation, the Pierce Family Fragile X Foundation, the Autism Science Foundation, the Thrasher Research Fund, Harvard University, the Simons Foundation, Wellcome, the Biotechnology and Biological Sciences Research Council, and the Freedom Together Foundation provided support for the research.
MIT faculty, alumni named 2026 Sloan Research Fellows
Annual award honors early-career researchers for creativity, innovation, and research accomplishments.
Eight MIT faculty and 22 additional MIT alumni are among 126 early-career researchers honored with 2026 Sloan Research Fellowships by the Alfred P. Sloan Foundation.
The fellowships honor exceptional researchers at U.S. and Canadian educational institutions, whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders. Winners receive a two-year, $75,000 fellowship that can be used flexibly to advance the fellow’s research.
"The Sloan Research Fellows are among the most promising early-career researchers in the U.S. and Canada, already driving meaningful progress in their respective disciplines," says Stacie Bloom, president and chief executive officer of the Alfred P. Sloan Foundation. "We look forward to seeing how these exceptional scholars continue to unlock new scientific advancements, redefine their fields, and foster the well-being and knowledge of all."
Including this year’s recipients, a total of 341 MIT faculty have received Sloan Research Fellowships since the program’s inception in 1955. The MIT recipients are:
Jacopo Borga is interested in probability theory and its connections to combinatorics, and in mathematical physics. He studies various random combinatorial structures — mathematical objects such as graphs or permutations — and their patterns and behavior at a large scale. This research includes random permutons, meanders, multidimensional constrained Brownian motions, Schramm-Loewner evolutions, and Liouville quantum gravity. Borga earned bachelor’s and master’s degrees in mathematics from the Università degli Studi di Padova in Italy, and a master’s degree in mathematics from Université Sorbonne Paris Cité in France, before completing a PhD in mathematics at the Institut für Mathematik at the Universität Zürich in Switzerland. Borga was an assistant professor at Stanford University before joining MIT as an assistant professor of mathematics in 2024.
Anna-Christina Eilers is an astrophysicist and assistant professor at MIT’s Department of Physics as well as a member of the MIT Kavli Institute for Astrophysics and Space Research. Her work explores how black holes form and evolve across cosmic time, studying their origins and the role they play in shaping our universe. She leverages multi-wavelength data from telescopes all around the world and in space to study how the first galaxies, black holes, and quasars emerged during an epoch known as the Cosmic Dawn of our universe. She grew up in Germany and completed her PhD at the Max Planck Institute for Astronomy in Heidelberg. Subsequently, she was awarded a NASA Hubble Fellowship and a Pappalardo Fellowship to continue her research at MIT, where she joined the faculty in 2023. Her work has been recognized with several honors, including the PhD Prize of the International Astronomical Union, the Otto Hahn Medal of the Max Planck Society, and the Ludwig Biermann Prize of the German Astronomical Society.
Linlin Fan is the Samuel A. Goldblith Career Development Assistant Professor of Applied Biology in the Department of Brain and Cognitive Sciences and the Picower Institute for Learning and Memory at MIT. Her lab focuses on the development and application of advanced all-optical physiological techniques to understand the plasticity mechanisms underlying learning and memory. She has developed and applied high-speed, cellular-precision all-optical physiological techniques for simultaneously mapping and controlling membrane potential in specific neurons in behaving mammals. Prior to joining MIT, Fan was a Helen Hay Whitney Postdoctoral Fellow in Karl Deisseroth’s laboratory at Stanford University. She obtained her PhD in chemical biology from Harvard University in 2019 with Adam Cohen. Her work has been recognized by several awards, including the Larry Katz Memorial Lecture Award from the Cold Spring Harbor Laboratory, Helen Hay Whitney Fellowship, Career Award at the Scientific Interface from the Burroughs Wellcome Fund, Klingenstein-Simons Fellowship Award, Searle Scholar Award, and NARSAD Young Investigator Award.
Yoon Kim is an associate professor in the Department of EECS and a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT-IBM Watson AI Lab, where he works on natural language processing and machine learning. Kim earned a PhD in computer science from Harvard University, an MS in data science from New York University, an MA in statistics from Columbia University, and a BA in both math and economics from Cornell University. He joined EECS in 2021, after spending a year as a postdoc at the MIT-IBM Watson AI Lab.
Haihao Lu PhD ’19 is the Cecil and Ida Green Career Development Assistant Professor, and an assistant professor of operations research/statistics at the MIT Sloan School of Management. Lu’s research lies at the intersection of optimization, computation, and data science, with a focus on pushing the computational and mathematical frontiers of large-scale optimization. Much of his work, including first-order methods, scalable solvers, and data-driven optimization for resource allocation, is inspired by real-world challenges faced by leading technology companies and optimization software companies. His research has had real-world impact, generating substantial revenue and advancing the state of practice in large-scale optimization, and has been recognized by several research awards. Before joining MIT Sloan, he was an assistant professor at the University of Chicago Booth School of Business and a faculty researcher on Google Research’s large-scale optimization team. He obtained his PhD in mathematics and operations research at MIT in 2019.
Brett McGuire is the Class of 1943 Career Development Associate Professor of Chemistry at MIT. He completed his undergraduate studies at the University of Illinois at Urbana-Champaign before earning an MS from Emory University and a PhD from Caltech, both in physical chemistry. After Jansky and Hubble postdoctoral fellowships at the National Radio Astronomy Observatory, he joined the MIT faculty in 2020 and was promoted to associate professor in 2025. The McGuire Group integrates physical chemistry, molecular spectroscopy, and observational astrophysics to explore how the chemical building blocks of life evolve alongside the formation of stars and planets.
Anand Natarajan PhD ’18 is an associate professor in EECS and a principal investigator in CSAIL and the MIT-IBM Watson AI Lab. His research is mainly in quantum complexity theory, with a focus on the power of interactive proofs and arguments in a quantum world. Essentially, his work attempts to assess the complexity of computational problems in a quantum setting, determining both the limits of quantum computers’ capability and the trustworthiness of their output. Natarajan earned his PhD in physics from MIT, and an MS in computer science and BS in physics from Stanford University. Prior to joining MIT in 2020, he spent time as a postdoc at the Institute for Quantum Information and Matter at Caltech.
Mengjia Yan is an associate professor in the Department of EECS and a principal investigator in CSAIL. She is a security computer architect whose research advances secure processor design by bridging computer architecture, systems security, and formal methods. Her work identifies critical blind spots in hardware threat models and improves the resilience of real-world systems against information leakage and exploitation. Several of her discoveries have influenced commercial processor designs and contributed to changes in how hardware security risks are evaluated in practice. In parallel, Yan develops architecture-driven techniques to improve the scalability of formal verification and introduces new design principles toward formally verifiable processors. She also designed the Secure Hardware Design (SHD) course, now widely adopted by universities worldwide to teach computer architecture security from both offensive and defensive perspectives.
The following MIT alumni also received fellowships:
Ashok Ajoy PhD ’16
Chibueze Amanchukwu PhD ’17
Annie M. Bauer PhD ’17
Kimberly K. Boddy ’07
danah boyd SM ’02
Yuan Cao SM ’16, PhD ’20
Aloni Cohen SM ’15, PhD ’19
Fei Dai PhD ’19
Madison M. Douglas ’16
Philip Engel ’10
Benjamin Eysenbach ’17
Tatsunori B. Hashimoto SM ’14, PhD ’16
Xin Jin ’10
Isaac Kim ’07
Christina Patterson PhD ’19
Katelin Schutz ’14
Karthik Shekhar PhD ’15
Shriya S. Srinivasan PhD ’20
Jerzy O. Szablowski ’09
Anna Wuttig PhD ’18
Zoe Yan PhD ’20
Lingfu Zhang ’18
By now, ChatGPT, Claude, and other large language models have accumulated so much human knowledge that they’re far from simple answer-generators; they can also express abstract concepts, such as certain tones, personalities, biases, and moods. However, it’s not obvious exactly how these models come to represent such abstract concepts from the knowledge they contain.
Now a team from MIT and the University of California San Diego has developed a way to test whether a large language model (LLM) contains hidden biases, personalities, moods, or other abstract concepts. Their method can zero in on connections within a model that encode a concept of interest. What’s more, the method can then manipulate, or “steer,” these connections to strengthen or weaken the concept in any answer a model is prompted to give.
The team proved their method could quickly root out and steer more than 500 general concepts in some of the largest LLMs used today. For instance, the researchers could home in on a model’s representations for personalities such as “social influencer” and “conspiracy theorist,” and stances such as “fear of marriage” and “fan of Boston.” They could then tune these representations to enhance or minimize the concepts in any answers that a model generates.
In the case of the “conspiracy theorist” concept, the team successfully identified a representation of this concept within one of the largest vision language models available today. When they enhanced the representation, and then prompted the model to explain the origins of the famous “Blue Marble” image of Earth taken from Apollo 17, the model generated an answer with the tone and perspective of a conspiracy theorist.
The team acknowledges there are risks to extracting certain concepts, which they also illustrate (and caution against). Overall, however, they see the new approach as a way to illuminate hidden concepts and potential vulnerabilities in LLMs, which could then be turned up or down to improve a model’s safety or enhance its performance.
“What this really says about LLMs is that they have these concepts in them, but they’re not all actively exposed,” says Adityanarayanan “Adit” Radhakrishnan, assistant professor of mathematics at MIT. “With our method, there’s ways to extract these different concepts and activate them in ways that prompting cannot give you answers to.”
The team published their findings today in a study appearing in the journal Science. The study’s co-authors include Radhakrishnan, Daniel Beaglehole and Mikhail Belkin of UC San Diego, and Enric Boix-Adserà of the University of Pennsylvania.
A fish in a black box
As use of OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and other artificial intelligence assistants has exploded, scientists are racing to understand how models represent certain abstract concepts such as “hallucination” and “deception.” In the context of an LLM, a hallucination is a response that is false or contains misleading information, which the model has “hallucinated,” or constructed erroneously as fact.
To find out whether a concept such as “hallucination” is encoded in an LLM, scientists have often taken an approach of “unsupervised learning” — a type of machine learning in which algorithms broadly trawl through unlabeled representations to find patterns that might relate to a concept such as “hallucination.” But to Radhakrishnan, such an approach can be too broad and computationally expensive.
“It’s like going fishing with a big net, trying to catch one species of fish. You’re gonna get a lot of fish that you have to look through to find the right one,” he says. “Instead, we’re going in with bait for the right species of fish.”
He and his colleagues had previously developed the beginnings of a more targeted approach with a type of predictive modeling algorithm known as a recursive feature machine (RFM). An RFM is designed to directly identify features or patterns within data by leveraging a mathematical mechanism that neural networks — a broad category of AI models that includes LLMs — implicitly use to learn features.
Since the algorithm was an effective, efficient approach for capturing features in general, the team wondered whether they could use it to root out representations of concepts in LLMs, which are by far the most widely used type of neural network and perhaps the least well understood.
“We wanted to apply our feature learning algorithms to LLMs to, in a targeted way, discover representations of concepts in these large and complex models,” Radhakrishnan says.
Converging on a concept
The team’s new approach identifies any concept of interest within an LLM and “steers,” or guides, a model’s response based on this concept. The researchers looked for 512 concepts within five classes: fears (such as of marriage, insects, and even buttons); experts (social influencer, medievalist); moods (boastful, detachedly amused); a preference for locations (Boston, Kuala Lumpur); and personas (Ada Lovelace, Neil deGrasse Tyson).
The researchers then searched for representations of each concept in several of today’s large language and vision models. They did so by training RFMs to recognize numerical patterns in an LLM that could represent a particular concept of interest.
A standard large language model is, broadly, a neural network that takes a natural language prompt, such as “Why is the sky blue?” and divides it into individual words, each of which is encoded mathematically as a list, or vector, of numbers. The model passes these vectors through a series of computational layers; at each layer, matrices of numbers transform the representations to identify the words most likely to be used in responding to the original prompt. Eventually, the layers converge on a set of numbers that is decoded back into text, in the form of a natural language response.
The team’s approach trains RFMs to recognize numerical patterns in an LLM that could be associated with a specific concept. As an example, to see whether an LLM contains any representation of a “conspiracy theorist,” the researchers would first train the algorithm to identify patterns among LLM representations of 100 prompts that are clearly related to conspiracies, and 100 other prompts that are not. In this way, the algorithm would learn patterns associated with the conspiracy theorist concept. Then, the researchers can mathematically modulate the activity of the conspiracy theorist concept by perturbing LLM representations with these identified patterns.
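The study’s own pipeline fits RFMs to those internal representations. As a rough, runnable stand-in, the sketch below computes a difference-of-means “concept direction” from a handful of invented prompts and injects it into one layer of a small open model — gpt2 purely as a placeholder. The layer index, steering strength, and prompts are all assumptions for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

LAYER = 6  # which block's output to read and perturb (an assumption)

def last_token_states(prompts):
    # Hidden state of the final token at the chosen layer, per prompt.
    vecs = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids)
        vecs.append(out.hidden_states[LAYER + 1][0, -1])  # output of block LAYER
    return torch.stack(vecs)

# Illustrative stand-ins for the study's ~100 concept-related
# and ~100 unrelated prompts per concept.
pos = ["They are hiding the truth about the moon landing.",
       "Secret groups control everything behind the scenes.",
       "The official story is always a cover-up."]
neg = ["The recipe calls for two cups of flour.",
       "The train departs at nine in the morning.",
       "Photosynthesis converts sunlight into chemical energy."]

# Difference-of-means concept direction (the paper fits RFMs instead).
direction = last_token_states(pos).mean(0) - last_token_states(neg).mean(0)
direction = direction / direction.norm()

BETA = 8.0  # steering strength; positive enhances, negative suppresses

def steer(module, inputs, output):
    # Transformer blocks return a tuple; element 0 is the hidden state.
    return (output[0] + BETA * direction,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tok("Explain the origins of the Blue Marble photo of Earth.",
          return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=60)[0]))
handle.remove()  # restore the unsteered model
```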
The method can be applied to search for and manipulate any general concept in an LLM. Among many examples, the researchers identified representations of a “conspiracy theorist” and steered an LLM to answer in that tone and perspective. They also identified and enhanced the concept of “anti-refusal,” showing that a model that would normally be programmed to refuse certain prompts instead answered them, for instance giving instructions on how to rob a bank.
Radhakrishnan says the approach can be used to quickly search for and minimize vulnerabilities in LLMs. It can also be used to enhance certain traits, personalities, moods, or preferences, such as emphasizing the concept of “brevity” or “reasoning” in any response an LLM generates. The team has made the method’s underlying code publicly available.
“LLMs clearly have a lot of these abstract concepts stored within them, in some representation,” Radhakrishnan says. “There are ways where, if we understand these representations well enough, we can build highly specialized LLMs that are still safe to use but really effective at certain tasks.”
This work was supported, in part, by the National Science Foundation, the Simons Foundation, the TILOS institute, and the U.S. Office of Naval Research.
New study unveils the mechanism behind “boomerang” earthquakes
These ricocheting ruptures may be more common than previously thought.
An earthquake typically sets off ruptures that ripple out from its underground origins. But on rare occasions, seismologists have observed quakes that reverse course, further shaking up areas that they passed through only seconds before. These “boomerang” earthquakes often occur in regions with complex fault systems. But a new study by MIT researchers predicts that such ricochet ruptures can occur even along simple faults.
The study, which appears today in the journal AGU Advances, reports that boomerang earthquakes can happen along a simple fault under several conditions: if the quake propagates out in just one direction, over a large enough distance, and if friction along the rupturing fault builds and subsides rapidly during the quake. Under these conditions, even a simple straight fault, like some segments of the San Andreas fault in California, could experience a boomerang quake.
These newly identified conditions are relatively common, suggesting that many earthquakes that have occurred along simple faults may have experienced a boomerang effect, or what scientists term “back-propagating fronts.”
“Our work suggests that these boomerang quakes may have been undetected in a number of cases,” says study author Yudong Sun, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “We do think this behavior may be more common than we have seen so far in the seismic data.”
The new results could help scientists better assess future hazards in simple fault zones where boomerang quakes could potentially strike twice.
“In most cases, it would be impossible for a person to tell that an earthquake has propagated back just from the ground shaking, because ground motion is complex and affected by many factors,” says co-author Camilla Cattania, the Cecil and Ida Green Career Development Professor of Geophysics at MIT. “However, we know that shaking is amplified in the direction of rupture, and buildings would shake more in response. So there is a real effect in terms of the damage that results. That’s why understanding where these boomerang events could occur matters.”
Keep it simple
There have been a handful of instances where scientists have recorded seismic data suggesting that a quake reversed direction. In 2016, an earthquake in the middle of the Atlantic Ocean rippled eastward, and then seconds later ricocheted back west. Similar return rumblers may have occurred in 2011 during the magnitude 9 earthquake in Tohoku, Japan, and in 2023 during the destructive magnitude 7.8 quake in Turkey and Syria, among others.
These events took place in various fault regions, from complex zones of multiple intersecting fault lines to regions with just a single, straight fault. While seismologists have assumed that such complex quakes would be more likely to occur in multifault systems, the rare examples along simple faults got Sun and Cattania wondering: Could an earthquake reverse course along a simple fault? And if so, what could cause such a bounce-back in a seemingly simple system?
“When you see this boomerang-like behavior, it is tempting to explain this in terms of some complexity in the Earth,” Cattania says. “For instance, there may be many faults that interact, with earthquakes jumping between fault segments, or fault surfaces with prominent kinks and bends. In many cases, this could explain back-propagating behavior. But what we found was, you could have a very simple fault and still get this complex behavior.”

Faulty friction
In their new study, the team looked to simulate an earthquake along a simple fault system. In geology, a fault is a crack or fracture that runs through the Earth’s crust. An earthquake begins when the stress between rocks on either side of the fault is suddenly released, and one side slides against the other, setting off seismic waves that rupture rocks all along the fault. This seismic activity, which initiates deep in the crust, can sometimes reach and shake up the surface.
Cattania and Sun used a computer model to represent the fundamental physics at play during an earthquake along a simple fault. In their model, they simulated the Earth’s crust as a simple elastic material, in which they embedded a single straight fault. They then simulated how the fault would exhibit an earthquake under different scenarios. For instance, the team varied the length of the fault and the location of the quake’s initiation point below the surface, as well as whether the quake traveled in one versus two directions.
Over multiple simulations, they observed that only the unilateral quakes — those that traveled in one direction — exhibited a boomerang effect. Specifically, these quakes seemed to include a type that seismologists term “back-propagating” events, in which the rumbler splits at some point along the fault, partly continuing in the same direction and partly reversing back the way it came.
“When you look at a simulation, sometimes you don’t fully understand what causes a given behavior,” Cattania says. “So we developed mathematical models to understand it. And we went back and forth, to ultimately develop a simple theory that tells you should only see this back-propagation under these certain conditions.”
Those conditions, as the team’s new theory lays out, have to do with the friction along the fault. In standard earthquake physics, it’s generally understood that an earthquake is triggered when the stress built up between rocks on either side of a fault is suddenly released. Rocks slide against each other in response, decreasing a fault’s friction. The reduction in fault friction creates a positive feedback that facilitates further sliding, sustaining the earthquake.
However, in their simulations, the team observed that when a quake travels along a fault in one direction, it can back-propagate when friction along the fault goes down, then up, and then down again.
“When the quake propagates in one direction, it produces a ‘braking’ effect that reduces the sliding velocity, increases friction, and allows only a narrow section of the fault to slide at a time,” Cattania says. “The region behind the quake, which stops sliding, can then rupture again, because it has accumulated more stress to slide again.”
The team found that, in addition to traveling in one direction and along a fault with changing friction, a boomerang is likely to occur if a quake has traveled over a large enough distance.
“This implies that large earthquakes are not simply ‘scaled-up’ versions of small earthquakes, but instead they have their own unique rupture behavior,” Sun says.
The team suspects that back-propagating quakes may be more common than scientists have thought, and they may occur along simple, straight faults, which are typically older than more complex fault systems.
“You shouldn’t only expect this complex behavior on a young, complex fault system. You can also see it on mature, simple faults,” Cattania says. “The key open question now is how often rupture reversals, or ‘boomerang’ earthquakes, occur in nature. Many observational studies so far have used methods that can’t detect back-propagating fronts. Our work motivates actively looking for them, to further advance our understanding of earthquake physics and ultimately mitigate seismic risk.”
MIT community members elected to the National Academy of Engineering for 2026
Seven faculty members, along with 12 additional alumni, are honored for significant contributions to engineering research, practice, and education.
Seven MIT researchers are among the 130 new members and 28 international members recently elected to the National Academy of Engineering (NAE) for 2026. Twelve additional MIT alumni were also elected as new members.
One of the highest professional distinctions for engineers, membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education,” and to “the pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education.”
The seven MIT electees this year include:
Moungi Gabriel Bawendi, the Lester Wolfe Professor of Chemistry in the Department of Chemistry, was honored for the synthesis and characterization of semiconductor quantum dots and their applications in displays, photovoltaics, and biology.
Charles Harvey, a professor in the Department of Civil and Environmental Engineering, was honored for contributions to hydrogeology regarding groundwater arsenic contamination, transport, and consequences.
Piotr Indyk, the Thomas D. and Virginia W. Cabot Professor in the Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory, was honored for contributions to approximate nearest neighbor search, streaming, and sketching algorithms for massive data processing.
John Henry Lienhard, the Abdul Latif Jameel Professor of Water and Mechanical Engineering in the Department of Mechanical Engineering, was honored for advances and technological innovations in desalination.
Ram Sasisekharan, the Alfred H. Caspary Professor of Biological Physics and Physics in the Department of Biological Engineering, was honored for discovering the U.S. heparin contaminant in 2008 and creating clinical antibodies for Zika, dengue, SARS-CoV-2, and other diseases.
Frances Ross, the TDK Professor in the Department of Materials Science and Engineering, was honored for ultra-high vacuum and liquid-cell transmission electron microscopies and their worldwide adoptions for materials research and semiconductor technology development.
Zoltán Sandor Spakovszky SM ’99, PhD ’01, the T. Wilson (1953) Professor in Aeronautics in the Department of Aeronautics and Astronautics, was honored for contributions, through rigorous discoveries and advancements, in aeroengine aerodynamic and aerostructural stability and acoustics.
“Each of the MIT faculty and alumni elected to the National Academy of Engineering has made extraordinary contributions to their fields through research, education, and innovation,” says Paula T. Hammond, dean of the School of Engineering and Institute Professor in the Department of Chemical Engineering. “They represent the breadth of excellence we have here at MIT. This honor reflects the impact of their work, and I’m proud to celebrate their achievement and offer my warmest congratulations.”
Twelve additional alumni were elected to the National Academy of Engineering this year. They are: Anne Hammons Aunins PhD ’91; Lars James Blackmore PhD ’07; John-Paul Clarke ’91, SM ’92, SCD ’97; Michael Fardis SM ’77, SM ’78, PhD ’79; David Hays PhD ’98; Stephen Thomas Kent ’76, EE ’78, ENG ’78, PhD ’81; Randal D. Koster SM ’85, SCD ’88; Fred Mannering PhD ’83; Peyman Milanfar SM ’91, EE ’93, ENG ’93, PhD ’93; Amnon Shashua PhD ’93; Michael Paul Thien SCD ’88; and Terry A. Winograd PhD ’70.
AI algorithm enables tracking of vital white matter pathways
Opening a new window on the brainstem, a new tool reliably and finely resolves distinct nerve bundles in live diffusion MRI scans, revealing signs of injury or disease.
The signals that drive many of the brain and body’s most essential functions — consciousness, sleep, breathing, heart rate, and motion — course through bundles of “white matter” fibers in the brainstem, but imaging systems so far have been unable to finely resolve these crucial neural cables. That has left researchers and doctors with little capability to assess how they are affected by trauma or neurodegeneration.
In a new study, researchers from MIT, Harvard University, and Massachusetts General Hospital unveil AI-powered software capable of automatically segmenting eight distinct bundles in any diffusion MRI sequence.
In the open-access study, published Feb. 6 in the Proceedings of the National Academy of Sciences, the research team led by MIT graduate student Mark Olchanyi reports that their BrainStem Bundle Tool (BSBT), which they’ve made publicly available, revealed distinct patterns of structural changes in patients with Parkinson’s disease, multiple sclerosis, and traumatic brain injury, and shed light on Alzheimer’s disease as well. Moreover, the study shows, BSBT retrospectively enabled tracking of bundle healing in a coma patient that reflected the patient’s seven-month road to recovery.
“The brainstem is a region of the brain that is essentially not explored because it is tough to image,” says Olchanyi, a doctoral candidate in MIT’s Medical Engineering and Medical Physics Program. “People don't really understand its makeup from an imaging perspective. We need to understand what the organization of the white matter is in humans and how this organization breaks down in certain disorders.”
Adds Professor Emery N. Brown, Olchanyi’s thesis supervisor and co-senior author of the study: “The brainstem is one of the body’s most important control centers. Mark’s algorithms are a significant contribution to imaging research and to our ability to understand the regulation of fundamental physiology. By enhancing our capacity to image the brainstem, he offers us new access to vital physiological functions such as control of the respiratory and cardiovascular systems, temperature regulation, how we stay awake during the day, and how we sleep at night.”
Brown is the Edward Hood Taplin Professor of Computational Neuroscience and Medical Engineering in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences at MIT. He is also an anesthesiologist at MGH and a professor at Harvard Medical School.
Building the algorithm
Diffusion MRI helps trace the long branches, or “axons,” that neurons extend to communicate with each other. Axons are typically clad in a sheath of fat called myelin, and water diffuses along the axons within the myelin, which is also called the brain’s “white matter.” Diffusion MRI can highlight this very directed displacement of water. But segmenting the distinct bundles of axons in the brainstem has proved challenging, because they are small and masked by flows of brain fluids and the motions produced by breathing and heartbeats.
As part of his thesis work to better understand the neural mechanisms that underpin consciousness, Olchanyi wanted to develop an AI algorithm to overcome these obstacles. BSBT works by tracing fiber bundles that plunge into the brainstem from neighboring areas higher in the brain, such as the thalamus and the cerebellum, to produce a “probabilistic fiber map.” An artificial intelligence module called a “convolutional neural network” then combines the map with several channels of imaging information from within the brainstem to distinguish eight individual bundles.
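As a rough schematic of that final stage (not the published architecture), a small 3D convolutional network can map multi-channel voxel data — diffusion-derived images stacked with the probabilistic fiber map — to a per-voxel label for background or one of the eight bundles. All sizes below are invented for illustration.

```python
import torch
import torch.nn as nn

class BundleSegmenter(nn.Module):
    # Toy 3D CNN: multi-channel brainstem volume in, per-voxel class logits out.
    def __init__(self, in_channels=4, n_bundles=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_bundles + 1, 1),  # 8 bundles + background
        )

    def forward(self, x):
        return self.net(x)

# Channels: diffusion-derived maps plus the probabilistic fiber map.
x = torch.randn(1, 4, 32, 32, 32)
logits = BundleSegmenter()(x)
labels = logits.argmax(dim=1)     # predicted bundle index per voxel
print(labels.shape)               # torch.Size([1, 32, 32, 32])
```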
To train the neural network to segment the bundles, Olchanyi “showed” it 30 live diffusion MRI scans from volunteers in the Human Connectome Project (HCP). The scans were manually annotated to teach the neural network how to identify the bundles. Then he validated BSBT by testing its output against “ground truth” dissections of post-mortem human brains where the bundles were well delineated via microscopic inspection or very slow but ultra-high-resolution imaging. After training, BSBT became proficient in automatically identifying the eight distinct fiber bundles in new scans.
In an experiment to test its consistency and reliability, Olchanyi tasked BSBT with finding the bundles in 40 volunteers who underwent separate scans two months apart. In each case, the tool was able to find the same bundles in the same patients in each of their two scans. Olchanyi also tested BSBT with multiple datasets (not just the HCP), and even inspected how each component of the neural network contributed to BSBT’s analysis by hobbling them one by one.
“We put the neural network through the wringer,” Olchanyi says. “We wanted to make sure that it’s actually doing these plausible segmentations and it is leveraging each of its individual components in a way that improves the accuracy.”
Potential novel biomarkers
Once the algorithm was properly trained and validated, the research team moved on to testing whether the ability to segment distinct fiber bundles in diffusion MRI scans could enable tracking of how each bundle’s volume and structure varied with disease or injury, creating a novel kind of biomarker. Although the brainstem has been difficult to examine in detail, many studies show that neurodegenerative diseases affect the brainstem, often early on in their progression.
Olchanyi, Brown and their co-authors applied BSBT to scores of datasets of diffusion MRI scans from patients with Alzheimer’s, Parkinson’s, MS, and traumatic brain injury (TBI). Patients were compared to controls and sometimes to themselves over time. In the scans, the tool measured bundle volume and “fractional anisotropy” (FA), which tracks how much water is flowing along the myelinated axons versus how much is diffusing in other directions, a proxy for white matter structural integrity.
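Fractional anisotropy has a standard closed form: from the three eigenvalues of a voxel’s diffusion tensor, FA runs from 0 (water diffusing equally in all directions) toward 1 (diffusion confined along one axis). A minimal computation, with made-up eigenvalues:

```python
import numpy as np

def fractional_anisotropy(evals):
    # evals: the three eigenvalues of a voxel's diffusion tensor.
    l = np.asarray(evals, dtype=float)
    num = np.sqrt(((l - l.mean()) ** 2).sum())
    den = np.sqrt((l ** 2).sum())
    return np.sqrt(1.5) * num / den   # 0 = isotropic, -> 1 = highly directional

print(fractional_anisotropy([1.7, 0.3, 0.3]))  # coherent bundle: FA ~ 0.8
print(fractional_anisotropy([1.0, 1.0, 1.0]))  # unconstrained diffusion: FA = 0
```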
In each condition, the tool found consistent patterns of changes in the bundles. While only one bundle showed significant decline in Alzheimer’s, in Parkinson’s the tool revealed a reduction in FA in three of the eight bundles. It also revealed volume loss in another bundle in patients between a baseline scan and a two-year follow-up. Patients with MS showed their greatest FA reductions in four bundles and volume loss in three. Meanwhile, TBI patients didn’t show significant volume loss in any bundles, but FA reductions were apparent in the majority of bundles.
Testing in the study showed that BSBT proved more accurate than other classifier methods in discriminating between patients with health conditions versus controls.
BSBT, therefore, can be “a key adjunct that aids current diagnostic imaging methods by providing a fine-grained assessment of brainstem white matter structure and, in some cases, longitudinal information,” the authors wrote.
Finally, in the case of a 29-year-old man who suffered a severe TBI, Olchanyi applied BSBT to scans taken during the man’s seven-month coma. The tool showed that the man’s brainstem bundles had been displaced, but not cut, and showed that over his coma, the lesions on the nerve bundles decreased by a factor of three in volume. As they healed, the bundles moved back into place as well.
The authors wrote that BSBT “has substantial prognostic potential by identifying preserved brainstem bundles that can facilitate coma recovery.”
The study’s other senior authors are Juan Eugenio Iglesias and Brian Edlow. Other co-authors are David Schreier, Jian Li, Chiara Maffei, Annabel Sorby-Adams, Hannah Kinney, Brian Healy, Holly Freeman, Jared Shless, Christophe Destrieux, and Hendry Tregidgo.
Funding for the study came from the National Institutes of Health, U.S. Department of Defense, James S. McDonnell Foundation, Rappaport Foundation, American SIDS Institute, American Brain Foundation, American Academy of Neurology, Center for Integration of Medicine and Innovative Technology, Blueprint for Neuroscience Research, and Massachusetts Life Sciences Center.
Some early life forms may have breathed oxygen well before it filled the atmosphere
A new study suggests aerobic respiration began hundreds of millions of years earlier than previously thought.
Oxygen is a vital and constant presence on Earth today. But that hasn’t always been the case. It wasn’t until around 2.3 billion years ago that oxygen became a permanent fixture in the atmosphere, during a pivotal period known as the Great Oxidation Event (GOE), which set the evolutionary course for oxygen-breathing life as we know it today.
A new study by MIT researchers suggests some early forms of life may have evolved the ability to use oxygen hundreds of millions of years before the GOE. The findings may represent some of the earliest evidence of aerobic respiration on Earth.
In a study appearing today in the journal Palaeogeography, Palaeoclimatology, Palaeoecology, MIT geobiologists traced the evolutionary origins of a key enzyme that enables organisms to use oxygen. The enzyme is found in the vast majority of aerobic, oxygen-breathing life forms today. The team discovered that this enzyme evolved during the Mesoarchean — a geological period that predates the Great Oxidation Event by hundreds of millions of years.
The team’s results may help to explain a longstanding puzzle in Earth’s history: Why did it take so long for oxygen to build up in the atmosphere?
The very first producers of oxygen on the planet were cyanobacteria — microbes that evolved the ability to use sunlight and water to photosynthesize, releasing oxygen as a byproduct. Scientists have determined that cyanobacteria emerged around 2.9 billion years ago. The microbes, then, were presumably churning out oxygen for hundreds of millions of years before the Great Oxidation Event. So, where did all of cyanobacteria’s early oxygen go?
Scientists suspect that rocks may have drawn down a large portion of oxygen early on, through various geochemical reactions. The MIT team’s new study now suggests that biology may have also played a role.
The researchers found that some organisms may have evolved the enzyme to use oxygen hundreds of millions of years before the Great Oxidation Event. This enzyme may have enabled the organisms living near cyanobacteria to gobble up any small amounts of oxygen that the microbes produced, in turn delaying oxygen’s accumulation in the atmosphere for hundreds of millions of years.
“This does dramatically change the story of aerobic respiration,” says study co-author Fatima Husain, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Our study adds to this very recently emerging story that life may have used oxygen much earlier than previously thought. It shows us how incredibly innovative life is at all periods in Earth’s history.”
The study’s other co-authors include Gregory Fournier, associate professor of geobiology at MIT, along with Haitao Shang and Stilianos Louca of the University of Oregon.
First respirers
The new study adds to a long line of work at MIT aiming to piece together oxygen’s history on Earth. This body of research has helped to pin down the timing of the Great Oxidation Event as well as the first evidence of oxygen-producing cyanobacteria. The overall understanding that has emerged is that oxygen was first produced by cyanobacteria around 2.9 billion years ago, while the Great Oxidation Event — when oxygen finally accumulated enough to persist in the atmosphere — took place much later, around 2.33 billion years ago.
For Husain and her colleagues, this apparent delay between oxygen’s first production and its eventual persistence inspired a question.
“We know that the microorganisms that produce oxygen were around well before the Great Oxidation Event,” Husain says. “So it was natural to ask, was there any life around at that time that could have been capable of using that oxygen for aerobic respiration?”
If there were in fact some life forms that were using oxygen, even in small amounts, they might have played a role in keeping oxygen from building up in the atmosphere, at least for a while.
To investigate this possibility, the MIT team looked to heme-copper oxygen reductases, which are a set of enzymes that are essential for aerobic respiration. The enzymes act to reduce oxygen to water, and they are found in the majority of aerobic, oxygen-breathing organisms today, from bacteria to humans.
“We targeted the core of this enzyme for our analyses because that’s where the reaction with oxygen is actually taking place,” Husain explains.
Tree dates
The team aimed to trace the enzyme’s evolution backward in time to see when the enzyme first emerged to enable organisms to use oxygen. They first identified the enzyme’s genetic sequence and then used an automated search tool to look for this same sequence in databases containing the genomes of millions of different species of organisms.
“The hardest part of this work was that we had too much data,” Fournier says. “This enzyme is just everywhere and is present in most modern living organisms. So we had to sample and filter the data down to a dataset that was representative of the diversity of modern life and also small enough to do computation with, which is not trivial.”
The team ultimately isolated the enzyme’s sequence from several thousand modern species and mapped these sequences onto an evolutionary tree of life, based on what scientists know about when each respective species likely evolved and branched off. They then looked through this tree for specific species that might offer related information about their origins.
If, for instance, there is a fossil record for a particular organism on the tree, that record would include an estimate of when that organism appeared on Earth. The team would use that fossil’s age to “pin” a date to that organism on the tree. In a similar way, they could place pins across the tree to effectively tighten their estimates for when in time the enzyme evolved from one species to the next.
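As a cartoon of that pinning logic (real molecular-clock dating uses full probabilistic models, not this arithmetic), an uncalibrated node sitting between an older and a younger pinned node can have its age interpolated in proportion to how far along the path it sits:

```python
def interpolate_age(old_age, young_age, frac_along):
    # frac_along: fraction of the path already traversed from the older
    # pin toward the younger pin (0..1), e.g. by branch length.
    return old_age - frac_along * (old_age - young_age)

# Pins at 3.2 and 2.8 billion years; a node 40% of the way along the
# path between them gets an estimated age of ~3.04 billion years.
print(interpolate_age(3.2, 2.8, 0.4))
```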
In the end, the researchers were able to trace the enzyme as far back as the Mesoarchean — a geological era that lasted from 3.2 to 2.8 billion years ago. It’s around this time that the team suspects the enzyme — and organisms’ ability to use oxygen — first emerged. This period predates the Great Oxidation Event by several hundred million years.
The new findings suggest that, shortly after cyanobacteria evolved the ability to produce oxygen, other living things evolved the enzyme to use that oxygen. Any such organism that happened to live near cyanobacteria would have been able to quickly take up the oxygen that the bacteria churned out. These early aerobic organisms may have then played some role in preventing oxygen from escaping to the atmosphere, delaying its accumulation for hundreds of millions of years.
“Considered all together, MIT research has filled in the gaps in our knowledge of how Earth’s oxygenation proceeded,” Husain says. “The puzzle pieces are fitting together and really underscore how life was able to diversify and live in this new, oxygenated world.”
This research was supported, in part, by the Research Corporation for Science Advancement Scialog program.
A satellite language network in the brain
Researchers find a component of the brain’s dedicated language network in the cerebellum, a region better known for coordinating movement.
The ability to use language to communicate is one of the things that makes us human. At MIT’s McGovern Institute for Brain Research, scientists led by Evelina Fedorenko have defined an entire network of areas within the brain dedicated to this ability, which work together when we speak, listen, read, write, or sign.
Much of the language network lies within the brain’s neocortex, where many of our most sophisticated cognitive functions are carried out. Now, Fedorenko’s lab, which is part of MIT's Department of Brain and Cognitive Sciences, has identified language-processing regions within the cerebellum, extending the language network to a part of the brain better known for helping to coordinate the body’s movements. Their findings are reported Jan. 21 in the journal Neuron.
“It’s like there’s this region in the cerebellum that we’ve been forgetting about for a long time,” says Colton Casto, a graduate student at Harvard and MIT who works in Fedorenko’s lab. “If you’re a language researcher, you should be paying attention to the cerebellum.”
Imaging the language network
There have been hints that the cerebellum makes important contributions to language. Some functional imaging studies detected activity in this area during language use, and people who suffer damage to the cerebellum sometimes experience language impairments. But no one had been able to pin down exactly which parts of the cerebellum were involved, or tease out their roles in language processing.
To get some answers, Fedorenko’s lab took a systematic approach, using methods they have used to map the language network in the neocortex. For 15 years, the lab has captured functional brain imaging data as volunteers carried out various tasks inside an MRI scanner. By monitoring brain activity as people engaged in different kinds of language tasks, like reading sentences or listening to spoken words, as well as non-linguistic tasks, like listening to noise or memorizing spatial patterns, the team has been able to identify parts of the brain that are exclusively dedicated to language processing.
Their work shows that everyone’s language network uses the same neocortical regions. The precise anatomical location of these regions varies, however, so to study the language network in any individual, Fedorenko and her team must map that person’s network inside an MRI scanner using their language-localizer tasks.
Satellite language network
While the Fedorenko lab has largely focused on how the neocortex contributes to language processing, their brain scans also capture activity in the cerebellum. So Casto revisited those scans, analyzing cerebellar activity from more than 800 people to look for regions involved in language processing. Fedorenko points out that teasing out the individual anatomy of the language network turned out to be particularly vital in the cerebellum, where neurons are densely packed and areas with different functional specializations sit very close to one another. Ultimately, Casto was able to identify four cerebellar areas that consistently got involved during language use.
Three of these regions were clearly involved in language use, but also reliably became engaged during certain kinds of non-linguistic tasks. Casto says this was a surprise, because all the core language areas in the neocortex are dedicated exclusively to language processing. The researchers speculate that the cerebellum may be integrating information from different parts of the cortex — a function that could be important for many cognitive tasks.
“We’ve found that language is distinct from many, many other things — but at some point, complex cognition requires everything to work together,” Fedorenko says. “How do these different kinds of information get connected? Maybe parts of the cerebellum serve that function.”
The researchers also found a spot in the right posterior cerebellum with activity patterns that more closely echoed those of the language network in the neocortex. This region stayed silent during non-linguistic tasks, but became active during language use. For all of the linguistic activities that Casto analyzed, this region exhibited patterns of activity that were very similar to what the lab has seen in neocortical components of the language network. “Its contribution to language seems pretty similar,” Casto says. The team describes this area as a “cerebellar satellite” of the language network.
Still, the researchers think it’s unlikely that neurons in the cerebellum, which are organized very differently than those in the neocortex, replicate the precise function of other parts of the language network. Fedorenko’s team plans to explore the function of this satellite region more deeply, investigating whether it may participate in different kinds of tasks.
The researchers are also exploring the possibility that the cerebellum is particularly important for language learning — playing an outsized role during development, or when people learn languages later in life.
Fedorenko says the discovery may also have implications for treating language impairments caused when an injury or disease damages the brain’s neocortical language network. “This area may provide a very interesting potential target to help recovery from aphasia,” Fedorenko says.
Currently, researchers are exploring the possibility that non-invasively stimulating language-associated parts of the brain might promote language recovery. “This right cerebellar region may be just the right thing to potentially stimulate to up-regulate some of that function that’s lost,” Fedorenko says.
Terahertz microscope reveals the motion of superconducting electrons
For the first time, the new scope allowed physicists to observe terahertz “jiggles” in a superconducting fluid.
You can tell a lot about a material based on the type of light you shine at it: Optical light illuminates a material’s surface, while X-rays reveal its internal structures and infrared captures a material’s radiating heat.
Now, MIT physicists have used terahertz light to reveal inherent, quantum vibrations in a superconducting material, which have not been observable until now.
Terahertz light is a form of energy that lies between microwaves and infrared radiation on the electromagnetic spectrum. It oscillates over a trillion times per second — just the right pace to match how atoms and electrons naturally vibrate inside materials. Ideally, this makes terahertz light the perfect tool to probe these motions.
But while the frequency is right, the wavelength — the distance over which the wave repeats in space — is not. Terahertz waves have wavelengths hundreds of microns long. Because the smallest spot that any kind of light can be focused into is limited by its wavelength, terahertz beams cannot be tightly confined. As a result, a focused terahertz beam is physically too large to interact effectively with microscopic samples, simply washing over these tiny structures without revealing fine detail.
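The arithmetic makes the mismatch concrete. Taking 1 THz as a representative frequency (an assumption; the band spans roughly 0.1 to 10 THz):

```python
c = 3.0e8              # speed of light, m/s
f = 1.0e12             # 1 THz
wavelength = c / f     # 3.0e-4 m = 300 microns
spot = wavelength / 2  # rough diffraction-limited focus size
print(wavelength * 1e6, "micron wavelength;", spot * 1e6, "micron spot")
# A ~150-micron focused spot dwarfs a ~10-micron sample, so most of the
# beam misses the material entirely.
```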
In a paper appearing today in the journal Nature, the scientists report that they have developed a new terahertz microscope that compresses terahertz light down to microscopic dimensions. This pinpoint of terahertz light can resolve quantum details in materials that were previously inaccessible.
The team used the new microscope to send terahertz light into a sample of bismuth strontium calcium copper oxide, or BSCCO (pronounced “BIS-co”) — a material that superconducts at relatively high temperatures. With the terahertz scope, the team observed a frictionless “superfluid” of superconducting electrons that were collectively jiggling back and forth at terahertz frequencies within the BSCCO material.
“This new microscope now allows us to see a new mode of superconducting electrons that nobody has ever seen before,” says Nuh Gedik, the Donner Professor of Physics at MIT.
By using terahertz light to probe BSCCO and other superconductors, scientists can gain a better understanding of properties that could lead to long-coveted room-temperature superconductors. The new microscope can also help to identify materials that emit and receive terahertz radiation. Such materials could be the foundation of future wireless, terahertz-based communications that could potentially transmit more data at faster rates than today’s microwave-based communications.
“There’s a huge push to take Wi-Fi or telecommunications to the next level, to terahertz frequencies,” says Alexander von Hoegen, a postdoc in MIT’s Materials Research Laboratory and lead author of the study. “If you have a terahertz microscope, you could study how terahertz light interacts with microscopically small devices that could serve as future antennas or receivers.”
In addition to Gedik and von Hoegen, the study’s MIT co-authors include Tommy Tai, Clifford Allington, Matthew Yeung, Jacob Pettine, Alexander Kossak, Byunghun Lee, and Geoffrey Beach, along with collaborators at Harvard University, the Max Planck Institute for the Structure and Dynamics of Matter, the Max Planck Institute for the Physics of Complex Systems and the Brookhaven National Lab.
Hitting a limit
Terahertz light is a promising yet largely untapped imaging tool. It occupies a unique spectral “sweet spot”: Like microwaves, radio waves, and visible light, terahertz radiation is nonionizing and therefore does not carry enough energy to cause harmful radiation effects, making it safe for use in humans and biological tissues. At the same time, much like X-rays, terahertz waves can penetrate a wide range of materials, including fabric, wood, cardboard, plastic, ceramics, and even thin brick walls.
Owing to these distinctive properties, terahertz light is being actively explored for applications in security screening, medical imaging, and wireless communications. In contrast, far less effort has been devoted to applying terahertz radiation to microscopy and the illumination of microscopic phenomena. The primary reason is a fundamental limitation shared by all forms of light: the diffraction limit, which restricts spatial resolution to roughly the wavelength of the radiation used.
With wavelengths on the order of hundreds of microns, terahertz radiation is far larger than atoms, molecules, and many other microscopic structures. As a result, its ability to directly resolve microscale features is fundamentally constrained.
“Our main motivation is this problem that, you might have a 10-micron sample, but your terahertz light has a 100-micron wavelength, so what you would mostly be measuring is air, or the vacuum around your sample,” von Hoegen explains. “You would be missing all these quantum phases that have characteristic fingerprints in the terahertz regime.”
Zooming in
The team found a way around the terahertz diffraction limit by using spintronic emitters — a recent technology that produces sharp pulses of terahertz light. Spintronic emitters are made from multiple ultrathin metallic layers. When a laser illuminates the multilayered structure, the light triggers a cascade of effects in the electrons within each layer, such that the structure ultimately emits a pulse of energy at terahertz frequencies.
By holding a sample close to the emitter, the team trapped the terahertz light before it had a chance to spread, essentially squeezing it into a space much smaller than its wavelength. In this regime, the light can bypass the diffraction limit to resolve features that were previously too small to see.
The MIT team adapted this technology to observe microscopic, quantum-scale phenomena. For their new study, the team developed a terahertz microscope using spintronic emitters interfaced with a Bragg mirror. This multilayered structure of reflective films successively filters out certain, undesired wavelengths of light while letting through others, protecting the sample from the “harmful” laser which triggers the terahertz emission.
As a demonstration, the team used the new microscope to image a small, atomically thin sample of BSCCO. They placed the sample very close to the terahertz source and imaged it at temperatures close to absolute zero — cold enough for the material to become a superconductor. To create the image, they scanned the laser beam, sending terahertz light through the sample and looking for the specific signatures left by the superconducting electrons.
“We see the terahertz field gets dramatically distorted, with little oscillations following the main pulse,” von Hoegen says. “That tells us that something in the sample is emitting terahertz light, after it got kicked by our initial terahertz pulse.”
With further analysis, the team concluded that the terahertz microscope was observing the natural, collective terahertz oscillations of superconducting electrons within the material.
“It’s this superconducting gel that we’re sort of seeing jiggle,” von Hoegen says.
This jiggling superfluid was expected, but never directly visualized until now. The team is now applying the microscope to other two-dimensional materials, where they hope to capture more terahertz phenomena.
“There are a lot of the fundamental excitations, like lattice vibrations and magnetic processes, and all these collective modes that happen at terahertz frequencies,” von Hoegen says. “We can now resonantly zoom in on these interesting physics with our terahertz microscope.”
This research was supported, in part, by the MIT Research Laboratory of Electronics, the U.S. Department of Energy, and the Gordon and Betty Moore Foundation. Fabrication was carried out with the use of MIT.nano.
MIT engineers design structures that compute with heat
By leveraging excess heat instead of electricity, microscopic silicon structures could enable more energy-efficient thermal sensing and signal processing.
MIT researchers have designed silicon structures that can perform calculations in an electronic device using excess heat instead of electricity. These tiny structures could someday enable more energy-efficient computation.
In this computing method, input data are encoded as a set of temperatures using the waste heat already present in a device. The flow and distribution of heat through a specially designed material forms the basis of the calculation. Then the output is read out as the power collected at the other end, which is held at a fixed temperature.
The researchers used these structures to perform matrix-vector multiplication with more than 99 percent accuracy. Matrix multiplication is the fundamental mathematical operation that machine-learning models like LLMs use to process information and make predictions.
While the researchers still have to overcome many challenges to scale up this computing method for modern deep-learning models, the technique could be applied to detect heat sources and measure temperature changes in electronics without consuming extra energy. This would also eliminate the need for multiple temperature sensors that take up space on a chip.
“Most of the time, when you are performing computations in an electronic device, heat is the waste product. You often want to get rid of as much heat as you can. But here, we’ve taken the opposite approach by using heat as a form of information itself and showing that computing with heat is possible,” says Caio Silva, an undergraduate student in the Department of Physics and lead author of a paper on the new computing paradigm.
Silva is joined on the paper by senior author Giuseppe Romano, a research scientist at MIT’s Institute for Soldier Nanotechnologies. The research appears today in Physical Review Applied.
Turning up the heat
This work was enabled by a software system the researchers previously developed that allows them to automatically design a material that can conduct heat in a specific manner.
Using a technique called inverse design, this system flips the traditional engineering approach on its head. The researchers define the functionality they want first, then the system uses powerful algorithms to iteratively design the best geometry for the task.
They used this system to design complex silicon structures, each roughly the same size as a dust particle, that can perform computations using heat conduction. This is a form of analog computing, in which data are encoded and signals are processed using continuous values, rather than digital bits that are either 0s or 1s.
The researchers feed their software system the specifications of a matrix of numbers that represents a particular calculation. Using a grid, the system designs a set of rectangular silicon structures filled with tiny pores. The system continually adjusts each pixel in the grid until it arrives at the desired mathematical function.
Heat diffuses through the silicon in a way that performs the matrix multiplication, with the geometry of the structure encoding the coefficients.
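One way to see why geometry can encode a matrix: in steady state, heat conduction is linear, so the power arriving at each output terminal is a weighted sum of the input temperatures, with the weights set by the structure’s effective thermal conductances. A conceptual sketch with invented numbers, not the paper’s solver:

```python
import numpy as np

# Effective thermal conductances G[i, j] (set by the structure's geometry)
# couple input terminal i to output terminal j. With outputs held at a
# reference temperature T_ref, the power flowing into output j is
#   P_j = sum_i G[i, j] * (T_i - T_ref),
# i.e. a matrix-vector product evaluated by heat flow.
G = np.array([[0.8, 0.1],
              [0.2, 0.5],
              [0.0, 0.9]])        # 3 inputs, 2 outputs

def thermal_matvec(x, T_ref=300.0, scale=1.0):
    T_in = T_ref + scale * x      # encode the input vector as temperatures
    P_out = G.T @ (T_in - T_ref)  # powers collected at the outputs
    return P_out / scale          # decode powers back into numbers

x = np.array([1.0, 2.0, 3.0])
print(thermal_matvec(x))          # equals G.T @ x = [1.2, 3.8]
```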

“These structures are far too complicated for us to come up with just through our own intuition. We need to teach a computer to design them for us. That is what makes inverse design a very powerful technique,” Romano says.
But the researchers ran into a problem. The laws of heat conduction dictate that heat flows from hot to cold regions, so these structures can only encode positive coefficients.
They overcame this problem by splitting the target matrix into its positive and negative components and representing each with a separately optimized silicon structure that encodes only nonnegative entries. Subtracting the two outputs at a later stage allows them to compute with matrices that contain negative values.
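In linear-algebra terms, this is the standard decomposition A = A⁺ − A⁻ into nonnegative parts, with each part realized by its own heat-conducting structure. For example:

```python
import numpy as np

# Split a signed matrix into nonnegative parts: A = A_pos - A_neg.
A = np.array([[ 1.0, -2.0],
              [-0.5,  3.0]])
A_pos = np.maximum(A, 0.0)      # realized by one silicon structure
A_neg = np.maximum(-A, 0.0)     # realized by a second structure

x = np.array([2.0, 1.0])
y = A_pos @ x - A_neg @ x       # subtract the two heat-computed outputs
print(np.allclose(y, A @ x))    # True
```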
They can also tune the thickness of the structures, which allows them to realize a greater variety of matrices. Thicker structures have greater heat conduction.
“Finding the right topology for a given matrix is challenging. We beat this problem by developing an optimization algorithm that ensures the topology being developed is as close as possible to the desired matrix without having any weird parts,” Silva explains.
Microelectronic applications
The researchers used simulations to test the structures on simple matrices with two or three columns. While simple, these small matrices are relevant for important applications, such as fusion sensing and diagnostics in microelectronics.
The structures performed computations with more than 99 percent accuracy in many cases.
However, there is still a long way to go before this technique could be used for large-scale applications such as deep learning, since millions of structures would need to be tiled together. As the matrices become more complicated, the structures become less accurate, especially when there is a large distance between the input and output terminals. In addition, the devices have limited bandwidth, which would need to be greatly expanded if they were to be used for deep learning.
But because the structures rely on excess heat, they could be directly applied for tasks like thermal management, as well as heat source or temperature gradient detection in microelectronics.
“This information is critical. Temperature gradients can cause thermal expansion and damage a circuit or even cause an entire device to fail. If we have a localized heat source where we don’t want a heat source, it means we have a problem. We could directly detect such heat sources with these structures, and we can just plug them in without needing any digital components,” Romano says.
Building on this proof-of-concept, the researchers want to design structures that can perform sequential operations, where the output of one structure becomes an input for the next. This is how machine-learning models perform computations. They also plan to develop programmable structures, enabling them to encode different matrices without starting from scratch with a new structure each time.