Science news from MIT (Massachusetts Institute of Technology)

Here you will find recent daily science news from MIT.

MIT News - School of Science
Laurent Demanet appointed co-director of MIT Center for Computational Science and Engineering

Applied mathematics professor will join fellow co-director Nicolas Hadjiconstantinou in leading the cross-cutting center.


Laurent Demanet, MIT professor of applied mathematics, has been appointed co-director of the MIT Center for Computational Science and Engineering (CCSE), effective Sept. 1.

Demanet, who holds a joint appointment in the departments of Mathematics and Earth, Atmospheric and Planetary Sciences — where he previously served as director of the Earth Resources Laboratory — succeeds Youssef Marzouk, who is now serving as the associate dean of the MIT Schwarzman College of Computing.

Joining co-director Nicolas Hadjiconstantinou, the Quentin Berg (1937) Professor of Mechanical Engineering, Demanet will help lead CCSE, supporting students, faculty, and researchers while fostering a vibrant community of innovation and discovery in computational science and engineering (CSE).

“Laurent’s ability to translate concepts of computational science and engineering into understandable, real-world applications is an invaluable asset to CCSE. His interdisciplinary experience is a benefit to the visibility and impact of CSE research and education. I look forward to working with him,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

“I’m pleased to welcome Laurent into his new role as co-director of CCSE. His work greatly supports the cross-cutting methodology at the heart of the computational science and engineering community. I’m excited for CCSE to have a co-director from the School of Science, and eager to see the center continue to broaden its connections across MIT,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing, department head of Electrical Engineering and Computer Science, and MathWorks Professor.

Established in 2008, CCSE was incorporated into the MIT Schwarzman College of Computing as one of its core academic units in January 2020. An interdisciplinary research and education center dedicated to pioneering applications of computation, CCSE houses faculty, researchers, and students from a range of MIT schools, such as the schools of Engineering, Science, Architecture and Planning, and the MIT Sloan School of Management, as well as other units of the college.

“I look forward to working with Nicolas and the college leadership on raising the profile of CCSE on campus and globally. We will be pursuing a set of initiatives that span from enhancing the visibility of our research and strengthening our CSE PhD program, to expanding professional education offerings and deepening engagement with our alumni and with industry,” says Demanet.

Demanet’s research lies at the intersection of applied mathematics and scientific computing, which he uses to visualize the structures beneath Earth’s surface. He also has strong interests in machine learning, inverse problems, and wave propagation. As principal investigator of the Imaging and Computing Group, Demanet and his students aim to answer fundamental questions in computational seismic imaging to increase the quality and accuracy of mapping and the projection of changes in Earth’s geological structures. His work has applications in environmental monitoring, water resources and geothermal energy, and the understanding of seismic hazards, among other areas.

He joined the MIT faculty in 2009. He received an Alfred P. Sloan Research Fellowship and the U.S. Air Force Young Investigator Award in 2011, and a CAREER award from the National Science Foundation in 2012. He also held the Class of 1954 Career Development Professorship from 2013 to 2016. Prior to coming to MIT, Demanet held the Szegö Assistant Professorship at Stanford University. He completed his undergraduate studies in mathematical engineering and theoretical physics at Universite de Louvain in Belgium, and earned a PhD in applied and computational mathematics at Caltech, where he was awarded the William P. Carey Prize for best dissertation in the mathematical sciences.


Study sheds light on musicians’ enhanced attention

Brain imaging suggests people with musical training may be better than others at filtering out distracting sounds.


In a world full of competing sounds, we often have to filter out a lot of noise to hear what’s most important. This critical skill may come more easily for people with musical training, according to scientists at MIT’s McGovern Institute for Brain Research, who used brain imaging to follow what happens when people try to focus their attention on certain sounds.

When Cassia Low Manting, a recent MIT postdoc working in the labs of MIT Professor and McGovern Institute PI John Gabrieli and former McGovern Institute PI Dimitrios Pantazis, asked people to focus on a particular melody while another melody played at the same time, individuals with musical backgrounds were, unsurprisingly, better able to follow the target tune. An analysis of study participants’ brain activity suggests this advantage arises because musical training sharpens neural mechanisms that amplify the sounds they want to listen to while turning down distractions. 

“People can hear, understand, and prioritize multiple sounds around them that flow on a moment-to-moment basis,” explains Gabrieli, who is the Grover Hermann Professor of Health Sciences and Technology at MIT. “This study reveals the specific brain mechanisms that successfully process simultaneous sounds on a moment-to-moment basis and promote attention to the most important sounds. It also shows how musical training alters that processing in the mind and brain, offering insight into how experience shapes the way we listen and pay attention.”

The research team, which also included senior author Daniel Lundqvist at the Karolinska Institute in Sweden, reported their open-access findings Sept. 17 in the journal Science Advances. Manting, who is now at the Karolinska Institute, notes that the research is part of an ongoing collaboration between the two institutions.

Overcoming challenges

Participants in the study had vastly different backgrounds when it came to music. Some were professional musicians with deep training and experience, while others struggled to differentiate between the two tunes they were played, despite each one’s distinct pitch. This disparity allowed the researchers to explore how the brain’s capacity for attention might change with experience. “Musicians are very fun to study because their brains have been morphed in ways based on their training,” Manting says. “It’s a nice model to study these training effects.”

Still, the researchers had significant challenges to overcome. It has been hard to study how the brain manages auditory attention, because when researchers use neuroimaging to monitor brain activity, they see the brain’s response to all sounds: those that the listener cares most about, as well as those the listener is trying to ignore. It is usually difficult to figure out which brain signals were triggered by which sounds.

Manting and her colleagues overcame this challenge with a method called frequency tagging. Rather than playing the melodies in their experiments at a constant volume, the researchers made the volume of each melody oscillate, rising and falling at a particular frequency. Each melody had its own frequency, creating detectable patterns in the brain signals that responded to it. “When you play these two sounds simultaneously to the subject and you record the brain signal, you can say, this 39-Hertz activity corresponds to the lower-pitch sound and the 43-Hertz activity corresponds specifically to the higher-pitch sound,” Manting explains. “It is very clean and very clear.”
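The idea behind frequency tagging can be sketched in a few lines of Python. This is an illustrative simulation, not the study’s analysis code: the carrier pitches and the toy signal model are invented, and only the 39 Hz and 43 Hz modulation (“tag”) frequencies come from the article.

```python
import numpy as np

fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)   # 10 seconds of signal

# Two melodies, stood in for here by steady tones (220 Hz and 330 Hz are
# arbitrary carrier pitches). Each is amplitude-modulated at its own tag
# frequency: 39 Hz for the lower-pitch stream, 43 Hz for the higher one.
low  = (1 + np.sin(2 * np.pi * 39 * t)) * np.sin(2 * np.pi * 220 * t)
high = (1 + np.sin(2 * np.pi * 43 * t)) * np.sin(2 * np.pi * 330 * t)

# A crude stand-in for a recorded response to the superimposed streams:
# square the mixture (a simple envelope-following nonlinearity) and look
# at its spectrum. Distinct peaks appear at the 39 Hz and 43 Hz tags,
# so each stream's contribution can be read out separately.
mix = low + high
spectrum = np.abs(np.fft.rfft(mix ** 2))
freqs = np.fft.rfftfreq(len(mix), 1 / fs)
```

Because each melody carries its own modulation frequency, energy at 39 Hz versus 43 Hz in the measured response can be attributed unambiguously to one stream or the other.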

When they paired frequency tagging with magnetoencephalography, a noninvasive method of monitoring brain activity, the team was able to track how their study participants’ brains responded to each of two melodies during their experiments. While the two tunes played, subjects were instructed to follow either the higher-pitched or the lower-pitched melody. When the music stopped, they were asked about the final notes of the target tune: did they rise or did they fall? The researchers could make this task harder by making the two tunes closer together in pitch, as well as by altering the timing of the notes.

Manting used a survey that asked about musical experience to score each participant’s musicality, and this measure had an obvious effect on task performance: The more musical a person was, the more successful they were at following the tune they had been asked to track.

To look for differences in brain activity that might explain this, the research team developed a new machine-learning approach to analyze their data. They used it to tease apart what was happening in the brain as participants focused on the target tune — even, in some cases, when the notes of the distracting tune played at the exact same time.

Top-down versus bottom-up attention

What they found was a clear separation of brain activity associated with two kinds of attention, known as top-down and bottom-up attention. Manting explains that top-down attention is goal-oriented, involving a conscious focus — the kind of attention listeners called on as they followed the target tune. Bottom-up attention, on the other hand, is triggered by the nature of the sound itself. A fire alarm would be expected to trigger this kind of attention, both with its volume and its suddenness. The distracting tune in the team’s experiments triggered activity associated with bottom-up attention — but more so in some people than in others.

“The more musical someone is, the better they are at focusing their top-down selective attention, and the less the effect of bottom-up attention is,” Manting explains.

Manting expects that musicians use their heightened capacity for top-down attention in other situations, as well. For example, they might be better than others at following a conversation in a room filled with background chatter. “I would put my bet on it that there is a high chance that they will be great at zooming into sounds,” she says.

She wonders, however, if one kind of distraction might actually be harder for a musician to filter out: the sound of their own instrument. Manting herself plays both the piano and the Chinese harp, and she says hearing those instruments is “like someone calling my name.” It’s one of many questions about how musical training affects cognition that she plans to explore in her future work.


Matthew Shoulders named head of the Department of Chemistry

A leading researcher in protein folding biochemistry and next-generation protein engineering techniques will advance chemistry research and education.


Matthew D. Shoulders, the Class of 1942 Professor of Chemistry, a MacVicar Faculty Fellow, and an associate member of the Broad Institute of MIT and Harvard, has been named head of the MIT Department of Chemistry, effective Jan. 16, 2026. 

“Matt has made pioneering contributions to the chemistry research community through his research on mechanisms of proteostasis and his development of next-generation techniques to address challenges in biomedicine and agriculture,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “He is also a dedicated educator, beloved by undergraduates and graduates alike. I know the department will be in good hands as we double down on our commitment to world-leading research and education in the face of financial headwinds.”

Shoulders succeeds Troy Van Voorhis, the Robert T. Haslam and Bradley Dewey Professor of Chemistry, who has been at the helm since October 2019.

“I am tremendously grateful to Troy for his leadership the past six years, building a fantastic community here in our department. We face challenges, but also many exciting opportunities, as a department in the years to come,” says Shoulders. “One thing is certain: Chemistry innovations are critical to solving pressing global challenges. Through the research that we do and the scientists we train, our department has a huge role to play in shaping the future.”

Shoulders studies how cells fold proteins, and he develops ​and applies novel protein engineering techniques to challenges in biotechnology. His work across chemistry and biochemistry fields — including proteostasis, extracellular matrix biology, virology, evolution, and synthetic biology — is not just yielding important insights into topics like how cells build healthy tissues and how proteins evolve, but also influencing approaches to disease therapy and biotechnology development.

“Matt is an outstanding researcher whose work touches on fundamental questions about how the cell machinery directs the synthesis and folding of proteins. His discoveries about how that machinery breaks down as a result of mutations or in response to stress have a fundamental impact on how we think about and treat human diseases,” says Van Voorhis.

In one part of his current research program, Shoulders is studying how protein folding systems in cells — known as chaperones — shape the evolution of their clients. Among other discoveries, his lab has shown that viral pathogens hijack human chaperones to enable their rapid evolution and escape from host immunity. In related recent work, they have discovered that these same chaperones can promote access to malignancy-driving mutations in tumors. Beyond fundamental insights into evolutionary biology, these findings hold potential to open new therapeutic strategies to target cancer and viral infections.

“Matt’s ability to see both the details and the big picture makes him an outstanding researcher and a natural leader for the department,” says Timothy Swager, the John D. MacArthur Professor of Chemistry. “MIT Chemistry can only benefit from his dedication to understanding and addressing the parts and the whole.” 

Shoulders also leads a food security project through the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Shoulders, along with MIT Research Scientist Robbie Wilson, assembled an interdisciplinary team based at MIT to enhance climate resilience in agriculture by improving one of the most inefficient aspects of photosynthesis, the carbon dioxide-fixing plant enzyme RuBisCO. J-WAFS funded this high-risk, high-reward MIT Grand Challenge project in 2023, and it has received further support from federal research agencies and the Grantham Foundation for the Protection of the Environment. 

“Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists, creating a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team is making a concerted effort using state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”

In addition to his research contributions, Shoulders has taught multiple classes for Course V, including 5.54 (Advances in Chemical Biology) and 5.111 (Principles of Chemical Science), along with a number of other key chemistry classes. His contributions to a 5.111 “bootcamp” through the MITx platform served to address gaps in the classroom curriculum by providing online tools to help undergraduate students better grasp the material in the chemistry General Institute Requirement (GIR). His development of Guided Learning Demonstrations to support first-year chemistry courses at MIT has helped bring the lab to the GIR, and also contributed to the popularity of 5.111 courses offered regularly via MITx.

“I have had the pleasure of teaching with Matt on several occasions, and he is a fantastic educator. He is an innovator both inside and outside the classroom and has an unwavering commitment to his students’ success,” says Van Voorhis of Shoulders, who was named a 2022 MacVicar Faculty Fellow, and who received a Committed to Caring award through the Office of Graduate Education.

Shoulders also founded the MIT Homeschool Internship Program for Science and Technology, which brings high school students to campus for paid summer research experiences in labs across the Institute.

He is a founding member of the Department of Chemistry’s Quality of Life Committee and chair for the last six years, helping to improve all aspects of opportunity, professional development, and experience in the department: “countless changes that have helped make MIT a better place for all,” as Van Voorhis notes, including creating a peer mentoring program for graduate students and establishing universal graduate student exit interviews to collect data for department-wide assessment and improvement.

At the Institute level, Shoulders has served on the Committee on Graduate Programs, Committee on Sexual Misconduct Prevention and Response (in which he co-chaired the provost's working group on the Faculty and Staff Sexual Misconduct Survey), and the Committee on Assessment of Biohazards and Embryonic Stem Cell Research Oversight, among other roles.

Shoulders graduated summa cum laude from Virginia Tech in 2004, earning a BS in chemistry with a minor in biochemistry. He earned a PhD in chemistry at the University of Wisconsin at Madison in 2009 under Professor Ronald Raines. Following an American Cancer Society Postdoctoral Fellowship at Scripps Research Institute, working with professors Jeffery Kelly and Luke Wiseman, Shoulders joined the MIT Department of Chemistry faculty as an assistant professor in 2012. Shoulders also serves as an associate member of the Broad Institute and an investigator at the Center for Musculoskeletal Research at Massachusetts General Hospital.

Among his many awards, Shoulders has received an NIH Director's New Innovator Award under the NIH High-Risk, High-Reward Research Program; an NSF CAREER Award; an American Cancer Society Research Scholar Award; the Camille Dreyfus Teacher-Scholar Award; and most recently the Ono Pharma Foundation Breakthrough Science Award.


Chemists create red fluorescent dyes that may enable clearer biomedical imaging

The new dyes are based on boron-containing molecules that were previously too unstable for practical use.


MIT chemists have designed a new type of fluorescent molecule that they hope could be used for applications such as generating clearer images of tumors.

The new dye is based on a borenium ion — a positively charged form of boron that can emit light in the red to near-infrared range. Until recently, these ions have been too unstable to be used for imaging or other biomedical applications.

In a study appearing today in Nature Chemistry, the researchers showed that they could stabilize borenium ions by attaching them to a ligand. This approach allowed them to create borenium-containing films, powders, and crystals, all of which emit and absorb light in the red and near-infrared range.

That is important because near-IR light is easier to see when imaging structures deep within tissues, which could allow for clearer images of tumors and other structures in the body.

“One of the reasons why we focus on red to near-IR is because those types of dyes penetrate the body and tissue much better than light in the UV and visible range. Stability and brightness of those red dyes are the challenges that we tried to overcome in this study,” says Robert Gilliard, the Novartis Professor of Chemistry at MIT and the senior author of the study.

MIT research scientist Chun-Lin Deng is the lead author of the paper. Other authors include Bi Youan (Eric) Tra PhD ’25, former visiting graduate student Xibao Zhang, and graduate student Chonghe Zhang.

Stabilized borenium

Most fluorescent imaging relies on dyes that emit blue or green light. Those imaging agents work well in cells, but they are not as useful in tissue because low levels of blue and green fluorescence produced by the body interfere with the signal. Blue and green light also scatters in tissue, limiting how deeply it can penetrate.

Imaging agents that emit red fluorescence can produce clearer images, but most red dyes are inherently unstable and don’t produce a bright signal because of their low quantum yields (the ratio of photons emitted per photon of light absorbed). For many red dyes, the quantum yield is only about 1 percent.
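To make the quantum-yield numbers concrete, here is a toy comparison in Python. The photon count is arbitrary; the ~1 percent figure for typical red dyes comes from the article, and the ~30 percent figure is the range the researchers report for the new compounds.

```python
photons_absorbed = 1_000_000   # arbitrary number of absorbed photons

qy_typical_red = 0.01   # ~1 percent quantum yield, typical of many red dyes
qy_new_dye = 0.30       # ~30 percent, the range reported for the new dyes

emitted_typical = photons_absorbed * qy_typical_red
emitted_new = photons_absorbed * qy_new_dye

# Per photon absorbed, the new dyes emit roughly 30x more photons,
# which is what makes them so much brighter for imaging.
brightness_gain = emitted_new / emitted_typical
```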

Among the molecules that can emit near-infrared light are borenium cations — positively charged ions containing an atom of boron attached to three other atoms.

When these molecules were first discovered in the mid-1980s, they were considered “laboratory curiosities,” Gilliard says. These molecules were so unstable that they had to be handled in a sealed container called a glovebox to protect them from exposure to air, which can lead them to break down.

Later, chemists realized they could make these ions more stable by attaching them to molecules called ligands. Working with these more stable ions, Gilliard’s lab discovered in 2019 that they had some unusual properties: Namely, they could respond to changes in temperature by emitting different colors of light.

However, at that point, “there was a substantial problem in that they were still too reactive to be handled in open air,” Gilliard says.

His lab began working on new ways to further stabilize them using ligands known as carbodicarbenes (CDCs), which they reported in a 2022 study. Due to this stabilization, the compounds can now be studied and handled without using a glovebox. They are also resistant to being broken down by light, unlike many previous borenium-based compounds.

In the new study, Gilliard began experimenting with the anions (negatively charged ions) that are a part of the CDC-borenium compounds. Interactions between these anions and the borenium cation generate a phenomenon known as exciton coupling, the researchers discovered. This coupling, they found, shifted the molecules’ emission and absorption properties toward the infrared end of the color spectrum. These molecules also generated a high quantum yield, allowing them to shine more brightly.

“Not only are we in the correct region, but the efficiency of the molecules is also very suitable,” Gilliard says. “We’re up to percentages in the thirties for the quantum yields in the red region, which is considered to be high for that region of the electromagnetic spectrum.”

Potential applications

The researchers also showed that they could convert their borenium-containing compounds into several different states, including solid crystals, films, powders, and colloidal suspensions.

For biomedical imaging, Gilliard envisions that these borenium-containing materials could be encapsulated in polymers, allowing them to be injected into the body to use as an imaging dye. As a first step, his lab plans to work with researchers in the chemistry department at MIT and at the Broad Institute of MIT and Harvard to explore the potential of imaging these materials within cells.

Because of their temperature responsiveness, these materials could also be deployed as temperature sensors, for example, to monitor whether drugs or vaccines have been exposed to temperatures that are too high or low during shipping.

“For any type of application where temperature tracking is important, these types of ‘molecular thermometers’ can be very useful,” Gilliard says.

If incorporated into thin films, these molecules could also be useful as organic light-emitting diodes (OLEDs), particularly in new types of materials such as flexible screens, Gilliard says.

“The very high quantum yields achieved in the near-IR, combined with the excellent environmental stability, make this class of compounds extremely interesting for biological applications,” says Frieder Jaekle, a professor of chemistry at Rutgers University, who was not involved in the study. “Besides the obvious utility in bioimaging, the strong and tunable near-IR emission also makes these new fluorophores very appealing as smart materials for anticounterfeiting, sensors, switches, and advanced optoelectronic devices.”

In addition to exploring possible applications for these dyes, the researchers are now working on extending their color emission further into the near-infrared region, which they hope to achieve by incorporating additional boron atoms. Those extra boron atoms could make the molecules less stable, so the researchers are also working on new types of carbodicarbenes to help stabilize them.

The research was funded by the Arnold and Mabel Beckman Foundation and the National Institutes of Health.


MIT-affiliated physicists win McMillan Award for discovery of exotic electronic state

Jiaqi Cai and Zhengguang Lu independently discovered that electrons can become fractions of themselves.


Last year, MIT physicists reported in the journal Nature that electrons can become fractions of themselves in graphene, an atomically thin form of carbon. This exotic electronic state, called the fractional quantum anomalous Hall effect (FQAHE), could enable more robust forms of quantum computing.

Now two young MIT-affiliated physicists involved in the discovery of FQAHE have been named the 2025 recipients of the McMillan Award from the University of Illinois for their work. Jiaqi Cai and Zhengguang Lu won the award “for the discovery of fractional anomalous quantum Hall physics in 2D moiré materials.”

Cai is currently a Pappalardo Fellow at MIT working with Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics, and collaborating with several other labs at MIT including Long Ju, the Lawrence and Sarah W. Biedenharn Career Development Associate Professor in the MIT Department of Physics. He discovered FQAHE while working in the laboratory of Professor Xiaodong Xu at the University of Washington.

Lu discovered FQAHE while working as a postdoc in Ju's lab and has since become an assistant professor at Florida State University.

The two independent discoveries were made in the same year.
 
“The McMillan award is the highest honor that a young condensed matter physicist can receive,” says Ju. “My colleagues and I in the Condensed Matter Experiment and the Condensed Matter Theory Group are very proud of Zhengguang and Jiaqi.” 

Ju and Jarillo-Herrero are both also affiliated with the Materials Research Laboratory. 

In addition to a monetary prize and a plaque, Lu and Cai will give a colloquium on their work at the University of Illinois this fall.


A simple formula could guide the design of faster-charging, longer-lasting batteries

MIT researchers developed a model that explains lithium intercalation rates in lithium-ion batteries.


At the heart of all lithium-ion batteries is a simple reaction: Lithium ions dissolved in an electrolyte solution “intercalate” or insert themselves into a solid electrode during battery discharge. When they de-intercalate and return to the electrolyte, the battery charges.

This process happens thousands of times throughout the life of a battery. The amount of power that the battery can generate, and how quickly it can charge, depend on how fast this reaction happens. However, little is known about the exact mechanism of this reaction, or the factors that control its rate.

In a new study, MIT researchers have measured lithium intercalation rates in a variety of different battery materials and used that data to develop a new model of how the reaction is controlled. Their model suggests that lithium intercalation is governed by a process known as coupled ion-electron transfer, in which an electron is transferred to the electrode along with a lithium ion.

Insights gleaned from this model could guide the design of more powerful, faster-charging lithium-ion batteries, the researchers say.

“What we hope is enabled by this work is to get the reactions to be faster and more controlled, which can speed up charging and discharging,” says Martin Bazant, the Chevron Professor of Chemical Engineering and a professor of mathematics at MIT.

The new model may also help scientists understand why tweaking electrodes and electrolytes in certain ways leads to increased energy, power, and battery life — a process that has mainly been done by trial and error.

“This is one of these papers where now we began to unify the observations of reaction rates that we see with different materials and interfaces, in one theory of coupled electron and ion transfer for intercalation, building up previous work on reaction rates,” says Yang Shao-Horn, the J.R. East Professor of Engineering at MIT and a professor of mechanical engineering, materials science and engineering, and chemistry.

Shao-Horn and Bazant are the senior authors of the paper, which appears today in Science. The paper’s lead authors are Yirui Zhang PhD ’22, who is now an assistant professor at Rice University; Dimitrios Fraggedakis PhD ’21, who is now an assistant professor at Princeton University; Tao Gao, a former MIT postdoc who is now an assistant professor at the University of Utah; and MIT graduate student Shakul Pathak.

Modeling lithium flow

For many decades, scientists have hypothesized that the rate of lithium intercalation at a lithium-ion battery electrode is determined by how quickly lithium ions can diffuse from the electrolyte into the electrode. This reaction, they believed, was governed by a model known as the Butler-Volmer equation, originally developed almost a century ago to describe the rate of charge transfer during an electrochemical reaction.
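For reference, the classical Butler-Volmer relation can be written down in a few lines. This is the standard textbook form of the equation, with illustrative placeholder parameter values rather than values from the study:

```python
import math

F = 96485.0   # Faraday constant (C/mol)
R = 8.314     # molar gas constant (J/(mol*K))

def butler_volmer(eta, i0=1.0, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Net current density (in units of i0) at overpotential eta (V).

    i0 (exchange current density), alpha_a/alpha_c (transfer
    coefficients), and T are illustrative defaults, not parameters
    measured in the study.
    """
    f = F / (R * T)
    # Anodic (oxidation) branch minus cathodic (reduction) branch.
    return i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))
```

At zero overpotential the two branches cancel and no net current flows; a positive overpotential drives net oxidation and a negative one net reduction. It is deviations from rates predicted by this form that motivated the coupled ion-electron transfer model described below.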

However, when researchers have tried to measure lithium intercalation rates, the measurements they obtained were not always consistent with the rates predicted by the Butler-Volmer equation. Furthermore, obtaining consistent measurements across labs has been difficult, with different research teams reporting measurements for the same reaction that varied by a factor of up to 1 billion.

In the new study, the MIT team measured lithium intercalation rates using an electrochemical technique that involves applying repeated, short bursts of voltage to an electrode. They generated these measurements for more than 50 combinations of electrolytes and electrodes, including lithium nickel manganese cobalt oxide, which is commonly used in electric vehicle batteries, and lithium cobalt oxide, which is found in the batteries that power most cell phones, laptops, and other portable electronics.

For these materials, the measured rates are much lower than previously reported, and they do not correspond to what the traditional Butler-Volmer model would predict.

The researchers used the data to come up with an alternative theory of how lithium intercalation occurs at the surface of an electrode. This theory is based on the assumption that in order for a lithium ion to enter an electrode, an electron from the electrolyte solution must be transferred to the electrode at the same time.

“The electrochemical step is not lithium insertion, which you might think is the main thing, but it’s actually electron transfer to reduce the solid material that is hosting the lithium,” Bazant says. “Lithium is intercalated at the same time that the electron is transferred, and they facilitate one another.”

This coupled ion-electron transfer (CIET) lowers the energy barrier that must be overcome for the intercalation reaction to occur, making it more likely to happen. The mathematical framework of CIET allowed the researchers to make reaction rate predictions, which were validated by their experiments and substantially different from those made by the Butler-Volmer model.

Faster charging

In this study, the researchers also showed that they could tune intercalation rates by changing the composition of the electrolyte. For example, swapping in different anions can lower the amount of energy needed to transfer the lithium and electron, making the process more efficient.

“Tuning the intercalation kinetics by changing electrolytes offers great opportunities to enhance the reaction rates, alter electrode designs, and therefore enhance the battery power and energy,” Shao-Horn says.

Shao-Horn’s lab and its collaborators have been using automated experiments to make and test thousands of different electrolytes; the resulting data are used to develop machine-learning models that predict electrolytes with enhanced functions.

The findings could also help researchers design batteries that charge faster, by speeding up the lithium intercalation reaction. Another goal is reducing the side reactions that can cause battery degradation, in which electrons are pulled off the electrode and lost to the electrolyte.

“If you want to do that rationally, not just by trial and error, you need some kind of theoretical framework to know what are the important material parameters that you can play with,” Bazant says. “That’s what this paper tries to provide.”

The research was funded by Shell International Exploration and Production and the Toyota Research Institute through the D3BATT Center for Data-Driven Design of Rechargeable Batteries.


A cysteine-rich diet may promote regeneration of the intestinal lining, study suggests

The findings may offer a new way to help heal tissue damage from radiation or chemotherapy treatment.


A diet rich in the amino acid cysteine may have rejuvenating effects in the small intestine, according to a new study from MIT. This amino acid, the researchers discovered, can turn on an immune signaling pathway that helps stem cells to regrow new intestinal tissue.

This enhanced regeneration may help to heal injuries from radiation, which often occur in patients undergoing radiation therapy for cancer. The research was conducted in mice, but if future research shows similar results in humans, then delivering elevated quantities of cysteine, through diet or supplements, could offer a new strategy to help damaged tissue heal faster, the researchers say.

“The study suggests that if we give these patients a cysteine-rich diet or cysteine supplementation, perhaps we can dampen some of the chemotherapy or radiation-induced injury,” says Omer Yilmaz, director of the MIT Stem Cell Initiative, an associate professor of biology at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research. “The beauty here is we’re not using a synthetic molecule; we’re exploiting a natural dietary compound.”

While previous research has shown that certain types of diets, including low-calorie diets, can enhance intestinal stem cell activity, the new study is the first to identify a single nutrient that can help intestinal cells to regenerate.

Yilmaz is the senior author of the study, which appears today in Nature. Koch Institute postdoc Fangtao Chi is the paper’s lead author.

Boosting regeneration

It is well-established that diet can affect overall health: High-fat diets can lead to obesity, diabetes, and other health problems, while low-calorie diets have been shown to extend lifespans in many species. In recent years, Yilmaz’s lab has investigated how different types of diets influence stem cell regeneration, and found that high-fat diets, as well as short periods of fasting, can enhance stem cell activity in different ways.

“We know that macro diets such as high-sugar diets, high-fat diets, and low-calorie diets have a clear impact on health. But at the granular level, we know much less about how individual nutrients impact stem cell fate decisions, as well as tissue function and overall tissue health,” Yilmaz says.

In their new study, the researchers began by feeding mice a diet high in one of 20 different amino acids, the building blocks of proteins. For each group, they measured how the diet affected intestinal stem cell regeneration. Among these amino acids, cysteine had the most dramatic effects on stem cells and progenitor cells (immature cells that differentiate into adult intestinal cells).

Further studies revealed that cysteine initiates a chain of events leading to the activation of a population of immune cells called CD8 T cells. When cells in the lining of the intestine absorb cysteine from digested food, they convert it into coenzyme A (CoA), a cofactor that is released into the mucosal lining of the intestine. There, CD8 T cells absorb CoA, which stimulates them to begin proliferating and producing a cytokine called IL-22.

IL-22 is an important player in the regulation of intestinal stem cell regeneration, but until now, it wasn’t known that CD8 T cells can produce it to boost intestinal stem cells. Once activated, those IL-22-releasing T cells are primed to help combat any kind of injury that could occur within the intestinal lining.

“What’s really exciting here is that feeding mice a cysteine-rich diet leads to the expansion of an immune cell population that we typically don’t associate with IL-22 production and the regulation of intestinal stemness,” Yilmaz says. “What happens in a cysteine-rich diet is that the pool of cells that make IL-22 increases, particularly the CD8 T-cell fraction.”

These T cells tend to congregate within the lining of the intestine, so they are already in position when needed. The researchers found that the stimulation of CD8 T cells occurred primarily in the small intestine, not in any other part of the digestive tract, which they believe is because most of the protein that we consume is absorbed by the small intestine.

Healing the intestine

In this study, the researchers showed that regeneration stimulated by a cysteine-rich diet could help to repair radiation damage to the intestinal lining. Also, in work that has not been published yet, they showed that a high-cysteine diet had a regenerative effect following treatment with a chemotherapy drug called 5-fluorouracil. This drug, which is used to treat colon and pancreatic cancers, can also damage the intestinal lining.

Cysteine is found in many high-protein foods, including meat, dairy products, legumes, and nuts. The body can also synthesize its own cysteine, by converting the amino acid methionine to cysteine — a process that takes place in the liver. However, cysteine produced in the liver is distributed through the entire body and doesn’t lead to a buildup in the small intestine the way that consuming cysteine in the diet does.

“With our high-cysteine diet, the gut is the first place that sees a high amount of cysteine,” Chi says.

Cysteine has been previously shown to have antioxidant effects, which are also beneficial, but this study is the first to demonstrate its effect on intestinal stem cell regeneration. The researchers now hope to study whether it may also help other types of stem cells regenerate new tissues. In one ongoing study, they are investigating whether cysteine might stimulate hair follicle regeneration.

They also plan to further investigate some of the other amino acids that appear to influence stem cell regeneration.

“I think we’re going to uncover multiple new mechanisms for how these amino acids regulate cell fate decisions and gut health in the small intestine and colon,” Yilmaz says.

The research was funded, in part, by the National Institutes of Health, the V Foundation, the Koch Institute Frontier Research Program via the Kathy and Curt Marble Cancer Research Fund, the Bridge Project — a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center, the American Federation for Aging Research, the MIT Stem Cell Initiative, and the Koch Institute Support (core) Grant from the National Cancer Institute.


MIT cognitive scientists reveal why some sentences stand out from others

Sentences that are highly dissimilar from anything we’ve seen before are more likely to be remembered accurately.


“You still had to prove yourself.”

“Every cloud has a blue lining!”

Which of those sentences are you most likely to remember a few minutes from now? If you guessed the second, you’re probably correct.

According to a new study from MIT cognitive scientists, sentences that stick in your mind longer are those that have distinctive meanings, making them stand out from sentences you’ve previously seen. They found that meaning, not any other trait, is the most important feature when it comes to memorability.

“One might have thought that when you remember sentences, maybe it’s all about the visual features of the sentence, but we found that that was not the case. A big contribution of this paper is pinning down that it is the meaning-related space that makes sentences memorable,” says Greta Tuckute PhD ’25, who is now a research fellow at Harvard University’s Kempner Institute.

The findings support the hypothesis that sentences with distinctive meanings — like “Does olive oil work for tanning?” — are stored in brain space that is not cluttered with sentences that mean almost the same thing. Sentences with similar meanings end up densely packed together and are therefore more difficult to recognize confidently later on, the researchers believe.

“When you encode sentences that have a similar meaning, there’s feature overlap in that space. Therefore, a particular sentence you’ve encoded is not linked to a unique set of features, but rather to a whole bunch of features that may overlap with other sentences,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences (BCS), a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Tuckute and Thomas Clark, an MIT graduate student, are the lead authors of the paper, which appears in the Journal of Memory and Language. MIT graduate student Bryan Medina is also an author.

Distinctive sentences

What makes certain things more memorable than others is a longstanding question in cognitive science and neuroscience. In a 2011 study, Aude Oliva, now a senior research scientist at MIT and MIT director of the MIT-IBM Watson AI Lab, showed that not all items are created equal: Some types of images are much easier to remember than others, and people are remarkably consistent in what images they remember best.

In that study, Oliva and her colleagues found that, in general, images with people in them are the most memorable, followed by images of human-scale space and close-ups of objects. Least memorable are natural landscapes.

As a follow-up to that study, Fedorenko and Oliva, along with Ted Gibson, another faculty member in BCS, teamed up to determine if words also vary in their memorability. In a study published earlier this year, co-led by Tuckute and Kyle Mahowald, a former PhD student in BCS, the researchers found that the most memorable words are those that have the most distinctive meanings.

Words are categorized as more distinctive if they have a single meaning and few or no synonyms — for example, words like “pineapple” or “avalanche,” which were found to be very memorable. On the other hand, words that can have multiple meanings, such as “light,” or that have many synonyms, like “happy,” were more difficult for people to recognize accurately.

In the new study, the researchers expanded their scope to analyze the memorability of sentences. Just like words, some sentences have very distinctive meanings, while others communicate similar information in slightly different ways.

To do the study, the researchers assembled a collection of 2,500 sentences drawn from publicly available databases that compile text from novels, news articles, movie dialogues, and other sources. Each sentence that they chose contained exactly six words.

The researchers then presented a random selection of about 1,000 of these sentences to each study participant, including repeats of some sentences. Each of the 500 participants in the study was asked to press a button when they saw a sentence that they remembered seeing earlier.

The most memorable sentences — the ones where participants accurately and quickly indicated that they had seen them before — included strings such as “Homer Simpson is hungry, very hungry,” and “These mosquitoes are — well, guinea pigs.”

Those memorable sentences overlapped significantly with the sentences judged to have distinctive meanings, as estimated using the high-dimensional vector space of a large language model (LLM) known as Sentence BERT. That model generates vector representations of sentences, which can be used for tasks like judging the similarity in meaning between sentences. It provided the researchers with a distinctness score for each sentence, based on its semantic similarity to the other sentences.
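
The distinctness idea can be illustrated with a toy calculation: score each sentence by one minus its average cosine similarity to the other sentences’ embedding vectors. The three-dimensional vectors below are invented stand-ins; the study used high-dimensional Sentence BERT embeddings.

```python
import numpy as np

# Toy distinctness score: a sentence is distinctive if its embedding has
# low average cosine similarity to the other sentences in the pool.
# The 3-D vectors here are made up for illustration; real embeddings
# would come from a model such as Sentence BERT.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def distinctness(embeddings):
    """For each row, 1 minus its mean cosine similarity to all other rows."""
    n = len(embeddings)
    scores = []
    for i in range(n):
        sims = [cosine(embeddings[i], embeddings[j]) for j in range(n) if j != i]
        scores.append(1.0 - sum(sims) / len(sims))
    return scores

emb = np.array([
    [1.0, 0.1, 0.0],   # similar in meaning to the next sentence
    [0.9, 0.2, 0.0],
    [0.0, 0.1, 1.0],   # points in a different direction: more distinctive
])
scores = distinctness(emb)
print(scores[2] > scores[0])  # True: the odd one out scores highest
```
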

The researchers also evaluated the sentences using a model that predicts memorability based on the average memorability of the individual words in the sentence. This model performed fairly well at predicting overall sentence memorability, but not as well as Sentence BERT. This suggests that the meaning of a sentence as a whole — above and beyond the contributions from individual words — determines how memorable it will be, the researchers say.
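
The word-level baseline amounts to averaging per-word scores. A minimal sketch, with made-up word memorability values (the study estimated these from human recognition data):

```python
# Baseline model: predict a sentence's memorability as the mean
# memorability of its individual words. The per-word scores below are
# invented for illustration only.
word_memorability = {
    "every": 0.20, "cloud": 0.60, "has": 0.10,
    "a": 0.05, "blue": 0.50, "lining": 0.70,
}

def sentence_score(sentence, default=0.30):
    """Mean per-word memorability; unknown words get a default score."""
    words = [w.strip("!?.,").lower() for w in sentence.split()]
    return sum(word_memorability.get(w, default) for w in words) / len(words)

print(round(sentence_score("Every cloud has a blue lining!"), 3))  # 0.358
```

A model like this captures some of the variance, but the study found that whole-sentence meaning predicts memorability better than any such word-by-word average.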

Noisy memories

While cognitive scientists have long hypothesized that the brain’s memory banks have a limited capacity, the findings of the new study support an alternative hypothesis that would help to explain how the brain can continue forming new memories without losing old ones.

This alternative, known as the noisy representation hypothesis, says that when the brain encodes a new memory, be it an image, a word, or a sentence, it is represented in a noisy way — that is, this representation is not identical to the stimulus, and some information is lost. For example, for an image, you may not encode the exact viewing angle at which an object is shown, and for a sentence, you may not remember the exact construction used.

Under this theory, a new sentence would be encoded in a similar part of the memory space as sentences that carry a similar meaning, whether they were encountered recently or sometime across a lifetime of language experience. This jumbling of similar meanings together increases the amount of noise and can make it much harder, later on, to remember the exact sentence you have seen before.

“The representation is gradually going to accumulate some noise. As a result, when you see an image or a sentence for a second time, your accuracy at judging whether you’ve seen it before will be affected, and it’ll be less than 100 percent in most cases,” Clark says.

However, if a sentence has a unique meaning that is encoded in a less densely crowded space, it will be easier to pick out later on.

“Your memory may still be noisy, but your ability to make judgments based on the representations is less affected by that noise because the representation is so distinctive to begin with,” Clark says.
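
The noisy representation account can be simulated in a few lines: store items as points in a “meaning space,” add retrieval noise, and recognize by nearest neighbor. Items packed into a dense cluster get confused more often than a distinctive outlier. All numbers below are illustrative, not fit to the study’s data.

```python
import numpy as np

# Toy simulation of the noisy representation hypothesis. Items are points
# in a 2-D "meaning space"; retrieval adds Gaussian noise, and a probe is
# recognized by nearest neighbor. Parameters are illustrative only.
rng = np.random.default_rng(0)

memory = np.array([
    [0.0, 0.0],
    [0.1, 0.0],
    [0.0, 0.1],   # three near-synonymous items, densely packed
    [3.0, 3.0],   # one distinctive item, far from the cluster
])

def recognition_accuracy(target, trials=2000, noise=0.15):
    hits = 0
    for _ in range(trials):
        probe = memory[target] + rng.normal(0.0, noise, size=2)
        nearest = np.argmin(np.linalg.norm(memory - probe, axis=1))
        hits += (nearest == target)
    return hits / trials

crowded = recognition_accuracy(0)
distinct = recognition_accuracy(3)
print(distinct > crowded)  # True: the distinctive item is recognized more reliably
```

Even with identical noise, the isolated item is almost always identified correctly, while the clustered items are frequently mistaken for their neighbors — mirroring why distinctive sentences are recognized more accurately.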

The researchers now plan to study whether other features of sentences, such as more vivid and descriptive language, might also contribute to making them more memorable, and how the language system may interact with the hippocampal memory structures during the encoding and retrieval of memories.

The research was funded, in part, by the National Institutes of Health, the McGovern Institute, the Department of Brain and Cognitive Sciences, the Simons Center for the Social Brain, and the MIT Quest for Intelligence.


MIT joins in constructing the Giant Magellan Telescope

The major public-private partnership is expected to strengthen MIT research and US leadership in astronomy and engineering.


The following article is adapted from a joint press release issued today by MIT and the Giant Magellan Telescope.

MIT is lending its support to the Giant Magellan Telescope, joining the international consortium to advance the $2.6 billion observatory in Chile. The Institute’s participation, enabled by a transformational gift from philanthropists Phillip (Terry) Ragon ’72 and Susan Ragon, adds to the momentum to construct the Giant Magellan Telescope, whose 25.4-meter aperture will have five times the light-collecting area and up to 200 times the power of existing observatories.

“As philanthropists, Terry and Susan have an unerring instinct for finding the big levers: those interventions that truly transform the scientific landscape,” says MIT President Sally Kornbluth. “We saw this with their founding of the Ragon Institute, which pursues daring approaches to harnessing the immune system to prevent and cure human diseases. With today’s landmark gift, the Ragons enable an equally lofty mission to better understand the universe — and we could not be more grateful for their visionary support."

MIT will be the 16th member of the international consortium advancing the Giant Magellan Telescope and the 10th participant based in the United States. Together, the consortium has invested $1 billion in the observatory — the largest-ever private investment in ground-based astronomy. The Giant Magellan Telescope is already 40 percent under construction, with major components being designed and manufactured across 36 U.S. states.

“MIT is honored to join the consortium and participate in this exceptional scientific endeavor,” says Ian A. Waitz, MIT’s vice president for research. “The Giant Magellan Telescope will bring tremendous new capabilities to MIT astronomy and to U.S. leadership in fundamental science. The construction of this uniquely powerful telescope represents a vital private and public investment in scientific excellence for decades to come.”

MIT brings to the consortium powerful scientific capabilities and a legacy of astronomical excellence. MIT’s departments of Physics and of Earth, Atmospheric and Planetary Sciences, and the MIT Kavli Institute for Astrophysics and Space Research, are internationally recognized for research in exoplanets, cosmology, and environments of extreme gravity, such as black holes and compact binary stars. MIT’s involvement will strengthen the Giant Magellan Telescope’s unique capabilities in high-resolution spectroscopy, adaptive optics, and the search for life beyond Earth. It also deepens a long-standing scientific relationship: MIT is already a partner in the existing twin Magellan Telescopes at Las Campanas Observatory in Chile — one of the most scientifically valuable observing sites on Earth, and the same site where the Giant Magellan Telescope is now under construction.

“Since Galileo’s first spyglass, the world’s largest telescope has doubled in aperture every 40 to 50 years,” says Robert A. Simcoe, director of the MIT Kavli Institute and the Francis L. Friedman Professor of Physics. “Each generation’s leading instruments have resolved important scientific questions of the day and then surprised their builders with new discoveries not yet even imagined, helping humans understand our place in the universe. Together with the Giant Magellan Telescope, MIT is helping to realize our generation’s contribution to this lineage, consistent with our mission to advance the frontier of fundamental science by undertaking the most audacious and advanced engineering challenges.”

Contributing to the national strategy

MIT’s support comes at a pivotal time for the observatory. In June 2025, the National Science Foundation (NSF) advanced the Giant Magellan Telescope into its Final Design Phase, one of the final steps before it becomes eligible for federal construction funding. To demonstrate readiness and a strong commitment to U.S. leadership, the consortium offered to privately fund this phase, which is traditionally supported by the NSF.

MIT’s investment is an integral part of the national strategy to secure U.S. access to the next generation of research facilities known as “extremely large telescopes.” The Giant Magellan Telescope is a core partner in the U.S. Extremely Large Telescope Program, the nation’s top priority in astronomy. The National Academies’ Astro2020 Decadal Survey called the program “absolutely essential if the United States is to maintain a position as a leader in ground-based astronomy.” This long-term strategy also includes the recently commissioned Vera C. Rubin Observatory in Chile. Rubin is scanning the sky to detect rare, fast-changing cosmic events, while the Giant Magellan Telescope will provide the sensitivity, resolution, and spectroscopic instruments needed to study them in detail. Together, these Southern Hemisphere observatories will give U.S. scientists the tools they need to lead 21st-century astrophysics.

“Without direct access to the Giant Magellan Telescope, the U.S. risks falling behind in fundamental astronomy, as Rubin’s most transformational discoveries will be utilized by other nations with access to their own ‘extremely large telescopes’ under development,” says Walter Massey, board chair of the Giant Magellan Telescope.

MIT’s participation brings the United States a step closer to completing the promise of this powerful new observatory on a globally competitive timeline. With federal construction funding, it is expected that the observatory could reach 90 percent completion in less than two years and become operational by the 2030s.

“MIT brings critical expertise and momentum at a time when global leadership in astronomy hangs in the balance,” says Robert Shelton, president of the Giant Magellan Telescope. “With MIT, we are not just adding a partner; we are accelerating a shared vision for the future and reinforcing the United States’ position at the forefront of science.”

Other members of the Giant Magellan Telescope consortium include the University of Arizona, Carnegie Institution for Science, The University of Texas at Austin, Korea Astronomy and Space Science Institute, University of Chicago, São Paulo Research Foundation (FAPESP), Texas A&M University, Northwestern University, Harvard University, Astronomy Australia Ltd., Australian National University, Smithsonian Institution, Weizmann Institute of Science, Academia Sinica Institute of Astronomy and Astrophysics, and Arizona State University.

A boon for astrophysics research and education

Access to the world’s best optical telescopes is a critical resource for MIT researchers. More than 150 individual science programs at MIT have relied on major astronomical observatories in the past three years, engaging faculty, researchers, and students in investigations into the marvels of the universe. Recent research projects have included chemical studies of the universe’s oldest stars, led by Professor Anna Frebel; spectroscopy of stars shredded by dormant black holes, led by Professor Erin Kara; and measurements of a white dwarf teetering on the precipice of a black hole, led by Professor Kevin Burdge. 

“Over many decades, researchers at the MIT Kavli Institute have used unparalleled instruments to discover previously undetected cosmic phenomena from both ground-based observations and spaceflight missions,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics. “I have no doubt our brilliant colleagues will carry on that tradition with the Giant Magellan Telescope, and I can’t wait to see what they will discover next.”

The Giant Magellan Telescope will also provide a platform for advanced R&D in remote sensing, creating opportunities to build custom infrared and optical spectrometers and high-speed imagers to further study our universe.

“One cannot have a leading physics program without a leading astrophysics program. Access to time on the Giant Magellan Telescope will ensure that future generations of MIT researchers will continue to work at the forefront of astrophysical discovery for decades to come,” says Deepto Chakrabarty, head of the MIT Department of Physics, the William A. M. Burden Professor in Astrophysics, and principal investigator at the MIT Kavli Institute. “Our institutional access will help attract and retain top researchers in astrophysics, planetary science, and advanced optics, and will give our PhD students and postdocs unrivaled educational opportunities.”


The first animals on Earth may have been sea sponges, study suggests

MIT researchers traced chemical fossils in ancient rocks to the ancestors of modern-day demosponges.


A team of MIT geochemists has unearthed new evidence in very old rocks suggesting that some of the first animals on Earth were likely ancestors of the modern sea sponge.

In a study appearing today in the Proceedings of the National Academy of Sciences, the researchers report that they have identified “chemical fossils” that may have been left by ancient sponges in rocks that are more than 541 million years old. A chemical fossil is a remnant of a biomolecule that originated from a living organism that has since been buried, transformed, and preserved in sediment, sometimes for hundreds of millions of years.

The newly identified chemical fossils are special types of steranes, which are the geologically stable form of sterols, such as cholesterol, that are found in the cell membranes of complex organisms. The researchers traced these special steranes to a class of sea sponges known as demosponges. Today, demosponges come in a huge variety of sizes and colors, and live throughout the oceans as soft and squishy filter feeders. Their ancient counterparts may have shared similar characteristics.

“We don’t know exactly what these organisms would have looked like back then, but they absolutely would have lived in the ocean, they would have been soft-bodied, and we presume they didn’t have a silica skeleton,” says Roger Summons, the Schlumberger Professor of Geobiology Emeritus in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

The group’s discovery of sponge-specific chemical fossils offers strong evidence that the ancestors of demosponges were among the first animals to evolve, and that they likely did so much earlier than the rest of Earth’s major animal groups.

The study’s authors, including Summons, are lead author and former MIT EAPS Crosby Postdoctoral Fellow Lubna Shawar, who is now a research scientist at Caltech, along with Gordon Love from the University of California at Riverside, Benjamin Uveges of Cornell University, Alex Zumberge of GeoMark Research in Houston, Paco Cárdenas of Uppsala University in Sweden, and José-Luis Giner of the State University of New York College of Environmental Science and Forestry.

Sponges on steroids

The new study builds on findings that the group first reported in 2009. In that study, the team identified the first chemical fossils that appeared to derive from ancient sponges. They analyzed rock samples from an outcrop in Oman and found a surprising abundance of steranes that they determined were the preserved remnants of 30-carbon (C30) sterols — a rare form of steroid that they showed was likely derived from ancient sea sponges.

The steranes were found in rocks that were very old and formed during the Ediacaran Period — which spans from about 635 million to roughly 541 million years ago. This period took place just before the Cambrian, when the Earth experienced a sudden and global explosion of complex multicellular life. The team’s discovery suggested that ancient sponges appeared much earlier than most multicellular life, and were possibly one of Earth’s first animals.

However, soon after these findings were released, alternative hypotheses emerged to explain the origins of the C30 steranes, including the possibility that the chemicals were generated by other groups of organisms or by nonliving geological processes.

The team says the new study reinforces their earlier hypothesis that ancient sponges left behind this special chemical record, as they have identified a new chemical fossil in the same Precambrian rocks that is almost certainly biological in origin.

Building evidence

Just as in their previous work, the researchers looked for chemical fossils in rocks that date back to the Ediacaran Period. They acquired samples from drill cores and outcrops in Oman, western India, and Siberia, and analyzed the rocks for signatures of steranes, the geologically stable form of sterols found in all eukaryotes (plants, animals, and any organism with a nucleus and membrane-bound organelles).

“You’re not a eukaryote if you don’t have sterols or comparable membrane lipids,” Summons says.

A sterol’s core structure consists of four fused carbon rings. Additional carbon side chains and chemical groups can attach to and extend a sterol’s structure, depending on what an organism’s particular genes can produce. In humans, for instance, the sterol cholesterol contains 27 carbon atoms, while the sterols in plants generally have 29 carbon atoms.

“It’s very unusual to find a sterol with 30 carbons,” Shawar says.

The chemical fossil the researchers identified in 2009 was a 30-carbon sterol. What’s more, the team determined that the compound owed its synthesis to a distinctive enzyme, encoded by a gene that is common to demosponges.

In their new study, the team focused on the chemistry of these compounds and realized the same sponge-derived gene could produce an even rarer sterol, with 31 carbon atoms (C31). When they analyzed their rock samples for C31 steranes, they found them in surprising abundance, along with the aforementioned C30 steranes.

“These special steranes were there all along,” Shawar says. “It took asking the right questions to seek them out and to really understand their meaning and from where they come.”

The researchers also obtained samples of modern-day demosponges and analyzed them for C31 sterols. They found that, indeed, the sterols — biological precursors of the C31 steranes found in rocks — are present in some species of contemporary demosponges. Going a step further, they chemically synthesized eight different C31 sterols in the lab as reference standards to verify their chemical structures. Then, they processed the molecules in ways that simulate how the sterols would change when deposited, buried, and pressurized over hundreds of millions of years. They found that the products of only two such sterols were an exact match with the C31 steranes found in the ancient rock samples. The presence of these two, and the absence of the other six, demonstrates that the compounds were not produced by a random nonbiological process.

The findings, reinforced by multiple lines of inquiry, strongly support the idea that the steranes that were found in ancient rocks were indeed produced by living organisms, rather than through geological processes. What’s more, those organisms were likely the ancestors of demosponges, which to this day have retained the ability to produce the same series of compounds.

“It’s a combination of what’s in the rock, what’s in the sponge, and what you can make in a chemistry laboratory,” Summons says. “You’ve got three supportive, mutually agreeing lines of evidence, pointing to these sponges being among the earliest animals on Earth.”

“In this study we show how to authenticate a biomarker, verifying that a signal truly comes from life rather than contamination or non-biological chemistry,” Shawar adds.

Now that the team has shown C30 and C31 sterols are reliable signals of ancient sponges, they plan to look for the chemical fossils in ancient rocks from other regions of the world. From the rocks they have sampled so far, they can tell only that the sediments, and the sponges, formed sometime during the Ediacaran Period. With more samples, they will have a chance to home in on when some of the first animals took form.

This research was supported, in part, by the MIT Crosby Fund, the Distinguished Postdoctoral Fellowship program, the Simons Foundation Collaboration on the Origins of Life, and the NASA Exobiology Program. 


How the brain splits up vision without you even noticing

As an object moves across your field of view, the brain seamlessly hands off visual processing from one hemisphere to the other like cell phone towers or relay racers do, a new MIT study shows.


The brain divides vision between its two hemispheres — what’s on your left is processed by your right hemisphere, and vice versa — but your experience with every bike or bird that you see zipping by is seamless. A new study by neuroscientists at The Picower Institute for Learning and Memory at MIT reveals how the brain handles the transition.

“It’s surprising to some people to hear that there’s some independence between the hemispheres, because that doesn’t really correspond to how we perceive reality,” says Earl K. Miller, Picower Professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences. “In our consciousness, everything seems to be unified.”

There are advantages to processing vision separately on either side of the brain, including the ability to keep track of more things at once, Miller and other researchers have found. But neuroscientists have been eager to fully understand how perception nonetheless ends up appearing so unified.

Led by Picower Fellow Matthew Broschard and Research Scientist Jefferson Roy, the research team measured neural activity in the brains of animals as they tracked objects crossing their field of view. The results reveal that different frequencies of brain waves encoded and then transferred information from one hemisphere to the other in advance of the crossing, and then held on to the object representation in both hemispheres until after the crossing was complete. The process is analogous to how relay racers hand off a baton, how a child swings from one monkey bar to the next, and how cellphone towers hand off a call from one to the next as a train passenger travels through their area. In all cases, both towers or hands actively hold what’s being transferred until the handoff is confirmed.

Witnessing the handoff

To conduct the study, published Sept. 19 in the Journal of Neuroscience, the researchers measured both the electrical spiking of individual neurons and the various frequencies of brain waves that emerge from the coordinated activity of many neurons. They studied the dorsolateral and ventrolateral prefrontal cortex in both hemispheres, areas associated with executive brain functions.

The power fluctuations of the wave frequencies in each hemisphere told the researchers a clear story about how the subjects’ brains transferred information from the “sending” to the “receiving” hemisphere whenever a target object crossed the middle of their field of view. In the experiments, the target was accompanied by a distractor object on the opposite side of the screen to confirm that the subjects were consciously paying attention to the target object’s motion, and not just indiscriminately glancing at whatever happened to pop up onto the screen.

The highest-frequency “gamma” waves, which encode sensory information, peaked in both hemispheres when the subjects first looked at the screen and again when the two objects appeared. When a color change signaled which object was the target to track, the gamma increase was only evident in the “sending” hemisphere (on the side opposite the target object), as expected. Meanwhile, the power of somewhat lower-frequency “beta” waves, which regulate when gamma waves are active, varied inversely with the gamma waves. These sensory encoding dynamics were stronger in the ventrolateral locations than in the dorsolateral ones.

Meanwhile, two distinct bands of lower-frequency waves showed greater power in the dorsolateral locations at key moments related to achieving the handoff. About a quarter of a second before a target object crossed the middle of the field of view, “alpha” waves ramped up in both hemispheres and then peaked just after the object crossed. Meanwhile, “theta” band waves peaked after the crossing was complete, only in the “receiving” hemisphere (opposite from the target’s new position).
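The band-limited power the researchers tracked can be illustrated with a toy computation: a naive discrete Fourier transform estimates how much of a signal’s energy falls in, say, the alpha versus theta range. This is a stdlib-only sketch on synthetic data, not the study’s analysis pipeline, and the band boundaries used are conventional approximations.

```python
# Toy illustration of band-limited power, the quantity tracked in the study.
# Stdlib-only naive DFT; band edges (alpha ~8-12 Hz, theta ~4-8 Hz) are
# conventional approximations, not taken from the paper.
import cmath
import math

def band_power(signal, fs, lo, hi):
    """Sum squared DFT magnitudes for frequencies in [lo, hi] Hz (naive O(n^2) DFT)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2
    return power

fs = 100  # samples per second
t = [i / fs for i in range(200)]  # 2 seconds of signal
# Synthetic trace: strong 10 Hz (alpha) component plus a weaker 6 Hz (theta) one
signal = [math.sin(2 * math.pi * 10 * x) + 0.3 * math.sin(2 * math.pi * 6 * x)
          for x in t]

alpha = band_power(signal, fs, 8, 12)
theta = band_power(signal, fs, 4, 8)
print(alpha > theta)  # alpha power dominates this synthetic trace -> True
```

Comparing how such band powers rise and fall around an event, here the target crossing the midline, is the essence of the time-resolved analysis described above.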

Accompanying the pattern of wave peaks, neuron spiking data showed how the brain’s representation of the target’s location traveled. Using decoder software, which interprets what information the spikes represent, the researchers could see the target representation emerge in the sending hemisphere’s ventrolateral location when it was first cued by the color change. Then they could see that as the target neared the middle of the field of view, the receiving hemisphere joined the sending hemisphere in representing the object, so that they both encoded the information during the transfer.

Doing the wave

Taken together, the results showed that after the sending hemisphere initially encoded the target with a ventrolateral interplay of beta and gamma waves, a dorsolateral ramp up of alpha waves caused the receiving hemisphere to anticipate the handoff by mirroring the sending hemisphere’s encoding of the target information. Alpha peaked just after the target crossed the middle of the field of view, and when the handoff was complete, theta peaked in the receiving hemisphere as if to say, “I got it.”

And in trials where the target never crossed the middle of the field of view, these handoff dynamics were not apparent in the measurements.

The study shows that the brain is not simply tracking objects in one hemisphere and then just picking them up anew when they enter the field of view of the other hemisphere.

“These results suggest there are active mechanisms that transfer information between cerebral hemispheres,” the authors wrote. “The brain seems to anticipate the transfer and acknowledge its completion.”

But they also note, based on other studies, that the system of interhemispheric coordination can sometimes appear to break down in certain neurological conditions including schizophrenia, autism, depression, dyslexia, and multiple sclerosis. The new study may lend insight into the specific dynamics needed for it to succeed.

In addition to Broschard, Roy, and Miller, the paper’s other authors are Scott Brincat and Meredith Mahnke.

Funding for the study came from the Office of Naval Research, the National Eye Institute of the National Institutes of Health, The Freedom Together Foundation, and The Picower Institute for Learning and Memory.


By attracting the world’s sharpest talent, MIT helps keep the US a step ahead

MIT is a global community whose international engagement bestows benefits well beyond the Cambridge campus.


Just as the United States has prospered through its ability to draw talent from every corner of the globe, so too has MIT thrived as a magnet for the world’s keenest and most curious minds — many of whom remain here to invent solutions, create companies, and teach future leaders, contributing to America’s success.

President Ronald Reagan remarked in 1989 that the United States leads the world “because, unique among nations, we draw our people — our strength — from every country and every corner of the world. And by doing so we continuously renew and enrich our nation.” Those words still ring true 36 years later — and the sentiment resonates especially at MIT.

"To find people with the drive, skill, and daring to see, discover, and invent things no one else can, we open ourselves to talent from every corner of the United States and from around the globe,” says MIT President Sally Kornbluth. “MIT is an American university, proudly so — but we would be gravely diminished without the students and scholars who join us from other nations."

MIT’s steadfast commitment to attracting the best and brightest talent from around the world has contributed not just to its own success, but also to that of the nation as a whole. MIT’s stature as an international hub of education and innovation adds value to the U.S. economy and competitiveness in myriad ways — from foreign-born faculty delivering breakthroughs here and founding American companies that create American jobs, to international students contributing over $264 million to the U.S. economy during the 2023-24 school year.

Highlighting the extent and value of its global character, the Office of the Vice Provost for International Activities recently expanded a new video series, “The World at MIT.” In it, 20 faculty members born outside the United States tell how they dreamed of coming to MIT while growing up abroad and eventually joined the MIT faculty, where they’ve helped establish and maintain global leadership in science while teaching the next generation of innovators. A common thread running through their stories is the importance of the campus’s distinct nature as a community that is both profoundly American and deeply connected to the people, institutions, and concerns of regions and nations around the globe.

Joining the MIT faculty in 1980, MIT President Emeritus L. Rafael Reif knew almost instantly that he would stay.

“I was impressed by the richness of the variety of groups of people and cultures here,” says Reif, who moved to the United States from Venezuela and eventually served as MIT’s president from 2012 to 2022. “There is no richer place than MIT, because every point of view is here. That is what makes the place so special.”

The benefits of welcoming international students and researchers to campus extend well beyond MIT. More than 17,000 MIT alumni born elsewhere now call the United States home, for example, and many have founded U.S.-based companies that have generated billions of dollars in economic activity.

Contributing to America’s prestige internationally, one-third of MIT’s 104 Nobel laureates — including seven of the eight Nobel winners over the last decade — were born abroad. Drawn to MIT, they went on to make their breakthroughs in the United States. Among them is Lester Wolfe Professor of Chemistry Moungi Bawendi, who won the Nobel Prize in Chemistry in 2023 for his work in the chemical production of high-quality quantum dots.   

“MIT is a great environment. It’s very collegial, very collaborative. As a result, we also have amazing students,” says Bawendi, who lived in France and Tunisia as a child before moving to the U.S. “I couldn’t have done my first three years here, which eventually got me a Nobel Prize, without having really bold, smart, adventurous graduate students.”

The give-and-take among MIT faculty and students also inspires electrical engineering and computer science professor Akintunde Ibitayo (Tayo) Akinwande, who grew up in Nigeria.

“Anytime I teach a class, I always learn something from my students’ probing questions,” Akinwande says. “It gives me new insights sometimes, and that’s always the kind of environment I like — where I’m learning something new all the time.”

MIT’s global vibe inspires its students not only to explore worlds of ideas in campus labs and classrooms, but also to journey out into the world itself. Forty-three percent of undergraduates pursued international experiences during the last academic year — taking courses at foreign universities, conducting research, or interning at multinational companies. MIT students and faculty alike are regularly engaged in research outside the United States, addressing some of the world’s toughest challenges and devising solutions that can be deployed back home, as well as abroad. In so doing, they embody MIT’s motto of “mens et manus” (“mind and hand”), reflecting the educational ideals of MIT’s founders who promoted education for practical application.

As someone who loves exploring “lofty questions” along with the practical design of things, Nergis Mavalvala found a perfect fit at MIT and calls her position as the Marble Professor of Astrophysics and dean of the School of Science “the best job in the world.”

“Everybody here wants to make the world a better place and are using their intellectual gifts and their education to do so,” says Mavalvala, who emigrated from Pakistan. “And I think that’s an amazing community to be part of.”

Daniela Rus agrees. Now the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of MIT’s Computer Science and Artificial Intelligence Laboratory, Rus was drawn to the practical application of mathematics while still a student in her native Romania.   

“And so, now here I am at MIT, essentially bringing together the world of science and math with the world of making things,” Rus says. “I’ve been here for two decades, and it’s been an extraordinary journey.”

The daughter of an Albert Einstein aficionado, Yukiko Yamashita grew up in Japan thinking of science not as a job, but a calling. MIT, where she is a professor of biology, is a place where people “are really open to unconventional ideas” and “intellectual freedom” thrives.

“There is something sacred about doing science. That’s how I grew up,” Yamashita says. “There are some distinct MIT characteristics. In a good way, people can’t let go. Every day, I am creating more mystery than I answer.”

For more about the paths that brought Yamashita and others to MIT and stories of how their disparate personal histories enrich the campus and wider community, visit the “World at MIT” videos website.

“Our global community’s multiplicity of ideas, experiences, and perspectives contributes enormously to MIT’s innovative and entrepreneurial spirit and, by extension, to the innovation and competitiveness of the U.S.,” says Vice Provost for International Activities Duane Boning, whose department developed the video series. “The bottom line is that both MIT and the U.S. grow stronger when we harness the talents of the world’s best and brightest.”


MIT engineers develop a magnetic transistor for more energy-efficient electronics

A new device concept opens the door to compact, high-performance transistors with built-in memory.


Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; senior authors Frances Ross, the TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. The team’s new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.


MIT affiliates win AI for Math grants to accelerate mathematical discovery

Department of Mathematics researchers David Roe and Andrew Sutherland seek to advance automated theorem proving; four additional MIT alumni also awarded.


MIT Department of Mathematics researchers David Roe ’06 and Andrew Sutherland ’90, PhD ’07 are among the inaugural recipients of the Renaissance Philanthropy and XTX Markets’ AI for Math grants.

Four additional MIT alumni — Anshula Gandhi ’19; Viktor Kunčak SM ’01, PhD ’07; Gireeja Ranade ’07; and Damiano Testa PhD ’05 — were also honored for separate projects.

The first 29 winning projects will support mathematicians and researchers at universities and organizations working to develop artificial intelligence systems that help advance mathematical discovery and research across several key tasks.

Roe and Sutherland, along with Chris Birkbeck of the University of East Anglia, will use their grant to boost automated theorem proving by building connections between the L-Functions and Modular Forms Database (LMFDB) and the Lean4 mathematics library (mathlib).

“Automated theorem provers are quite technically involved, but their development is under-resourced,” says Sutherland. With AI technologies such as large language models (LLMs), the barrier to entry for these formal tools is dropping rapidly, making formal verification frameworks accessible to working mathematicians. 

Mathlib is a large, community-driven mathematical library for the Lean theorem prover, a formal system that verifies the correctness of every step in a proof. Mathlib currently contains on the order of 10^5 mathematical results (such as lemmas, propositions, and theorems). The LMFDB, a massive, collaborative online resource that serves as a kind of “encyclopedia” of modern number theory, contains more than 10^9 concrete statements. Sutherland and Roe are managing editors of the LMFDB.

Roe and Sutherland’s grant will be used for a project that aims to augment both systems, making the LMFDB’s results available within mathlib as assertions that have not yet been formally proved, and providing precise formal definitions of the numerical data stored within the LMFDB. This bridge will benefit both human mathematicians and AI agents, and provide a framework for connecting other mathematical databases to formal theorem-proving systems.
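In Lean terms, a database fact could surface as an assertion whose formal proof is deferred, which downstream proofs can then cite like any lemma. The sketch below is purely illustrative: the names, the `opaque` lookup, and the use of an `axiom` are hypothetical stand-ins, not actual LMFDB or mathlib code.

```lean
-- Hypothetical sketch (not actual mathlib/LMFDB code): exposing a stored
-- LMFDB fact to Lean as an assertion whose formal proof is deferred.

/-- Illustrative stand-in for a rank lookup backed by LMFDB data. -/
opaque lmfdbRank : String → Nat

/-- The LMFDB records that elliptic curve 11.a1 has Mordell-Weil rank 0;
    inside Lean it arrives as a trusted assertion rather than a proof. -/
axiom lmfdb_rank_11a1 : lmfdbRank "11.a1" = 0

/-- Downstream proofs can cite the assertion like any other lemma. -/
theorem rank_11a1_lt_one : lmfdbRank "11.a1" < 1 := by
  rw [lmfdb_rank_11a1]
  decide
```

Marking such statements as trusted data rather than theorems keeps the boundary between verified and merely computed knowledge explicit, which is the point of the bridge the researchers describe.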

The main obstacles to automating mathematical discovery and proof are the limited amount of formalized math knowledge, the high cost of formalizing complex results, and the gap between what is computationally accessible and what is feasible to formalize.

To address these obstacles, the researchers will use the funding to build tools for accessing the LMFDB from mathlib, making a large database of unformalized mathematical knowledge accessible to a formal proof system. This approach enables proof assistants to identify specific targets for formalization without the need to formalize the entire LMFDB corpus in advance.

“Making a large database of unformalized number-theoretic facts available within mathlib will provide a powerful technique for mathematical discovery, because the set of facts an agent might wish to consider while searching for a theorem or proof is exponentially larger than the set of facts that eventually need to be formalized in actually proving the theorem,” says Roe.

The researchers note that proving new theorems at the frontier of mathematical knowledge often involves steps that rely on a nontrivial computation. For example, Andrew Wiles’ proof of Fermat’s Last Theorem uses what is known as the “3-5 trick” at a crucial point in the proof.

“This trick depends on the fact that the modular curve X_0(15) has only finitely many rational points, and none of those rational points correspond to a semi-stable elliptic curve,” according to Sutherland. “This fact was known well before Wiles’ work, and is easy to verify using computational tools available in modern computer algebra systems, but it is not something one can realistically prove using pencil and paper, nor is it necessarily easy to formalize.”

While formal theorem provers are being connected to computer algebra systems for more efficient verification, tapping into computational outputs in existing mathematical databases offers several other benefits.

Using stored results leverages the thousands of CPU-years of computation time already spent in creating the LMFDB, saving money that would otherwise be needed to redo these computations. Having precomputed information available also makes it feasible to search for examples or counterexamples without knowing ahead of time how broad the search must be. In addition, mathematical databases are curated repositories, not simply random collections of facts.

“The fact that number theorists emphasized the role of the conductor in databases of elliptic curves has already proved to be crucial to one notable mathematical discovery made using machine learning tools: murmurations,” says Sutherland.

“Our next steps are to build a team, engage with both the LMFDB and mathlib communities, start to formalize the definitions that underpin the elliptic curve, number field, and modular form sections of the LMFDB, and make it possible to run LMFDB searches from within mathlib,” says Roe. “If you are an MIT student interested in getting involved, feel free to reach out!” 


What does the future hold for generative AI?

At the inaugural MIT Generative AI Impact Consortium Symposium, researchers and business leaders discussed potential advancements centered on this powerful technology.


When OpenAI introduced ChatGPT to the world in 2022, it brought generative artificial intelligence into the mainstream and started a snowball effect that led to its rapid integration into industry, scientific research, health care, and the everyday lives of people who use the technology.

What comes next for this powerful but imperfect tool?

With that question in mind, hundreds of researchers, business leaders, educators, and students gathered at MIT’s Kresge Auditorium for the inaugural MIT Generative AI Impact Consortium (MGAIC) Symposium on Sept. 17 to share insights and discuss the potential future of generative AI.

“This is a pivotal moment — generative AI is moving fast. It is our job to make sure that, as the technology keeps advancing, our collective wisdom keeps pace,” said MIT Provost Anantha Chandrakasan to kick off this first symposium of the MGAIC, a consortium of industry leaders and MIT researchers launched in February to harness the power of generative AI for the good of society.

Underscoring the critical need for this collaborative effort, MIT President Sally Kornbluth said that the world is counting on faculty, researchers, and business leaders like those in MGAIC to tackle the technological and ethical challenges of generative AI as the technology advances.

“Part of MIT’s responsibility is to keep these advances coming for the world. … How can we manage the magic [of generative AI] so that all of us can confidently rely on it for critical applications in the real world?” Kornbluth said.

To keynote speaker Yann LeCun, chief AI scientist at Meta, the most exciting and significant advances in generative AI will most likely not come from continued improvements or expansions of large language models like Llama, GPT, and Claude. Through training, these enormous generative models learn patterns in huge datasets to produce new outputs.

Instead, LeCun and others are working on the development of “world models” that learn the same way an infant does — by seeing and interacting with the world around them through sensory input.

“A 4-year-old has seen as much data through vision as the largest LLM. … The world model is going to become the key component of future AI systems,” he said.

A robot with this type of world model could learn to complete a new task on its own with no training. LeCun sees world models as the best approach for companies to make robots smart enough to be generally useful in the real world.

But even if future generative AI systems do get smarter and more human-like through the incorporation of world models, LeCun doesn’t worry about robots escaping from human control.

Scientists and engineers will need to design guardrails to keep future AI systems on track, but as a society, we have already been doing this for millennia by designing rules to align human behavior with the common good, he said.

“We are going to have to design these guardrails, but by construction, the system will not be able to escape those guardrails,” LeCun said.

Keynote speaker Tye Brady, chief technologist at Amazon Robotics, also discussed how generative AI could impact the future of robotics.

For instance, Amazon has already incorporated generative AI technology into many of its warehouses to optimize how robots travel and move material to streamline order processing.

He expects many future innovations will focus on the use of generative AI in collaborative robotics by building machines that allow humans to become more efficient.

“GenAI is probably the most impactful technology I have witnessed throughout my whole robotics career,” he said.

Other presenters and panelists discussed the impacts of generative AI in businesses, from large-scale enterprises like Coca-Cola and Analog Devices to startups like health care AI company Abridge.

Several MIT faculty members also spoke about their latest research projects, including the use of AI to reduce noise in ecological image data, designing new AI systems that mitigate bias and hallucinations, and enabling LLMs to learn more about the visual world.

After a day spent exploring new generative AI technology and discussing its implications for the future, MGAIC faculty co-lead Vivek Farias, the Patrick J. McGovern Professor at MIT Sloan School of Management, said he hoped attendees left with “a sense of possibility, and urgency to make that possibility real.”


Inflammation jolts “sleeping” cancer cells awake, enabling them to multiply again

Chemotherapy-induced injury of organ tissue causes inflammation that awakens dormant cancer cells, which may cause new tumors to form.


Cancer cells have one relentless goal: to grow and divide. While most stick together within the original tumor, some rogue cells break away to traverse to distant organs. There, they can lie dormant — undetectable and not dividing — for years, like landmines waiting to go off.

This migration of cancer cells, called metastasis, is especially common in breast cancer. For many patients, the disease can return months — or even decades — after initial treatment, this time in an entirely different organ.

Robert Weinberg, the Daniel K. Ludwig Professor for Cancer Research at MIT and a Whitehead Institute for Biomedical Research founding member, has spent decades unraveling the complex biology of metastasis and pursuing research that could improve survival rates among patients with metastatic breast cancer — or prevent metastasis altogether.

In his latest study, Weinberg, postdoc Jingwei Zhang, and colleagues ask a critical question: What causes these dormant cancer cells to erupt into a frenzy of growth and division? The group’s findings, published Sept. 1 in The Proceedings of the National Academy of Sciences (PNAS), point to a unique culprit.

This awakening of dormant cancer cells, they’ve discovered, isn’t a spontaneous process. Instead, the wake-up call comes from the inflamed tissue surrounding the cells. One trigger for this inflammation is bleomycin, a common chemotherapy drug that can scar and thicken lung tissue.

“The inflammation jolts the dormant cancer cells awake,” Weinberg says. “Once awakened, they start multiplying again, seeding new life-threatening tumors in the body.”

Decoding metastasis

There’s a lot that scientists still don’t know about metastasis, but this much is clear: Cancer cells must undergo a long and arduous journey to achieve it. The first step is to break away from their neighbors within the original tumor.

Normally, cells stick to one another using surface proteins that act as molecular “Velcro,” but some cancer cells can acquire genetic changes that disrupt the production of these proteins, making the cells more mobile and invasive and allowing them to detach from the parent tumor.

Once detached, they can penetrate blood vessels and lymphatic channels, which act as highways to distant organs.

While most cancer cells die at some point during this journey, a few persist. These cells exit the bloodstream and invade different tissues — lungs, liver, bone, and even the brain — to give birth to new, often more-aggressive tumors.

“Almost 90 percent of cancer-related deaths occur not from the original tumor, but when cancer cells spread to other parts of the body,” says Weinberg, who is a member of the Koch Institute for Integrative Cancer Research at MIT and the MIT Stem Cell Initiative. “This is why it’s so important to understand how these ‘sleeping’ cancer cells can wake up and start growing again.”

Setting up shop in new tissue comes with changes in surroundings — the “tumor microenvironment” — to which the cancer cells may not be well-suited. These cells face constant threats, including detection and attack by the immune system. 

To survive, they often enter a protective state of dormancy that puts a pause on growth and division. This dormant state also makes them resistant to conventional cancer treatments, which often target rapidly dividing cells.

To investigate what makes this dormancy reversible months or years down the line, researchers in the Weinberg Lab injected human breast cancer cells into mice. These cancer cells were modified to produce a fluorescent protein, allowing the scientists to track their behavior in the body.

The group then focused on cancer cells that had lodged themselves in the lung tissue. By examining them for specific proteins — Ki67, ITGB4, and p63 — that act as markers of cell activity and state, the researchers were able to confirm that these cells were in a non-dividing, dormant state.

Previous work from the Weinberg Lab had shown that inflammation in organ tissue can provoke dormant breast cancer cells to start growing again. In this study, the team tested bleomycin — a chemotherapy drug known to cause lung inflammation — that can be given to patients after surgery to lower the risk of cancer recurrence.

The researchers found that lung inflammation from bleomycin was sufficient to trigger the growth of large lung cancer colonies in treated mice — and to shift these once-dormant cells toward a more invasive and mobile character.

Zeroing in on the tumor microenvironment, the team identified a type of immune cell, called the M2 macrophage, as a driver of this process. These macrophages release molecules called epidermal growth factor receptor (EGFR) ligands, which bind to receptors on the surface of dormant cancer cells. This activates a cascade of signals that provokes the dormant cells to start multiplying rapidly.

But EGFR signaling is only the initial spark that ignites the fire. “We found that once dormant cancer cells are awakened, they retain what we call an ‘awakening memory,’” Zhang says. “They no longer require ongoing inflammatory signals from the microenvironment to stay active [growing and multiplying] — they remember the awakened state.”

While signals related to inflammation are necessary to awaken dormant cancer cells, exactly how much signaling is needed remains unclear. “This aspect of cancer biology is particularly challenging, because multiple signals contribute to the state change in these dormant cells,” Zhang says.

The team has already identified one key player in the awakening process, but understanding the full set of signals and how each contributes is far more complex — a question they are continuing to investigate in their new work. 

Studying these pivotal changes in the lives of cancer cells — such as their transition from dormancy to active growth — will deepen our scientific understanding of metastasis and, as researchers in the Weinberg Lab hope, lead to more effective treatments for patients with metastatic cancers.

This work was supported in part by the MIT Stem Cell Initiative.


Could a primordial black hole’s last burst explain a mysteriously energetic neutrino?

If a new proposal by MIT physicists bears out, the recent detection of a record-setting neutrino could be the first evidence of elusive Hawking radiation.


The last gasp of a primordial black hole may be the source of the highest-energy “ghost particle” detected to date, a new MIT study proposes.

In a paper appearing today in Physical Review Letters, MIT physicists put forth a strong theoretical case that a recently observed, highly energetic neutrino may have been the product of a primordial black hole exploding outside our solar system.

Neutrinos are sometimes referred to as ghost particles, for their invisible yet pervasive nature: They are the most abundant particle type in the universe, yet they leave barely a trace. Scientists recently identified signs of a neutrino with the highest energy ever recorded, but the source of such an unusually powerful particle has yet to be confirmed.

The MIT researchers propose that the mysterious neutrino may have come from the inevitable explosion of a primordial black hole. Primordial black holes (PBHs) are hypothetical black holes that are microscopic versions of the much more massive black holes that lie at the center of most galaxies. PBHs are theorized to have formed in the first moments following the Big Bang. Some scientists believe that primordial black holes could constitute most or all of the dark matter in the universe today.

Like their more massive counterparts, PBHs should leak energy and shrink over their lifetimes, in a process known as Hawking radiation, which was predicted by the physicist Stephen Hawking. The more a black hole radiates, the hotter it gets and the more high-energy particles it releases. This is a runaway process that should produce an incredibly violent explosion of the most energetic particles just before a black hole evaporates away.

The MIT physicists calculate that, if PBHs make up most of the dark matter in the universe, then a small subpopulation of them would be undergoing their final explosions today throughout the Milky Way galaxy. And, there should be a statistically significant possibility that such an explosion could have occurred relatively close to our solar system. The explosion would have released a burst of high-energy particles, including neutrinos, one of which could have had a good chance of hitting a detector on Earth.

If such a scenario had indeed occurred, the recent detection of the highest-energy neutrino would represent the first observation of Hawking radiation, which has long been assumed, but has never been directly observed from any black hole. What’s more, the event might indicate that primordial black holes exist and that they make up most of dark matter — a mysterious substance that comprises 85 percent of the total matter in the universe, the nature of which remains unknown.

“It turns out there’s this scenario where everything seems to line up, and not only can we show that most of the dark matter [in this scenario] is made of primordial black holes, but we can also produce these high-energy neutrinos from a fluke nearby PBH explosion,” says study lead author Alexandra Klipfel, a graduate student in MIT’s Department of Physics. “It’s something we can now try to look for and confirm with various experiments.”

The study’s other co-author is David Kaiser, professor of physics and the Germeshausen Professor of the History of Science at MIT.

High-energy tension

In February, scientists at the Cubic Kilometer Neutrino Telescope, or KM3NeT, reported the detection of the highest-energy neutrino recorded to date. KM3NeT is a large-scale underwater neutrino detector located at the bottom of the Mediterranean Sea, where the environment is meant to mute the effects of any particles other than neutrinos.

The scientists operating the detector picked up signatures of a passing neutrino with an energy of over 100 peta-electron-volts (PeV). One peta-electron-volt is 1 quadrillion electron volts.
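To put that figure in everyday terms, a quick unit conversion (a back-of-the-envelope sketch, not a calculation from the paper) shows that a 100-PeV neutrino carries a macroscopically tangible amount of energy packed into a single subatomic particle:

```python
# Convert the KM3NeT neutrino's energy (~100 PeV) into joules.
# 1 eV = 1.602176634e-19 J (exact, from the defined elementary charge).
EV_TO_JOULE = 1.602176634e-19

energy_ev = 100e15                       # 100 peta-electron-volts = 1e17 eV
energy_joules = energy_ev * EV_TO_JOULE

print(f"{energy_joules:.3e} J")          # ~1.6e-2 J, i.e. tens of millijoules
```

Roughly 16 millijoules is comparable to the kinetic energy of a small falling object — an enormous amount for one elementary particle.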

“This is an incredibly high energy, far beyond anything humans are capable of accelerating particles up to,” Klipfel says. “There’s not much consensus on the origin of such high-energy particles.”

Similarly high-energy neutrinos, though not as energetic as the one KM3NeT observed, have been detected by the IceCube Observatory — a neutrino detector embedded deep in the ice at the South Pole. IceCube has detected about half a dozen such neutrinos, whose unusually high energies have also eluded explanation. Whatever their source, the IceCube observations let scientists work out a plausible rate at which neutrinos of those energies hit Earth. If that rate is correct, however, it would be extremely unlikely for any detector to have seen the ultra-high-energy neutrino that KM3NeT recently reported. The two detectors’ discoveries, then, seemed to be what scientists call “in tension.”

Kaiser and Klipfel, who had been working on a separate project involving primordial black holes, wondered: Could a PBH have produced both the KM3NeT neutrino and the handful of IceCube neutrinos, under conditions in which PBHs comprise most of the dark matter in the galaxy? If they could show a chance existed, it would raise an even more exciting possibility — that both observatories observed not only high-energy neutrinos but also the remnants of Hawking radiation.

“Our best chance”

The first step the scientists took in their theoretical analysis was to calculate how many particles would be emitted by an exploding black hole. All black holes should slowly radiate over time. The larger a black hole, the colder it is, and the lower-energy particles it emits as it slowly evaporates. Thus, any particles that are emitted as Hawking radiation from heavy stellar-mass black holes would be near impossible to detect. By the same token, however, much smaller primordial black holes would be very hot and emit high-energy particles in a process that accelerates the closer the black hole gets to disappearing entirely.
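The inverse relationship between a black hole’s mass and its temperature can be made concrete with the standard Hawking temperature formula, T = ħc³ / (8πGMk_B). The numbers below are an illustrative sketch using textbook constants, not figures from the study:

```python
import math

# Standard Hawking temperature: T = hbar * c^3 / (8 * pi * G * M * k_B)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.380649e-23       # Boltzmann constant, J/K
M_SUN = 1.989e30         # solar mass, kg

def hawking_temperature(mass_kg: float) -> float:
    """Temperature of a black hole of the given mass, in kelvin."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

# A stellar-mass black hole is far colder than the cosmic microwave background...
print(f"{hawking_temperature(M_SUN):.2e} K")   # ~6e-8 K
# ...while a billion-tonne primordial black hole would be blazing hot,
# and gets hotter still as it evaporates and shrinks.
print(f"{hawking_temperature(1e12):.2e} K")    # ~1e11 K
```

Because temperature scales as 1/M, radiation accelerates as the hole shrinks — the runaway that ends in the final explosive burst described above.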

“We don’t have any hope of detecting Hawking radiation from astrophysical black holes,” Klipfel says. “So if we ever want to see it, the smallest primordial black holes are our best chance.”

The researchers calculated the number and energies of particles that a black hole should emit, given its temperature and shrinking mass. They estimate that in its final nanosecond, once a black hole is smaller than an atom, it should emit a final burst of particles, including about 10²⁰ neutrinos — a hundred quintillion of them — with energies of about 100 peta-electron-volts (around the energy that KM3NeT observed).

They used this result to calculate the number of PBH explosions that would have to occur in a galaxy in order to explain the reported IceCube results. They found that, in our region of the Milky Way galaxy, about 1,000 primordial black holes should be exploding per cubic parsec per year. (A parsec is a unit of distance equal to about 3.26 light-years, or roughly 31 trillion kilometers.)

They then calculated the distance at which one such explosion in the Milky Way could have occurred, such that just a handful of the high-energy neutrinos could have reached Earth and produced the recent KM3NeT event. They found that a PBH would have to explode relatively close to our solar system — at roughly 2,000 times the distance between the Earth and the sun.

The particles emitted from such a nearby explosion would radiate in all directions. However, the team found there is a small, 8 percent chance that, over a 14-year period, an explosion happens close enough to the solar system for enough ultra-high-energy neutrinos to hit the Earth.
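A probability of that order can be roughly reproduced with a simple Poisson estimate: assume explosions occur uniformly at the stated rate of about 1,000 per cubic parsec per year, and ask how likely at least one falls within roughly 2,000 astronomical units of the sun over 14 years. This is a plausibility sketch under those assumptions, not the paper’s full calculation, and it need not match the quoted figure exactly:

```python
import math

RATE = 1000.0                # assumed PBH explosions per cubic parsec per year
AU_PER_PARSEC = 206265.0     # astronomical units in one parsec

# ~2,000 AU (2,000 Earth-sun distances) expressed in parsecs: ~0.01 pc
radius_pc = 2000.0 / AU_PER_PARSEC

# Volume of a sphere of that radius around the solar system, in cubic parsecs
volume_pc3 = (4.0 / 3.0) * math.pi * radius_pc**3

# Expected number of explosions inside that sphere over 14 years
expected = RATE * volume_pc3 * 14.0

# Poisson probability of at least one such nearby explosion
p_at_least_one = 1.0 - math.exp(-expected)
print(f"{p_at_least_one:.1%}")   # a few percent -- same order as the 8% quoted
```

The crude estimate lands within a factor of about two of the published 8 percent, which is reasonable given that it ignores details such as detector exposure and the neutrino yield per burst.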

“An 8 percent chance is not terribly high, but it’s well within the range for which we should take such chances seriously — all the more so because so far, no other explanation has been found that can account for both the unexplained very-high-energy neutrinos and the even more surprising ultra-high-energy neutrino event,” Kaiser says.

The team’s scenario seems to hold up, at least in theory. To confirm their idea will require many more detections of particles, including neutrinos at “insanely high energies.” Then, scientists can build up better statistics regarding such rare events.

“In that case, we could use all of our combined experience and instrumentation, to try to measure still-hypothetical Hawking radiation,” Kaiser says. “That would provide the first-of-its-kind evidence for one of the pillars of our understanding of black holes — and could account for these otherwise anomalous high-energy neutrino events as well. That’s a very exciting prospect!”

In tandem, other efforts to detect nearby PBHs could further bolster the hypothesis that these unusual objects make up most or all of the dark matter.

This work was supported, in part, by the National Science Foundation, MIT’s Center for Theoretical Physics – A Leinweber Institute, and the U.S. Department of Energy.


A more precise way to edit the genome

MIT researchers have dramatically lowered the error rate of prime editing, a technique that holds potential for treating many genetic disorders.


A genome-editing technique known as prime editing holds potential for treating many diseases by transforming faulty genes into functional ones. However, the process carries a small chance of inserting errors that could be harmful.

MIT researchers have now found a way to dramatically lower the error rate of prime editing, using modified versions of the proteins involved in the process. This advance could make it easier to develop gene therapy treatments for a variety of diseases, the researchers say.

“This paper outlines a new approach to doing gene editing that doesn’t complicate the delivery system and doesn’t add additional steps, but results in a much more precise edit with fewer unwanted mutations,” says Phillip Sharp, an MIT Institute Professor Emeritus, a member of MIT’s Koch Institute for Integrative Cancer Research, and one of the senior authors of the new study.

With their new strategy, the MIT team was able to reduce the error rate of prime editors from about one error in seven edits to one in 101 for the most-used editing mode, and from one error in 122 edits to one in 543 for a high-precision mode.

“For any drug, what you want is something that is effective, but with as few side effects as possible,” says Robert Langer, the David H. Koch Institute Professor at MIT, a member of the Koch Institute, and one of the senior authors of the new study. “For any disease where you might do genome editing, I would think this would ultimately be a safer, better way of doing it.”

Koch Institute research scientist Vikash Chauhan is the lead author of the paper, which appears today in Nature.

The potential for error

The earliest forms of gene therapy, first tested in the 1990s, involved delivering new genes carried by viruses. Subsequently, gene-editing techniques that use enzymes such as zinc finger nucleases to correct genes were developed. These nucleases are difficult to engineer, however, so adapting them to target different DNA sequences is a very laborious process.

Many years later, the CRISPR genome-editing system was discovered in bacteria, offering scientists a potentially much easier way to edit the genome. The CRISPR system consists of an enzyme called Cas9 that can cut double-stranded DNA at a particular spot, along with a guide RNA that tells Cas9 where to cut. Researchers have adapted this approach to cut out faulty gene sequences or to insert new ones, following an RNA template.

In 2019, researchers at the Broad Institute of MIT and Harvard reported the development of prime editing: a new system, based on CRISPR, that is more precise and has fewer off-target effects. A recent study reported that prime editors were successfully used to treat a patient with chronic granulomatous disease (CGD), a rare genetic disease that affects white blood cells.

“In principle, this technology could eventually be used to address many hundreds of genetic diseases by correcting small mutations directly in cells and tissues,” Chauhan says.

One of the advantages of prime editing is that it doesn’t require making a double-stranded cut in the target DNA. Instead, it uses a modified version of Cas9 that cuts just one of the complementary strands, opening up a flap where a new sequence can be inserted. A guide RNA delivered along with the prime editor serves as the template for the new sequence.

Once the new sequence has been copied, however, it must compete with the old DNA strand to be incorporated into the genome. If the old strand outcompetes the new one, the extra flap of new DNA hanging off may accidentally get incorporated somewhere else, giving rise to errors.

Many of these errors might be relatively harmless, but it’s possible that some could eventually lead to tumor development or other complications. With the most recent version of prime editors, this error rate ranges from one per seven edits to one per 122 edits, depending on the editing mode.

“The technologies we have now are really a lot better than earlier gene therapy tools, but there’s always a chance for these unintended consequences,” Chauhan says.

Precise editing

To reduce those error rates, the MIT team decided to take advantage of a phenomenon they had observed in a 2023 study. In that paper, they found that while Cas9 usually cuts in the same DNA location every time, some mutated versions of the protein show a relaxation of those constraints. Instead of always cutting the same location, those Cas9 proteins would sometimes make their cut one or two bases further along the DNA sequence.

This relaxation, the researchers discovered, makes the old DNA strands less stable, so they get degraded, making it easier for the new strands to be incorporated without introducing any errors.

In the new study, the researchers were able to identify Cas9 mutations that dropped the error rate to 1/20th its original value. Then, by combining pairs of those mutations, they created a Cas9 editor that lowered the error rate even further, to 1/36th the original amount.

To make the editors even more accurate, the researchers incorporated their new Cas9 proteins into a prime editing system that has an RNA binding protein that stabilizes the ends of the RNA template more efficiently. This final editor, which the researchers call vPE, had an error rate just 1/60th of the original, ranging from one in 101 edits to one in 543 edits for different editing modes. These tests were performed in mouse and human cells.

The MIT team is now working on further improving the efficiency of prime editors, through further modifications of Cas9 and the RNA template. They are also working on ways to deliver the editors to specific tissues of the body, which is a longstanding challenge in gene therapy.

They also hope that other labs will begin using the new prime editing approach in their research studies. Prime editors are commonly used to explore many different questions, including how tissues develop, how populations of cancer cells evolve, and how cells respond to drug treatment.

“Genome editors are used extensively in research labs,” Chauhan says. “So the therapeutic aspect is exciting, but we are really excited to see how people start to integrate our editors into their research workflows.”

The research was funded by the Life Sciences Research Foundation, the National Institute of Biomedical Imaging and Bioengineering, the National Cancer Institute, and the Koch Institute Support (core) Grant from the National Cancer Institute.


MIT geologists discover where energy goes during an earthquake

Based on mini “lab-quakes” in a controlled setting, the findings could help researchers assess the vulnerability of quake-prone regions.


The ground-shaking that an earthquake generates is only a fraction of the total energy that a quake releases. A quake can also generate a flash of heat, along with a domino-like fracturing of underground rocks. But exactly how much energy goes into each of these three processes is exceedingly difficult, if not impossible, to measure in the field.

Now MIT geologists have traced the energy that is released by “lab quakes” — miniature analogs of natural earthquakes that are carefully triggered in a controlled laboratory setting. For the first time, they have quantified the complete energy budget of such quakes, in terms of the fraction of energy that goes into heat, shaking, and fracturing.

They found that only about 10 percent of a lab quake’s energy causes physical shaking. An even smaller fraction — less than 1 percent — goes into breaking up rock and creating new surfaces. The overwhelming portion of a quake’s energy — on average 80 percent — goes into heating up the immediate region around a quake’s epicenter. In fact, the researchers observed that a lab quake can produce a temperature spike hot enough to melt surrounding material and turn it briefly into liquid melt.

The geologists also found that a quake’s energy budget depends on a region’s deformation history — the degree to which rocks have been shifted and disturbed by previous tectonic motions. The fractions of quake energy that produce heat, shaking, and rock fracturing can shift depending on what the region has experienced in the past.

“The deformation history — essentially what the rock remembers — really influences how destructive an earthquake could be,” says Daniel Ortega-Arroyo, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “That history affects a lot of the material properties in the rock, and it dictates to some degree how it is going to slip.”

The team’s lab quakes are a simplified analog of what occurs during a natural earthquake. Down the road, their results could help seismologists predict the likelihood of earthquakes in regions that are prone to seismic events. For instance, if scientists have an idea of how much shaking a quake generated in the past, they might be able to estimate the degree to which the quake’s energy also affected rocks deep underground by melting or breaking them apart. This in turn could reveal how much more or less vulnerable the region is to future quakes.

“We could never reproduce the complexity of the Earth, so we have to isolate the physics of what is happening, in these lab quakes,” says Matěj Peč, associate professor of geophysics at MIT. “We hope to understand these processes and try to extrapolate them to nature.”

Peč (pronounced “Peck”) and Ortega-Arroyo reported their results on Aug. 28 in the journal AGU Advances. Their MIT co-authors are Hoagy O’Ghaffari and Camilla Cattania, along with Zheng Gong and Roger Fu at Harvard University and Markus Ohl and Oliver Plümper at Utrecht University in the Netherlands.

Under the surface

Earthquakes are driven by energy that is stored up in rocks over millions of years. As tectonic plates slowly grind against each other, stress accumulates through the crust. When rocks are pushed past their material strength, they can suddenly slip along a narrow zone, creating a geologic fault. As rocks slip on either side of the fault, they produce seismic waves that ripple outward and upward.

We perceive an earthquake’s energy mainly in the form of ground shaking, which can be measured using seismometers and other ground-based instruments. But the other two major forms of a quake’s energy — heat and underground fracturing — are largely inaccessible with current technologies.

“Unlike the weather, where we can see daily patterns and measure a number of pertinent variables, it’s very hard to do that very deep in the Earth,” Ortega-Arroyo says. “We don’t know what’s happening to the rocks themselves, and the timescales over which earthquakes repeat within a fault zone run from centuries to millennia, making any sort of actionable forecast challenging.”

To get an idea of how an earthquake’s energy is partitioned, and how that energy budget might affect a region’s seismic risk, he and Peč went into the lab. Over the last seven years, Peč’s group at MIT has developed methods and instrumentation to simulate seismic events, at the microscale, in an effort to understand how earthquakes at the macroscale may play out.

“We are focusing on what’s happening on a really small scale, where we can control many aspects of failure and try to understand it before we can do any scaling to nature,” Ortega-Arroyo says.

Microshakes

For their new study, the team generated miniature lab quakes that simulate a seismic slipping of rocks along a fault zone. They worked with small samples of granite, which are representative of rocks in the seismogenic layer — the geologic region in the continental crust where earthquakes typically originate. They ground up the granite into a fine powder and mixed the crushed granite with a much finer powder of magnetic particles, which they used as a sort of internal temperature gauge. (A particle’s magnetic field strength will change in response to a fluctuation in temperature.)

The researchers placed samples of the powdered granite — each about 10 square millimeters in area and 1 millimeter thick — between two small pistons and wrapped the ensemble in a gold jacket. They then applied a strong magnetic field to orient the powder’s magnetic particles in the same initial direction and to the same field strength. They reasoned that any change in the particles’ orientation and field strength afterward should be a sign of how much heat that region experienced as a result of any seismic event.

Once samples were prepared, the team placed them one at a time into a custom-built apparatus that the researchers tuned to apply steadily increasing pressure, similar to the pressures that rocks experience in the Earth’s seismogenic layer, about 10 to 20 kilometers below the surface. They used custom-made piezoelectric sensors, developed by co-author O’Ghaffari, which they attached to either end of a sample to measure any shaking that occurred as they increased the stress on the sample.

They observed that at certain stresses, some samples slipped, producing a microscale seismic event similar to an earthquake. By analyzing the magnetic particles in the samples after the fact, they obtained an estimate of how much each sample was temporarily heated — a method developed in collaboration with Roger Fu’s lab at Harvard University. They also estimated the amount of shaking each sample experienced, using measurements from the piezoelectric sensor and numerical models. The researchers also examined each sample under the microscope, at different magnifications, to assess how the size of the granite grains changed — whether and how many grains broke into smaller pieces, for instance.

From all these measurements, the team was able to estimate each lab quake’s energy budget. On average, they found that about 80 percent of a quake’s energy goes into heat, while 10 percent generates shaking, and less than 1 percent goes into rock fracturing, or creating new, smaller particle surfaces. 

“In some instances we saw that, close to the fault, the sample went from room temperature to 1,200 degrees Celsius in a matter of microseconds, and then immediately cooled down once the motion stopped,” Ortega-Arroyo says. “And in one sample, we saw the fault move by about 100 microns, which implies slip velocities of about 10 meters per second. It moves very fast, though it doesn’t last very long.”
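Those slip numbers are self-consistent: 100 microns of motion at roughly 10 meters per second implies the whole event lasts on the order of 10 microseconds. A trivial check (illustrative arithmetic only):

```python
slip_distance_m = 100e-6     # ~100 microns of fault motion observed in one sample
slip_velocity_m_s = 10.0     # inferred slip velocity

# Duration of the slip event: distance / velocity
duration_s = slip_distance_m / slip_velocity_m_s
print(f"{duration_s * 1e6:.0f} microseconds")  # prints "10 microseconds"
```

A 10-microsecond event is consistent with the quote's observation that the temperature spike and slip are over almost instantly.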

The researchers suspect that similar processes play out in actual, kilometer-scale quakes.

“Our experiments offer an integrated approach that provides one of the most complete views of the physics of earthquake-like ruptures in rocks to date,” Peč says. “This will provide clues on how to improve our current earthquake models and natural hazard mitigation.”

This research was supported, in part, by the National Science Foundation.


This MIT spinout is taking biomolecule storage out of the freezer

Cache DNA has developed technologies that can preserve biomolecules at room temperature to make storing and transporting samples less expensive and more reliable.


Ever since freezers were invented, the life sciences industry has been reliant on them. That’s because many patient samples, drug candidates, and other biologics must be stored and transported in powerful freezers or surrounded by dry ice to remain stable.

The problem was on full display during the Covid-19 pandemic, when truckloads of vaccines had to be discarded because they had thawed during transport. Today, the stakes are even higher. Precision medicine, from CAR-T cell therapies to tumor DNA sequencing that guides cancer treatment, depends on pristine biological samples. Yet a single power outage, shipping delay, or equipment failure can destroy irreplaceable patient samples, setting back treatment by weeks or halting it entirely. In remote areas and developing nations, the lack of reliable cold storage effectively locks out entire populations from these life-saving advances.

Cache DNA wants to set the industry free from freezers. At MIT, the company’s founders created a new way to store and preserve DNA molecules at room temperature. Now the company is building biomolecule preservation technologies that can be used in applications across health care, from routine blood tests and cancer screening to rare disease research and pandemic preparedness.

“We want to challenge the paradigm,” says Cache DNA co-founder and former MIT postdoc James Banal. “Biotech has been reliant on the cold chain for more than 50 years. Why hasn’t that changed? Meanwhile, the cost of DNA sequencing has plummeted from $3 billion for the first human genome to under $200 today. With DNA sequencing and synthesis becoming so cheap and fast, storage and transport have emerged as the critical bottlenecks. It’s like having a supercomputer that still requires punch cards for data input.”

As the company works to preserve biomolecules beyond DNA and scale the production of its kits, co-founders Banal and MIT Professor Mark Bathe believe their technology has the potential to unlock new health insights by making sample storage accessible to scientists around the world.

“Imagine if every human on Earth could contribute to a global biobank, not just those living near million-dollar freezer facilities,” Banal says. “That’s 8 billion biological stories instead of just a privileged few. The cures we’re missing might be hiding in the biomolecules of someone we’ve never been able to reach.”

From quantum computing to “Jurassic Park”

Banal came to MIT from Australia to work as a postdoc under Bathe, a professor in MIT’s Department of Biological Engineering. Banal primarily studied in the MIT-Harvard Center for Excitonics, through which he collaborated with researchers from across MIT.

“I worked on some really wacky stuff, like DNA nanotechnology and its intersection with quantum computing and artificial photosynthesis,” Banal recalls.

Another project focused on using DNA to store data. While computers store data as 0s and 1s, DNA can store the same information using the nucleotides A, T, G, and C, allowing for extremely dense storage of data: By one estimate, 1 gram of DNA can hold up to 215 petabytes of data.
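The encoding idea can be sketched in a few lines: since there are four bases, each base can carry two bits, so any byte stream maps to a strand of A/C/G/T. This is a minimal illustration with a hypothetical mapping — real DNA-storage codecs add error correction and avoid troublesome sequences such as long homopolymer runs:

```python
# Map each 2-bit pair to a nucleotide (hypothetical assignment for illustration).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn a byte string into a DNA strand at 2 bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand produced by encode()."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"MIT")
print(strand)                     # 12 bases: 4 bases per byte at 2 bits/base
assert decode(strand) == b"MIT"   # the round trip recovers the original data
```

Two bits per base is what underlies density estimates like the one above: the information lives in molecules a few nanometers across rather than in magnetized regions on a disk platter.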

After three years of work, in 2021, Banal and Bathe created a system that stored DNA-based data in tiny glass particles. They founded Cache DNA the same year, securing the intellectual property by working with MIT’s Technology Licensing Office, applying the technology to storing clinical nucleic acid samples as well as DNA data. Still, the technology was too nascent to be used for most commercial applications at the time.

Professor of chemistry Jeremiah Johnson had a different approach. His research had shown that certain plastics and rubbers could be made recyclable by adding cleavable molecular bonds. Johnson thought Cache DNA’s technology could be made faster and more reliable with his amber-like polymers, much as researchers in the “Jurassic Park” movie recover ancient dinosaur DNA from fossilized tree resin, or amber.

“It started basically as a fun conversation along the halls of Building 16,” Banal recalls. “He’d seen my work, and I was aware of the innovations in his lab.”

Banal immediately saw the potential. He was familiar with the burden of the cold chain. For his MIT experiments, he’d store samples in big freezers kept at -80 degrees Celsius. Samples would sometimes get lost in the freezer or be buried in the inevitable ice build-up. Even when they were perfectly preserved, samples could degrade as they thawed.

As part of a collaboration between Cache DNA and MIT, Banal, Johnson, and two researchers in Johnson’s lab developed a polymer that stores DNA at room temperature. In a nod to their inspiration, they demonstrated the approach by encoding DNA sequences with the “Jurassic Park” theme song.

The researchers’ polymers could surround a material while in liquid form and then, when heated, harden into a solid, glass-like block. To release the DNA, the researchers could add a molecule called cysteamine and a special detergent. The researchers showed the process could store and retrieve DNA strands as long as 50,000 base pairs without causing damage.

“Real amber is not great at preservation. It’s porous and lets in moisture and air,” Banal says. “What we built is completely different: a dense polymer network that forms an impenetrable barrier around DNA. Think of it like vacuum-sealing, but at the molecular level. The polymer is so hydrophobic that water and enzymes that would normally destroy DNA simply can’t get through.”

As that research was taking shape, Cache DNA was learning from hospitals and research labs that sample storage was a huge problem. In places like Florida and Singapore, researchers said contending with the effects of humidity on samples was another constant headache. Other researchers across the globe wanted to know if the technology could help them collect samples outside of the lab.

“Hospitals told us they were running out of space,” Banal says. “They had to throw samples out, limit sample collection, and as a last-case scenario, they would use a decades-old storage technology that leads to degradation after a short period of time. It became a north star for us to solve those problems.”

A new tool for precision health

Last year, Cache DNA sent out more than 100 of its first alpha DNA preservation kits to researchers around the world.

“We didn’t tell researchers what to use it for, and our minds were blown by the use cases,” Banal says. “Some used it for collecting samples in the field where cold shipping wasn’t feasible. Others evaluated it for long-term archival storage. The applications were different, but the problem was universal: They all needed reliable storage without the constraint of refrigeration.”

Cache DNA has developed an entire suite of preservation technologies that can be optimized for different storage scenarios. The company also recently received a grant from the National Science Foundation to expand its technology to preserve a broader swath of biomolecules, including RNA and proteins, which could yield new insights into health and disease.

“This important innovation helps eliminate the cold chain and has the potential to unlock millions of genetic samples globally for Cache DNA to empower personalized medicine,” Bathe says. “Eliminating the cold chain is half the equation. The other half is scaling from thousands to millions or even billions of nucleic acid samples. Together, this could enable the equivalent of a ‘Google Books’ for nucleic acids stored at room temperature, either for clinical samples in hospital settings and remote regions of the world, or alternatively to facilitate DNA data storage and retrieval at scale.”

“Freezers have dictated where science could happen,” Banal says. “Remove that constraint, and you start to crack open possibilities: island nations studying their unique genetics without samples dying in transit; every rare disease patient worldwide contributing to research, not just those near major hospitals; the 2 billion people without reliable electricity finally joining global health studies. Room-temperature storage isn’t the whole answer, but every cure starts with a sample that survived the journey.”


DOE selects MIT to establish a Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions

The research center, sponsored by the DOE’s National Nuclear Security Administration, will advance the simulation of extreme environments, such as those in hypersonic flight and atmospheric reentry.


The U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA) recently announced that it has selected MIT to establish a new research center dedicated to advancing the predictive simulation of extreme environments, such as those encountered in hypersonic flight and atmospheric reentry. The center will be part of the fourth phase of NNSA’s Predictive Science Academic Alliance Program (PSAAP-IV), which supports frontier research advancing the predictive capabilities of high-performance computing for open science and engineering applications relevant to national security mission spaces.

The Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions (CHEFSI) — a joint effort of the MIT Center for Computational Science and Engineering, the MIT Schwarzman College of Computing, and the MIT Institute for Soldier Nanotechnologies (ISN) — plans to harness cutting-edge exascale supercomputers and next-generation algorithms to simulate with unprecedented detail how extremely hot, fast-moving gaseous and solid materials interact. The understanding of these extreme environments — characterized by temperatures of more than 1,500 degrees Celsius and speeds as high as Mach 25 — and their effect on vehicles is central to national security, space exploration, and the development of advanced thermal protection systems.

“CHEFSI will capitalize on MIT’s deep strengths in predictive modeling, high-performance computing, and STEM education to help ensure the United States remains at the forefront of scientific and technological innovation,” says Ian A. Waitz, MIT’s vice president for research. “The center’s particular relevance to national security and advanced technologies exemplifies MIT’s commitment to advancing research with broad societal benefit.”

CHEFSI is one of five new Predictive Simulation Centers announced by the NNSA as part of a program expected to provide up to $17.5 million to each center over five years.

CHEFSI’s research aims to couple detailed simulations of high-enthalpy gas flows with models of the chemical, thermal, and mechanical behavior of solid materials, capturing phenomena such as oxidation, nitridation, ablation, and fracture. Advanced computational models — validated by carefully designed experiments — can address the limitations of flight testing by providing critical insights into material performance and failure.

“By integrating high-fidelity physics models with artificial intelligence-based surrogate models, experimental validation, and state-of-the-art exascale computational tools, CHEFSI will help us understand and predict how thermal protection systems perform under some of the harshest conditions encountered in engineering systems,” says Raúl Radovitzky, the Jerome C. Hunsaker Professor of Aeronautics and Astronautics, associate director of the ISN, and director of CHEFSI. “This knowledge will help in the design of resilient systems for applications ranging from reusable spacecraft to hypersonic vehicles.”

Radovitzky will be joined on the center’s leadership team by Youssef Marzouk, the Breene M. Kerr (1951) Professor of Aeronautics and Astronautics, co-director of the MIT Center for Computational Science and Engineering (CCSE), and recently named the associate dean of the MIT Schwarzman College of Computing; and Nicolas Hadjiconstantinou, the Quentin Berg (1937) Professor of Mechanical Engineering and co-director of CCSE, who will serve as associate directors. The center co-principal investigators include MIT faculty members across the departments of Aeronautics and Astronautics, Electrical Engineering and Computer Science, Materials Science and Engineering, Mathematics, and Mechanical Engineering. Franklin Hadley will lead center operations, with administration and finance under the purview of Joshua Freedman. Hadley and Freedman are both members of the ISN headquarters team. 

CHEFSI expects to collaborate extensively with the DOE/NNSA national laboratories — Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories — and, in doing so, offer graduate students and postdocs immersive research experiences and internships at these facilities.


Ten years later, LIGO is a black-hole hunting machine

LIGO, Virgo, and KAGRA celebrate the anniversary of the first detection of gravitational waves and announce verification of Stephen Hawking’s black hole area theorem.


The following article is adapted from a press release issued by the Laser Interferometer Gravitational-wave Observatory (LIGO) Laboratory. LIGO is funded by the National Science Foundation and operated by Caltech and MIT, which conceived and built the project.

On Sept. 14, 2015, a signal arrived on Earth, carrying information about a pair of remote black holes that had spiraled together and merged. The signal had traveled about 1.3 billion years to reach us at the speed of light — but it was not made of light. It was a different kind of signal: a quivering of space-time called gravitational waves first predicted by Albert Einstein 100 years prior. On that day 10 years ago, the twin detectors of the U.S. National Science Foundation Laser Interferometer Gravitational-wave Observatory (NSF LIGO) made the first-ever direct detection of gravitational waves, whispers in the cosmos that had gone unheard until that moment.

The historic discovery meant that researchers could now sense the universe through three different means. Light, including X-rays, optical, radio, and other wavelengths, as well as high-energy particles such as cosmic rays and neutrinos, had been captured before, but this was the first time anyone had witnessed a cosmic event through the gravitational warping of space-time. For this achievement, first dreamed up more than 40 years prior, three of the team’s founders won the 2017 Nobel Prize in Physics: MIT’s Rainer Weiss, professor emeritus of physics (who recently passed away at age 92); Caltech’s Barry Barish, the Ronald and Maxine Linde Professor of Physics, Emeritus; and Caltech’s Kip Thorne, the Richard P. Feynman Professor of Theoretical Physics, Emeritus.

Today, LIGO, which consists of detectors in both Hanford, Washington, and Livingston, Louisiana, routinely observes roughly one black hole merger every three days. LIGO now operates in coordination with two international partners, the Virgo gravitational-wave detector in Italy and KAGRA in Japan. Together, the gravitational-wave-hunting network, known as the LVK (LIGO, Virgo, KAGRA), has captured a total of about 300 black hole mergers, some of which are confirmed while others await further analysis. During the network’s current science run, the fourth since the first run in 2015, the LVK has discovered more than 200 candidate black hole mergers, more than double the number caught in the first three runs.

The dramatic rise in the number of LVK discoveries over the past decade is owed to several improvements to their detectors — some of which involve cutting-edge quantum precision engineering. The LVK detectors remain by far the most precise measuring devices ever created by humans. The space-time distortions induced by gravitational waves are incredibly minuscule. For instance, LIGO detects changes in space-time smaller than 1/10,000 the width of a proton. That’s 1/700 trillionth the width of a human hair.
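That scale comparison can be sanity-checked with back-of-envelope arithmetic. The figures below are assumed round numbers (a proton diameter of about 1.7 femtometers and a hair width of about 120 micrometers), not values quoted by LIGO:

```python
# Rough check of the scale comparison: 1/10,000 of a proton's width,
# expressed as a fraction of a human hair's width.
# Assumed round figures, for illustration only.
proton_diameter_m = 1.7e-15   # ~1.7 femtometers
hair_width_m = 120e-6         # ~120 micrometers

displacement_m = proton_diameter_m / 10_000   # ~1.7e-19 m, LIGO's sensitivity scale
fraction_of_hair = displacement_m / hair_width_m

print(f"displacement ≈ {displacement_m:.1e} m")
print(f"≈ 1/{1 / fraction_of_hair:.0e} of a hair's width")  # on the order of 1/7e14, i.e. ~1/700 trillionth
```

With these inputs the ratio lands near 1.4 × 10⁻¹⁵, consistent with the "1/700 trillionth" figure in the article.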

“Rai Weiss proposed the concept of LIGO in 1972, and I thought, ‘This doesn’t have much chance at all of working,’” recalls Thorne, an expert on the theory of black holes. “It took me three years of thinking about it on and off and discussing ideas with Rai and Vladimir Braginsky [a Russian physicist], to be convinced this had a significant possibility of success. The technical difficulty of reducing the unwanted noise that interferes with the desired signal was enormous. We had to invent a whole new technology. NSF was just superb at shepherding this project through technical reviews and hurdles.”

Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics at MIT and dean of the MIT School of Science, says that the challenges the team overcame to make the first discovery are still very much at play. “From the exquisite precision of the LIGO detectors to the astrophysical theories of gravitational-wave sources, to the complex data analyses, all these hurdles had to be overcome, and we continue to improve in all of these areas,” Mavalvala says. “As the detectors get better, we hunger for farther, fainter sources. LIGO continues to be a technological marvel.”

The clearest signal yet

LIGO’s improved sensitivity is exemplified in a recent discovery of a black hole merger referred to as GW250114. (The numbers denote the date the gravitational-wave signal arrived at Earth: January 14, 2025.) The event was not that different from LIGO’s first-ever detection (called GW150914) — both involve colliding black holes about 1.3 billion light-years away with masses between 30 and 40 times that of our sun. But thanks to 10 years of technological advances reducing instrumental noise, the GW250114 signal is dramatically clearer.

“We can hear it loud and clear, and that lets us test the fundamental laws of physics,” says LIGO team member Katerina Chatziioannou, Caltech assistant professor of physics and William H. Hurt Scholar, and one of the authors of a new study on GW250114 published in Physical Review Letters.

By analyzing the frequencies of gravitational waves emitted by the merger, the LVK team provided the best observational evidence captured to date for what is known as the black hole area theorem, an idea put forth by Stephen Hawking in 1971 that says the total surface areas of black holes cannot decrease. When black holes merge, their masses combine, increasing the surface area. But they also lose energy in the form of gravitational waves. Additionally, the merger can spin up the combined black hole, and a faster-spinning black hole has a smaller surface area. The black hole area theorem states that, despite these competing factors, the total surface area must still grow.

Later, Hawking and physicist Jacob Bekenstein concluded that a black hole’s area is proportional to its entropy, or degree of disorder. The findings paved the way for later groundbreaking work in the field of quantum gravity, which attempts to unite two pillars of modern physics: general relativity and quantum physics.

In essence, the LIGO detection allowed the team to “hear” two black holes growing as they merged into one, verifying Hawking’s theorem. (Virgo and KAGRA were offline during this particular observation.) The initial black holes had a total surface area of 240,000 square kilometers (roughly the size of Oregon), while the final area was about 400,000 square kilometers (roughly the size of California) — a clear increase. This is the second test of the black hole area theorem; an initial test was performed in 2021 using data from the first GW150914 signal, but because that data were not as clean, the results had a confidence level of 95 percent compared to 99.999 percent for the new data.
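The arithmetic behind those area figures follows from the horizon-area formula for a rotating (Kerr) black hole, A = 8π(GM/c²)²(1 + √(1 − χ²)), where χ is the dimensionless spin. The masses and spin below are illustrative values in the GW250114 range, not the published parameters:

```python
import math

# Physical constants (SI units)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def horizon_area_km2(mass_msun: float, spin: float = 0.0) -> float:
    """Kerr horizon area A = 8*pi*(G*M/c^2)^2 * (1 + sqrt(1 - spin^2))."""
    m_geo = G * mass_msun * M_SUN / C**2                        # geometric mass, meters
    area_m2 = 8 * math.pi * m_geo**2 * (1 + math.sqrt(1 - spin**2))
    return area_m2 / 1e6                                        # m^2 -> km^2

# Illustrative inputs: two slowly spinning ~33-solar-mass black holes merging
# into a ~63-solar-mass remnant with dimensionless spin ~0.7.
initial = horizon_area_km2(33) + horizon_area_km2(33)
final = horizon_area_km2(63, spin=0.7)

print(f"initial total ≈ {initial:,.0f} km^2")   # roughly Oregon-sized
print(f"final        ≈ {final:,.0f} km^2")      # roughly California-sized
assert final > initial                          # Hawking's area theorem holds
```

Note the competing effects the article describes: the remnant is lighter than the sum of the inputs (energy radiated away) and spinning faster (which shrinks the horizon), yet the total area still increases.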

Thorne recalls that, immediately after learning of the 2015 gravitational-wave detection, Hawking phoned him to ask whether LIGO might be able to test his theorem. Hawking died in 2018 and sadly did not live to see his theorem observationally verified. “If Hawking were alive, he would have reveled in seeing the area of the merged black holes increase,” Thorne says.

The trickiest part of this type of analysis had to do with determining the final surface area of the merged black hole. The surface areas of pre-merger black holes can be more readily gleaned as the pair spiral together, roiling space-time and producing gravitational waves. But after the black holes coalesce, the signal is not as clear-cut. During this so-called ringdown phase, the final black hole vibrates like a struck bell.

In the new study, the researchers precisely measured the details of the ringdown phase, which allowed them to calculate the mass and spin of the black hole and, subsequently, determine its surface area. More specifically, they were able, for the first time, to confidently pick out two distinct gravitational-wave modes in the ringdown phase. The modes are like characteristic sounds a bell would make when struck; they have somewhat similar frequencies but die out at different rates, which makes them hard to identify. The improved data for GW250114 meant that the team could extract the modes, demonstrating that the black hole’s ringdown occurred exactly as predicted by math models based on the Teukolsky formalism — devised in 1972 by Saul Teukolsky, now a professor at Caltech and Cornell University.

Another study from the LVK, submitted to Physical Review Letters today, places limits on a predicted third, higher-pitched tone in the GW250114 signal, and performs some of the most stringent tests yet of general relativity’s accuracy in describing merging black holes.

“A decade of improvements allowed us to make this exquisite measurement,” Chatziioannou says. “It took both of our detectors, in Washington and Louisiana, to do this. I don’t know what will happen in 10 more years, but in the first 10 years, we have made tremendous improvements to LIGO’s sensitivity. This not only means we are accelerating the rate at which we discover new black holes, but we are also capturing detailed data that expand the scope of what we know about the fundamental properties of black holes.”

Jenne Driggers, detection lead senior scientist at LIGO Hanford, adds, “It takes a global village to achieve our scientific goals. From our exquisite instruments, to calibrating the data very precisely, vetting and providing assurances about the fidelity of the data quality, searching the data for astrophysical signals, and packaging all that into something that telescopes can read and act upon quickly, there are a lot of specialized tasks that come together to make LIGO the great success that it is.”

Pushing the limits

LIGO and Virgo have also unveiled neutron stars over the past decade. Like black holes, neutron stars form from the explosive deaths of massive stars, but they weigh less and glow with light. Of note, in August 2017, LIGO and Virgo witnessed an epic collision between a pair of neutron stars — a kilonova — that sent gold and other heavy elements flying into space and drew the gaze of dozens of telescopes around the world, which captured light ranging from high-energy gamma rays to low-energy radio waves. The “multi-messenger” astronomy event marked the first time that both light and gravitational waves had been captured in a single cosmic event. Today, the LVK continues to alert the astronomical community to potential neutron star collisions; astronomers then use telescopes to search the skies for signs of kilonovae.

“The LVK has made big strides in recent years to make sure we’re getting high-quality data and alerts out to the public in under a minute, so that astronomers can look for multi-messenger signatures from our gravitational-wave candidates,” Driggers says.

“The global LVK network is essential to gravitational-wave astronomy,” says Gianluca Gemme, Virgo spokesperson and director of research at the National Institute of Nuclear Physics in Italy. “With three or more detectors operating in unison, we can pinpoint cosmic events with greater accuracy, extract richer astrophysical information, and enable rapid alerts for multi-messenger follow-up. Virgo is proud to contribute to this worldwide scientific endeavor.”

Other LVK scientific discoveries include the first detection of collisions between one neutron star and one black hole; asymmetrical mergers, in which one black hole is significantly more massive than its partner black hole; the discovery of the lightest black holes known, challenging the idea that there is a “mass gap” between neutron stars and black holes; and the most massive black hole merger seen yet with a merged mass of 225 solar masses. For reference, the previous record holder for the most massive merger had a combined mass of 140 solar masses.

Even in the decades before LIGO began taking data, scientists were building foundations that made the field of gravitational-wave science possible. Breakthroughs in computer simulations of black hole mergers, for example, allow the team to extract and analyze the feeble gravitational-wave signals generated across the universe.

LIGO’s technological achievements, beginning as far back as the 1980s, include several far-reaching innovations, such as a new way to stabilize lasers using the so-called Pound–Drever–Hall technique. Invented in 1983 and named for contributing physicists Robert Vivian Pound, the late Ronald Drever of Caltech (a founder of LIGO), and John Lewis Hall, this technique is widely used today in other fields, such as the development of atomic clocks and quantum computers. Other innovations include cutting-edge mirror coatings that almost perfectly reflect laser light; “quantum squeezing” tools that enable LIGO to surpass sensitivity limits imposed by quantum physics; and new artificial intelligence methods that could further hush certain types of unwanted noise.

“What we are ultimately doing inside LIGO is protecting quantum information and making sure it doesn’t get destroyed by external factors,” Mavalvala says. “The techniques we are developing are pillars of quantum engineering and have applications across a broad range of devices, such as quantum computers and quantum sensors.”

In the coming years, the scientists and engineers of LVK hope to further fine-tune their machines, expanding their reach deeper and deeper into space. They also plan to use the knowledge they have gained to build another gravitational-wave detector, LIGO India. Having a third LIGO observatory would greatly improve the precision with which the LVK network can localize gravitational-wave sources.

Looking farther into the future, the team is working on a concept for an even larger detector, called Cosmic Explorer, which would have arms 40 kilometers long. (The twin LIGO observatories have 4-kilometer arms.) A European project, called Einstein Telescope, also has plans to build one or two huge underground interferometers with arms more than 10 kilometers long. Observatories on this scale would allow scientists to hear the earliest black hole mergers in the universe.

“Just 10 short years ago, LIGO opened our eyes for the first time to gravitational waves and changed the way humanity sees the cosmos,” says Aamir Ali, a program director in the NSF Division of Physics, which has supported LIGO since its inception. “There’s a whole universe to explore through this completely new lens and these latest discoveries show LIGO is just getting started.”

The LIGO-Virgo-KAGRA Collaboration

LIGO is funded by the U.S. National Science Foundation and operated by Caltech and MIT, which together conceived and built the project. Financial support for the Advanced LIGO project was led by NSF with Germany (Max Planck Society), the United Kingdom (Science and Technology Facilities Council), and Australia (Australian Research Council) making significant commitments and contributions to the project. More than 1,600 scientists from around the world participate in the effort through the LIGO Scientific Collaboration, which includes the GEO Collaboration. Additional partners are listed at my.ligo.org/census.php.

The Virgo Collaboration is currently composed of approximately 1,000 members from 175 institutions in 20 different (mainly European) countries. The European Gravitational Observatory (EGO) hosts the Virgo detector near Pisa, Italy, and is funded by the French National Center for Scientific Research, the National Institute of Nuclear Physics in Italy, the National Institute of Subatomic Physics in the Netherlands, The Research Foundation – Flanders, and the Belgian Fund for Scientific Research. A list of the Virgo Collaboration groups can be found on the project website.

KAGRA is a laser interferometer with 3-kilometer arms located in Kamioka, Gifu, Japan. The host institute is the Institute for Cosmic Ray Research of the University of Tokyo, and the project is co-hosted by the National Astronomical Observatory of Japan and the High Energy Accelerator Research Organization. The KAGRA collaboration is composed of more than 400 members from 128 institutes in 17 countries/regions. KAGRA’s information for general audiences is at the website gwcenter.icrr.u-tokyo.ac.jp/en/. Resources for researchers are accessible at gwwiki.icrr.u-tokyo.ac.jp/JGWwiki/KAGRA.


Study explains how a rare gene variant contributes to Alzheimer’s disease

Lipid metabolism and cell membrane function can be disrupted in the neurons of people who carry rare variants of ABCA7.


A new study from MIT neuroscientists reveals how rare variants of a gene called ABCA7 may contribute to the development of Alzheimer’s in some of the people who carry it.

Dysfunctional versions of the ABCA7 gene, which are found in a very small proportion of the population, contribute strongly to Alzheimer’s risk. In the new study, the researchers discovered that these mutations can disrupt the metabolism of lipids that play an important role in cell membranes.

This disruption makes neurons hyperexcitable and leads them into a stressed state that can damage DNA and other cellular components. These effects, the researchers found, could be reversed by treating neurons with choline, an important building block needed to make cell membranes.

“We found pretty strikingly that when we treated these cells with choline, a lot of the transcriptional defects were reversed. We also found that the hyperexcitability phenotype and elevated amyloid beta peptides that we observed in neurons that lost ABCA7 was reduced after treatment,” says Djuna von Maydell, an MIT graduate student and the lead author of the study.

Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory and the Picower Professor in the MIT Department of Brain and Cognitive Sciences, is the senior author of the paper, which appears today in Nature.

Membrane dysfunction

Genomic studies of Alzheimer’s patients have found that people who carry variants of ABCA7 that generate reduced levels of functional ABCA7 protein have about double the odds of developing Alzheimer’s as people who don’t have those variants.

ABCA7 encodes a protein that transports lipids across cell membranes. Lipid metabolism is also the primary target of a more common Alzheimer’s risk factor known as APOE4. In previous work, Tsai’s lab has shown that APOE4, which is found in about half of all Alzheimer’s patients, disrupts brain cells’ ability to metabolize lipids and respond to stress.

To explore how ABCA7 variants might contribute to Alzheimer’s risk, the researchers obtained tissue samples from the Religious Orders Study/Memory and Aging Project (ROSMAP), a longitudinal study that has tracked memory, motor, and other age-related changes in older people since 1994. Of about 1,200 samples in the dataset that had genetic information available, the researchers obtained 12 from people who carried a rare variant of ABCA7.

The researchers performed single-cell RNA sequencing of neurons from these ABCA7 carriers, allowing them to determine which other genes are affected when ABCA7 is missing. They found that the most significantly affected genes fell into three clusters related to lipid metabolism, DNA damage, and oxidative phosphorylation (the metabolic process that cells use to capture energy as ATP).

To investigate how those alterations could affect neuron function, the researchers introduced ABCA7 variants into neurons derived from induced pluripotent stem cells.

These cells showed many of the same gene expression changes as the cells from the patient samples, especially among genes linked to oxidative phosphorylation. Further experiments showed that the “safety valve” that normally lets mitochondria limit excess build-up of electrical charge was less active. This can lead to oxidative stress, a state that occurs when too many cell-damaging free radicals build up in tissues.

Using these engineered cells, the researchers also analyzed the effects of ABCA7 variants on lipid metabolism. Cells with the variants showed altered metabolism of a molecule called phosphatidylcholine, which could lead to membrane stiffness and may explain why the mitochondrial membranes of the cells were unable to function normally.

A boost in choline

Those findings raised the possibility that intervening in phosphatidylcholine metabolism might reverse some of the cellular effects of ABCA7 loss. To test that idea, the researchers treated neurons with ABCA7 mutations with a molecule called CDP-choline, a precursor of phosphatidylcholine.

As these cells began producing new phosphatidylcholine (both saturated and unsaturated forms), their mitochondrial membrane potentials also returned to normal, and their oxidative stress levels went down.

The researchers then used induced pluripotent stem cells to generate 3D tissue organoids made of neurons with the ABCA7 variant. These organoids developed higher levels of amyloid beta proteins, which form the plaques seen in the brains of Alzheimer’s patients. However, those levels returned to normal when the organoids were treated with CDP-choline. The treatment also reduced neurons’ hyperexcitability.

In a 2021 paper, Tsai’s lab found that CDP-choline treatment could also reverse many of the effects of another Alzheimer’s-linked gene variant, APOE4, in mice. She is now working with researchers at the University of Texas and MD Anderson Cancer Center on a clinical trial exploring how choline supplements affect people who carry the APOE4 gene.

Choline is naturally found in foods such as eggs, meat, fish, and some beans and nuts. Boosting choline intake with supplements may offer a way for many people to reduce their risk of Alzheimer’s disease, Tsai says.

“From APOE4 to ABCA7 loss of function, my lab demonstrates that disruption of lipid homeostasis leads to the development of Alzheimer’s-related pathology, and that restoring lipid homeostasis, such as through choline supplementation, can ameliorate these pathological phenotypes,” she says.

In addition to the rare variants of ABCA7 that the researchers studied in this paper, there is also a more common variant that is found at a frequency of about 18 percent in the population. This variant was thought to be harmless, but the MIT team showed that cells with this variant exhibited many of the same gene alterations in lipid metabolism that they found in cells with the rare ABCA7 variants.

“There’s more work to be done in this direction, but this suggests that ABCA7 dysfunction might play an important role in a much larger part of the population than just people who carry the rare variants,” von Maydell says.

The research was funded, in part, by the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Carol and Gene Ludwig Family Foundation, James D. Cook, and the National Institutes of Health.


“Bottlebrush” particles deliver big chemotherapy payloads directly to cancer cells

Outfitted with antibodies that guide them to the tumor site, the new nanoparticles could reduce the side effects of treatment.


Using tiny particles shaped like bottlebrushes, MIT chemists have found a way to deliver a large range of chemotherapy drugs directly to tumor cells.

To guide them to the right location, each particle contains an antibody that targets a specific tumor protein. This antibody is tethered to bottlebrush-shaped polymer chains carrying dozens or hundreds of drug molecules — a much larger payload than can be delivered by any existing antibody-drug conjugates.

In mouse models of breast and ovarian cancer, the researchers found that treatment with these conjugated particles could eliminate most tumors. In the future, the particles could be modified to target other types of cancer, by swapping in different antibodies.

“We are excited about the potential to open up a new landscape of payloads and payload combinations with this technology that could ultimately provide more effective therapies for cancer patients,” says Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry at MIT, a member of the Koch Institute for Integrative Cancer Research, and the senior author of the new study.

MIT postdoc Bin Liu is the lead author of the paper, which appears today in Nature Biotechnology.

A bigger drug payload

Antibody-drug conjugates (ADCs) are a promising type of cancer treatment consisting of a cancer-targeting antibody attached to a chemotherapy drug. At least 15 ADCs have been approved by the FDA to treat several different types of cancer.

This approach allows specific targeting of a cancer drug to a tumor, which helps to prevent some of the side effects that occur when chemotherapy drugs are given intravenously. However, one drawback to currently approved ADCs is that only a handful of drug molecules can be attached to each antibody. That means they can only be used with very potent drugs — usually DNA-damaging agents or drugs that interfere with cell division.

To try to use a broader range of drugs, which are often less potent, Johnson and his colleagues decided to adapt bottlebrush particles that they had previously invented. These particles consist of a polymer backbone attached to tens to hundreds of “prodrug” molecules — inactive drug molecules that are activated upon release within the body. This structure allows the particles to deliver a wide range of drug molecules, and the particles can be designed to carry multiple drugs in specific ratios.

Using a technique called click chemistry, the researchers showed that they could attach one, two, or three of their bottlebrush polymers to a single tumor-targeting antibody, creating an antibody-bottlebrush conjugate (ABC). This means that just one antibody can carry hundreds of prodrug molecules. The currently approved ADCs can carry a maximum of about eight drug molecules.
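As a rough sense of scale, the payload arithmetic above can be sketched in a few lines. The figure of ~100 prodrugs per bottlebrush is an illustrative assumption within the “tens to hundreds” range described in the study, not a number reported by the researchers.

```python
# Back-of-envelope payload comparison: conventional ADC vs. an
# antibody-bottlebrush conjugate (ABC), using the ranges quoted above.

ADC_MAX_DRUGS = 8  # ~8 drug molecules per antibody in approved ADCs

def abc_payload(bottlebrushes: int, prodrugs_per_brush: int) -> int:
    """Total prodrug molecules carried by one antibody-bottlebrush conjugate."""
    return bottlebrushes * prodrugs_per_brush

# Assume ~100 prodrugs per bottlebrush (illustrative only).
for n in (1, 2, 3):
    payload = abc_payload(n, 100)
    print(f"{n} bottlebrush(es): ~{payload} prodrugs, "
          f"~{payload // ADC_MAX_DRUGS}x a typical ADC payload")
```

Even with the conservative end of the assumed range, a single antibody carrying one to three bottlebrushes dwarfs the roughly eight drug molecules an approved ADC can hold.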

The huge payload capacity of the ABC particles allows the researchers to incorporate less potent cancer drugs such as doxorubicin or paclitaxel, enhancing the customizability of the particles and the variety of drug combinations that can be used.

“We can use antibody-bottlebrush conjugates to increase the drug loading, and in that case, we can use less potent drugs,” Liu says. “In the future, we can very easily copolymerize with multiple drugs together to achieve combination therapy.”

The prodrug molecules are attached to the polymer backbone by cleavable linkers. After the particles reach a tumor site, some of these linkers are broken right away, allowing the drugs to kill nearby cancer cells even if those cells don’t express the target protein. Other particles are absorbed into cells that do express the target protein before releasing their toxic payload.

Effective treatment

For this study, the researchers created ABC particles carrying a few different types of drugs: microtubule inhibitors called MMAE and paclitaxel, and two DNA-damaging agents, doxorubicin and SN-38. They also designed ABC particles carrying an experimental type of drug known as PROTAC (proteolysis-targeting chimera), which can selectively degrade disease-causing proteins inside cells.

Each bottlebrush was tethered to an antibody targeting either HER2, a protein often overexpressed in breast cancer, or MUC1, which is commonly found in ovarian, lung, and other types of cancer.

The researchers tested each of the ABCs in mouse models of breast or ovarian cancer and found that in most cases, the ABC particles were able to eradicate the tumors. This treatment was significantly more effective than giving the same bottlebrush prodrugs by injection, without being conjugated to a targeting antibody.

“We used a very low dose, almost 100 times lower compared to the traditional small-molecule drug, and the ABC still can achieve much better efficacy compared to the small-molecule drug given on its own,” Liu says.

These ABCs also performed better than two FDA-approved ADCs, T-DXd and TDM-1, both of which target HER2. T-DXd carries deruxtecan, which interferes with DNA replication, and TDM-1 carries emtansine, a microtubule inhibitor.

In future work, the MIT team plans to try delivering combinations of drugs that work by different mechanisms, which could enhance their overall effectiveness. Among these could be immunotherapy drugs such as STING activators.

The researchers are also working on swapping in different antibodies, such as antibodies targeting EGFR, which is widely expressed in many tumors. More than 100 antibodies have been approved to treat cancer and other diseases, and in theory any of those could be conjugated to cancer drugs to create a targeted therapy.

The research was funded in part by the National Institutes of Health, the Ludwig Center at MIT, and the Koch Institute Frontier Research Program. 


Remembering David Baltimore, influential biologist and founding director of the Whitehead Institute

The longtime MIT professor and Nobel laureate was a globally respected researcher, academic leader, and science policy visionary who guided the careers of generations of scientists.


The Whitehead Institute for Biomedical Research fondly remembers its founding director, David Baltimore, a former MIT Institute Professor and Nobel laureate who died Sept. 6 at age 87.

With discovery after discovery, Baltimore brought to light key features of biology with direct implications for human health. His work at MIT earned him a share of the 1975 Nobel Prize in Physiology or Medicine (along with Howard Temin and Renato Dulbecco) for discovering reverse transcriptase and identifying retroviruses, which use RNA to synthesize viral DNA.

Following the award, Baltimore reoriented his laboratory’s focus to pursue a mix of immunology and virology. Among the lab’s most significant subsequent discoveries were the identification of a pair of proteins that play an essential role in enabling the immune system to create antibodies for so many different molecules, and investigations into how certain viruses can cause cell transformation and cancer. Work from Baltimore’s lab also helped lead to the development of the important cancer drug Gleevec — the first small molecule to target an oncoprotein inside of cells.

In 1982, Baltimore partnered with philanthropist Edwin C. “Jack” Whitehead to conceive and launch the Whitehead Institute and then served as its founding director until 1990. Within a decade of its founding, the Baltimore-led Whitehead Institute was named the world’s top research institution in molecular biology and genetics.

“More than 40 years later, Whitehead Institute is thriving, still guided by the strategic vision that David Baltimore and Jack Whitehead articulated,” says Phillip Sharp, MIT Institute Professor Emeritus, former Whitehead board member, and fellow Nobel laureate. “Of all David’s myriad and significant contributions to science, his role in building the first independent biomedical research institute associated with MIT and guiding it to extraordinary success may well prove to have had the broadest and longest-term impact.” 

Ruth Lehmann, director and president of the Whitehead Institute, and professor of biology at MIT, says: “I, like many others, owe my career to David Baltimore. He recruited me to Whitehead Institute and MIT in 1988 as a faculty member, taking a risk on an unproven, freshly-minted PhD graduate from Germany. As director, David was incredibly skilled at bringing together talented scientists at different stages of their careers and facilitating their collaboration so that the whole would be greater than the sum of its parts. This approach remains a core strength of Whitehead Institute.”

As part of the Whitehead Institute’s mission to cultivate the next generation of scientific leaders, Baltimore founded the Whitehead Fellows program, which provides extraordinarily talented recent PhD and MD graduates with the opportunity to launch their own labs, rather than to go into traditional postdoctoral positions. The program has been a huge success, with former fellows going on to excel as leaders in research, education, and industry.

David Page, MIT professor of biology, Whitehead Institute member, and former director who was the institute’s first Whitehead Fellow, recalls, “David was both an amazing scientist and a peerless leader of aspiring scientists. The launching of the Whitehead Fellows program reflected his recipe for institutional success: gather up the resources to allow young scientists to realize their dreams, recruit with an eye toward potential for outsized impact, and quietly mentor and support without taking credit for others’ successes — all while treating junior colleagues as equals. It is a beautiful strategy that David designed and executed magnificently.”

Sally Kornbluth, president of MIT and a member of the Whitehead Institute Board of Directors, says that “David was a scientific hero for so many. He was one of those remarkable individuals who could make stellar scientific breakthroughs and lead major institutions with extreme thoughtfulness and grace. He will be missed by the whole scientific community.”

“David was a wise giant. He was brilliant. He was an extraordinarily effective, ethical leader and institution builder who influenced and inspired generations of scientists and premier institutions,” says Susan Whitehead, member of the board of directors and daughter of Jack Whitehead.

Gerald R. Fink, the Margaret and Herman Sokol Professor Emeritus at MIT who was recruited by Baltimore from Cornell University as one of four founding members of the Whitehead Institute, and who succeeded him as director in 1990, observes: “David became my hero and friend. He upheld the highest scientific ideals and instilled trust and admiration in all around him.”

Video: “David Baltimore - Infinite History” (MIT, 2010)

Baltimore was born in New York City in 1938. His scientific career began at Swarthmore College, where he earned a bachelor’s degree with high honors in chemistry in 1960. He then began doctoral studies in biophysics at MIT, but in 1961 shifted his focus to animal viruses and moved to what is now the Rockefeller University, where he did his thesis work in the lab of Richard Franklin.

After completing postdoctoral fellowships with James Darnell at MIT and Jerard Hurwitz at the Albert Einstein College of Medicine, Baltimore launched his own lab at the Salk Institute for Biological Studies from 1965 to 1968. Then, in 1968, he returned to MIT as a member of its biology faculty, where he remained until 1990. (Whitehead Institute’s members hold parallel appointments as faculty in the MIT Department of Biology.)

In 1990, Baltimore left the Whitehead Institute and MIT to become the president of Rockefeller University. He returned to MIT from 1994 to 1997, serving as an Institute Professor, after which he was named president of Caltech. Baltimore held that position until 2006, when he was elected to a three-year term as president of the American Association for the Advancement of Science.

For decades, Baltimore was viewed not just as a brilliant scientist and talented academic leader, but also as a wise counsel to the scientific community. For example, he helped organize the 1975 Asilomar Conference on Recombinant DNA, which created stringent safety guidelines for the study and use of recombinant DNA technology. He played a leadership role in the development of policies on AIDS research and treatment, and on genomic editing. Serving as an advisor to both organizations and individual scientists, he helped to shape the strategic direction of dozens of institutions and to advance the careers of generations of researchers. As founding member Robert Weinberg summarizes it, “He had no tolerance for nonsense and weak science.”

In 2023, the Whitehead Institute established the endowed David Baltimore Chair in Biomedical Research, honoring Baltimore’s six decades of scientific, academic, and policy leadership and his impact on advancing innovative basic biomedical research.

“David was a visionary leader in science and the institutions that sustain it. He devoted his career to advancing scientific knowledge and strengthening the communities that make discovery possible, and his leadership of Whitehead Institute exemplified this,” says Richard Young, MIT professor of biology and Whitehead Institute member. “David approached life with keen observation, boundless curiosity, and a gift for insight that made him both a brilliant scientist and a delightful companion. His commitment to mentoring and supporting young scientists left a lasting legacy, inspiring the next generation to pursue impactful contributions to biomedical research. Many of us found in him not only a mentor and role model, but also a steadfast friend whose presence enriched our lives and whose absence will be profoundly felt.”


Alzheimer’s erodes brain cells’ control of gene expression, undermining function, cognition

Study of 3.5 million cells from more than 100 human brains finds Alzheimer’s progression — and resilience to disease — depends on preserving epigenomic stability.


Most people recognize Alzheimer’s disease by its devastating symptoms, such as memory loss, and new drugs target visible manifestations of the disease, such as plaques of amyloid proteins. Now, a sweeping open-access study in the Sept. 4 edition of Cell by MIT researchers shows the importance of understanding the disease as a battle over how well brain cells control the expression of their genes. The study paints a high-resolution picture of a desperate struggle to maintain healthy gene expression and regulation, in which the consequences of failure or success are nothing less than the loss or preservation of cell function and cognition.

The study presents a first-of-its-kind, multimodal atlas of combined gene expression and gene regulation spanning 3.5 million cells from six brain regions, obtained by profiling 384 post-mortem brain samples across 111 donors. The researchers profiled both the “transcriptome,” showing which genes are expressed into RNA, and the “epigenome,” the set of chromosomal modifications that establish which DNA regions are accessible, and thus usable, in different cell types.

The resulting atlas revealed many insights showing that the progression of Alzheimer’s is characterized by two major epigenomic trends. The first is that vulnerable cells in key brain regions suffer a breakdown of the rigorous nuclear “compartments” they normally maintain to ensure some parts of the genome are open for expression but others remain locked away. The second major finding is that susceptible cells experience a loss of “epigenomic information,” meaning they lose their grip on the unique pattern of gene regulation and expression that gives them their specific identity and enables their healthy function.

Accompanying the evidence of compromised compartmentalization and eroded epigenomic information are many specific findings pinpointing the molecular circuitry that breaks down, by cell type, brain region, and gene network. The researchers found, for instance, that when epigenomic conditions deteriorate, the door opens to expression of many disease-associated genes, whereas cells that manage to keep their epigenomic house in order can keep those genes in check. Moreover, the researchers clearly saw that where epigenomic breakdowns occurred, people lost cognitive ability, but where epigenomic stability remained, so did cognition.

“To understand the circuitry, the logic responsible for gene expression changes in Alzheimer’s disease [AD], we needed to understand the regulation and upstream control of all the changes that are happening, and that’s where the epigenome comes in,” says senior author Manolis Kellis, a professor in the Computer Science and Artificial Intelligence Lab and head of MIT’s Computational Biology Group. “This is the first large-scale, single-cell, multi-region gene-regulatory atlas of AD, systematically dissecting the dynamics of epigenomic and transcriptomic programs across disease progression and resilience.”

By providing that detailed examination of the epigenomic mechanisms of Alzheimer’s progression, the study provides a blueprint for devising new Alzheimer’s treatments that can target factors underlying the broad erosion of epigenomic control or the specific manifestations that affect key cell types such as neurons and supporting glial cells.

“The key to developing new and more effective treatments for Alzheimer’s disease depends on deepening our understanding of the mechanisms that contribute to the breakdowns of cellular and network function in the brain,” says Picower Professor and co-corresponding author Li-Huei Tsai, director of The Picower Institute for Learning and Memory and a founding member of MIT’s Aging Brain Initiative, along with Kellis. “This new data advances our understanding of how epigenomic factors drive disease.”

Kellis Lab members Zunpeng Liu and Shanshan Zhang are the study’s co-lead authors.

Compromised compartments and eroded information

Among the post-mortem brain samples in the study, 57 came from donors to the Religious Orders Study or the Rush Memory and Aging Project (collectively known as “ROSMAP”) who did not have AD pathology or symptoms, while 33 came from donors with early-stage pathology and 21 came from donors at a late stage. The samples therefore provided rich information about the symptoms and pathology each donor was experiencing before death.

In the new study, Liu and Zhang combined analyses of single-cell RNA sequencing of the samples, which measures which genes are being expressed in each cell, and ATAC-seq, which measures whether chromosomal regions are accessible for gene expression. Considered together, these transcriptomic and epigenomic measures enabled the researchers to understand the molecular details of how gene expression is regulated across seven broad classes of brain cells (e.g., neurons and glial cell types) and 67 cell subtypes (e.g., 17 kinds of excitatory neurons and six kinds of inhibitory ones).

The researchers annotated more than 1 million gene-regulatory control regions that different cells employ, via epigenomic marking, to establish their specific identities and functionality. Then, by comparing cells from Alzheimer’s brains to those without, and accounting for stage of pathology and cognitive symptoms, they could produce rigorous associations between the erosion of these epigenomic markings and, ultimately, loss of function.

For instance, they saw that among people who advanced to late-stage AD, normally repressive compartments opened up for more expression, and compartments that were normally open in health became more repressed. Worryingly, as brain cells’ normally repressive compartments opened up, the cells became more afflicted with disease.

“For Alzheimer’s patients, repressive compartments opened up, and gene expression levels increased, which was associated with decreased cognitive function,” explains Liu.

But when cells managed to keep their compartments in order such that they expressed the genes they were supposed to, people remained cognitively intact.

Meanwhile, based on the cells’ expression of their regulatory elements, the researchers created an epigenomic information score for each cell. Generally, information declined as pathology progressed, but the decline was particularly notable among cells in the two brain regions affected earliest in Alzheimer’s: the entorhinal cortex and the hippocampus. The analyses also highlighted cell types that were especially vulnerable, including microglia, which play immune and other roles; oligodendrocytes, which produce myelin insulation for neurons; and particular kinds of excitatory neurons.

Risk genes and “chromatin guardians”

Detailed analyses in the paper highlighted how epigenomic regulation tracked with disease-related problems, Liu notes. The e4 variant of the APOE gene, for instance, is widely understood to be the single biggest genetic risk factor for Alzheimer’s. In APOE4 brains, microglia initially responded to the emerging disease pathology with an increase in their epigenomic information, suggesting that they were stepping up to their unique responsibility to fight off disease. But as the disease progressed, the cells exhibited a sharp drop-off in information, a sign of deterioration and degeneration. This turnabout was strongest in people who had two copies of APOE4, rather than just one. The findings, Kellis says, suggest that APOE4 might destabilize the genome of microglia, causing them to burn out.

Another example is the fate of neurons expressing the gene RELN and its protein Reelin. Prior studies, including by Kellis and Tsai, have shown that RELN-expressing neurons in the entorhinal cortex and hippocampus are especially vulnerable in Alzheimer’s, but promote resilience if they survive. The new study sheds light on their fate by demonstrating that they exhibit early and severe epigenomic information loss as disease advances, but that in people who remained cognitively resilient, the neurons maintained epigenomic information.

In yet another example, the researchers tracked what they colloquially call “chromatin guardians” because their expression sustains and regulates cells’ epigenomic programs. For instance, cells with greater epigenomic erosion and advanced AD progression displayed increased chromatin accessibility in areas that were supposed to be locked down by Polycomb repression genes or other gene expression silencers. While resilient cells expressed genes promoting neural connectivity, epigenomically eroded cells expressed genes linked to inflammation and oxidative stress.

“The message is clear: Alzheimer’s is not only about plaques and tangles, but about the erosion of nuclear order itself,” Kellis says. “Cognitive decline emerges when chromatin guardians lose ground to the forces of erosion, switching from resilience to vulnerability at the most fundamental level of genome regulation.

“And when our brain cells lose their epigenomic memory marks and epigenomic information at the lowest level deep inside our neurons and microglia, it seems that Alzheimer’s patients also lose their memory and cognition at the highest level.”

Other authors of the paper are Benjamin T. James, Kyriaki Galani, Riley J. Mangan, Stuart Benjamin Fass, Chuqian Liang, Manoj M. Wagle, Carles A. Boix, Yosuke Tanigawa, Sukwon Yun, Yena Sung, Xushen Xiong, Na Sun, Lei Hou, Martin Wohlwend, Mufan Qiu, Xikun Han, Lei Xiong, Efthalia Preka, Lei Huang, William F. Li, Li-Lun Ho, Amy Grayson, Julio Mantero, Alexey Kozlenkov, Hansruedi Mathys, Tianlong Chen, Stella Dracheva, and David A. Bennett.

Funding for the research came from the National Institutes of Health, the National Science Foundation, the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, Eduardo Eurnekian, and Joseph P. DiSabato.


Physicists devise an idea for lasers that shoot beams of neutrinos

Super-cooling radioactive atoms could produce a laser-like neutrino beam, offering a new way to study these ghostly particles — and possibly a new form of communication.


At any given moment, trillions of particles called neutrinos are streaming through our bodies and every material in our surroundings, without noticeable effect. Far lighter than electrons, these ghostly entities are the most abundant particles with mass in the universe.

The exact mass of a neutrino is a big unknown. The particle is so small, and interacts so rarely with matter, that it is incredibly difficult to measure. Scientists attempt to do so by harnessing nuclear reactors and massive particle accelerators to generate unstable atoms, which then decay into various byproducts including neutrinos. In this way, physicists can manufacture beams of neutrinos that they can probe for properties including the particle’s mass.

Now MIT physicists propose a much more compact and efficient way to generate neutrinos that could be realized in a tabletop experiment.

In a paper appearing in Physical Review Letters, the physicists introduce the concept for a “neutrino laser” — a burst of neutrinos that could be produced by laser-cooling a gas of radioactive atoms down to temperatures colder than interstellar space. At such frigid temperatures, the team predicts, the atoms should behave as one quantum entity and radioactively decay in sync.

The decay of radioactive atoms naturally releases neutrinos, and the physicists say that in a coherent, quantum state this decay should accelerate, along with the production of neutrinos. This quantum effect should produce an amplified beam of neutrinos, broadly similar to how photons are amplified to produce conventional laser light.

“In our concept for a neutrino laser, the neutrinos would be emitted at a much faster rate than they normally would, sort of like a laser emits photons very fast,” says study co-author Ben Jones PhD ’15, an associate professor of physics at the University of Texas at Arlington.

As an example, the team calculated that such a neutrino laser could be realized by trapping 1 million atoms of rubidium-83. Normally, the radioactive atoms have a half-life of about 82 days, meaning that half the atoms decay, shedding an equivalent number of neutrinos, every 82 days. The physicists show that, by cooling rubidium-83 to a coherent, quantum state, the atoms should undergo radioactive decay in mere minutes.
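To see why coherence matters, it helps to work out how slowly 1 million rubidium-83 atoms would decay on their own. The sketch below assumes ordinary exponential decay with the 82-day half-life quoted above; it does not model the coherent, superradiant case, only the baseline it would be compared against.

```python
import math

N0 = 1_000_000          # trapped rubidium-83 atoms (from the article)
HALF_LIFE_DAYS = 82.0   # half-life quoted in the article

# Decay constant: lambda = ln(2) / t_half
lam_per_day = math.log(2) / HALF_LIFE_DAYS

# Expected number of incoherent decays (each shedding a neutrino)
# during the first minute: N0 * (1 - exp(-lambda * t))
minutes_per_day = 24 * 60
decays_first_minute = N0 * (1 - math.exp(-lam_per_day / minutes_per_day))
print(f"~{decays_first_minute:.1f} decays in the first minute")
```

Under these assumptions, only about six atoms out of a million decay per minute; the proposed coherent state would instead drive a large fraction of all the atoms to decay within minutes.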

“This is a novel way to accelerate radioactive decay and the production of neutrinos, which to my knowledge, has never been done,” says co-author Joseph Formaggio, professor of physics at MIT.

The team hopes to build a small tabletop demonstration to test their idea. If it works, they envision a neutrino laser could be used as a new form of communication, by which the particles could be sent directly through the Earth to underground stations and habitats. The neutrino laser could also be an efficient source of radioisotopes, which, along with neutrinos, are byproducts of radioactive decay. Such radioisotopes could be used to enhance medical imaging and cancer diagnostics.

Coherent condensate

For every atom in the universe, there are about a billion neutrinos. A large fraction of these invisible particles may have formed in the first moments following the Big Bang, and they persist in what physicists call the “cosmic neutrino background.” Neutrinos are also produced whenever atomic nuclei fuse together or break apart, such as in the fusion reactions in the sun’s core, and in the normal decay of radioactive materials.

Several years ago, Formaggio and Jones separately considered a novel possibility: What if a natural process of neutrino production could be enhanced through quantum coherence? Initial explorations revealed fundamental roadblocks. Years later, while discussing the properties of ultracold tritium (an unstable isotope of hydrogen that undergoes radioactive decay), they asked: Could the production of neutrinos be enhanced if radioactive atoms such as tritium could be made so cold that they could be brought into a quantum state known as a Bose-Einstein condensate?

A Bose-Einstein condensate, or BEC, is a state of matter that forms when a gas of certain particles is cooled down to near absolute zero. At this point, the particles are brought down to their lowest energy level and stop moving as individuals. In this deep freeze, the particles can start to “feel” each other’s quantum effects, and can act as one coherent entity — a unique phase that can result in exotic physics.

BECs have been realized in a number of atomic species. (One of the first instances was with sodium atoms, by MIT’s Wolfgang Ketterle, who shared the 2001 Nobel Prize in Physics for the result.) However, no one has made a BEC from radioactive atoms. To do so would be exceptionally challenging, as most radioisotopes have short half-lives and would decay entirely before they could be sufficiently cooled to form a BEC.

Nevertheless, Formaggio wondered, if radioactive atoms could be made into a BEC, would this enhance the production of neutrinos in some way? In trying to work out the quantum mechanical calculations, he found initially that no such effect was likely.

“It turned out to be a red herring — we can’t accelerate the process of radioactive decay, and neutrino production, just by making a Bose-Einstein condensate,” Formaggio says.

In sync with optics

Several years later, Jones revisited the idea, with an added ingredient: superradiance — a phenomenon of quantum optics that occurs when a collection of light-emitting atoms is stimulated to behave in sync. In this coherent phase, it’s predicted that the atoms should emit a burst of photons that is “superradiant,” or more radiant than when the atoms are normally out of sync.

Jones proposed to Formaggio that perhaps a similar superradiant effect is possible in a radioactive Bose-Einstein condensate, which could then result in a similar burst of neutrinos. The physicists went to the drawing board to work out the equations of quantum mechanics governing how light-emitting atoms morph from a coherent starting state into a superradiant state. They used the same equations to work out what radioactive atoms in a coherent BEC state would do.

“The outcome is: You get a lot more photons more quickly, and when you apply the same rules to something that gives you neutrinos, it will give you a whole bunch more neutrinos more quickly,” Formaggio explains. “That’s when the pieces clicked together, that superradiance in a radioactive condensate could enable this accelerated, laser-like neutrino emission.”

To test their concept in theory, the team calculated how neutrinos would be produced from a cloud of 1 million super-cooled rubidium-83 atoms. They found that, in the coherent BEC state, the atoms radioactively decayed at an accelerating rate, releasing a laser-like beam of neutrinos within minutes.

Now that the physicists have shown in theory that a neutrino laser is possible, they plan to test the idea with a small tabletop setup.

“It should be enough to take this radioactive material, vaporize it, trap it with lasers, cool it down, and then turn it into a Bose-Einstein condensate,” Jones says. “Then it should start doing this superradiance spontaneously.”

The pair acknowledge that such an experiment will require a number of precautions and careful manipulation.

“If it turns out that we can show it in the lab, then people can think about: Can we use this as a neutrino detector? Or a new form of communication?” Formaggio says. “That’s when the fun really starts.”


Study finds exoplanet TRAPPIST-1e is unlikely to have a Venus- or Mars-like atmosphere

Astronomers led by EAPS postdoc Ana Glidden ruled out several atmospheric scenarios for the planet, narrowing ideas of what habitability there might look like.


In the search for habitable exoplanets, atmospheric conditions play a key role in determining if a planet can sustain liquid water. Suitable candidates often sit in the “Goldilocks zone,” a distance that is neither too close nor too far from their host star to allow liquid water. With the launch of the James Webb Space Telescope (JWST), astronomers are collecting improved observations of exoplanet atmospheres that will help determine which exoplanets are good candidates for further study.

In an open-access paper published today in The Astrophysical Journal Letters, astronomers used JWST to take a closer look at the atmosphere of the exoplanet TRAPPIST-1e, located in the TRAPPIST-1 system. While they haven’t found definitive proof of what it is made of — or if it even has an atmosphere — they were able to rule out several possibilities.

“The idea is: If we assume that the planet is not airless, can we constrain different atmospheric scenarios? Do those scenarios still allow for liquid water at the surface?” says Ana Glidden, a postdoc in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and the MIT Kavli Institute for Astrophysics and Space Research, and the first author on the paper. The answer to both questions, they found, was yes.

The new data rule out a hydrogen-dominated atmosphere and place tighter constraints on secondary atmospheres — those generated by processes such as volcanic eruptions and outgassing from the planet’s interior. The data still allow for the possibility of a surface ocean.

“TRAPPIST-1e remains one of our most compelling habitable-zone planets, and these new results take us a step closer to knowing what kind of world it is,” says Sara Seager, Class of 1941 Professor of Planetary Science at MIT and co-author on the study. “The evidence pointing away from Venus- and Mars-like atmospheres sharpens our focus on the scenarios still in play.”

The study’s co-authors also include collaborators from the University of Arizona, Johns Hopkins University, University of Michigan, the Space Telescope Science Institute, and members of the JWST-TST DREAMS Team.

Improved observations

Exoplanet atmospheres are studied using a technique called transmission spectroscopy. When a planet passes in front of its host star, the starlight is filtered through the planet’s atmosphere. Astronomers can determine which molecules are present in the atmosphere by seeing how the light changes at different wavelengths.

“Each molecule has a spectral fingerprint. You can compare your observations with those fingerprints to suss out which molecules may be present,” says Glidden.
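The fingerprint comparison Glidden describes can be caricatured in a few lines (everything here is synthetic: the feature strengths are made up, and real retrievals fit radiative-transfer models rather than Gaussian bumps):

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(0.6, 5.0, 200)   # wavelength grid in microns (JWST-like)

def fingerprint(centers, widths, depths):
    """Toy molecular 'fingerprint': Gaussian absorption bumps on a flat
    transit-depth baseline (real fingerprints come from lab line lists)."""
    spec = np.zeros_like(wl)
    for c, w, d in zip(centers, widths, depths):
        spec += d * np.exp(-0.5 * ((wl - c) / w) ** 2)
    return spec

# Templates with made-up feature strengths near real CO2 and CH4 bands.
templates = {
    "CO2": fingerprint([2.7, 4.3], [0.10, 0.15], [1.0, 1.5]),
    "CH4": fingerprint([1.7, 3.3], [0.10, 0.20], [0.8, 1.2]),
}

# Synthetic "observation": CO2-like features plus photon noise.
sigma = 0.05
observed = templates["CO2"] + rng.normal(0.0, sigma, wl.size)

def chi2(obs, model):
    """Goodness of fit between observation and a template."""
    return float(np.sum(((obs - model) / sigma) ** 2))

scores = {mol: chi2(observed, t) for mol, t in templates.items()}
best = min(scores, key=scores.get)   # the CO2 template fits far better
```

The principle is the same at full scale: whichever combination of fingerprints best reproduces the wavelength-by-wavelength changes in the filtered starlight suggests which molecules may be present.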

JWST has broader wavelength coverage and higher spectral resolution than its predecessor, the Hubble Space Telescope, which makes it possible to observe molecules like carbon dioxide and methane that are more commonly found in our own solar system. However, the improved observations have also highlighted the problem of stellar contamination, where changes in the host star’s temperature, due to features like starspots and flares, make it difficult to interpret data.

“Stellar activity strongly interferes with the planetary interpretation of the data because we can only observe a potential atmosphere through starlight,” says Glidden. “It is challenging to separate out which signals come from the star versus from the planet itself.”

Ruling out atmospheric conditions

The researchers used a novel approach to mitigate stellar activity, comparing repeated observations of the planet. As a result, “any signal you can see varying visit-to-visit is most likely from the star, while anything that’s consistent between the visits is most likely the planet,” says Glidden.
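That visit-to-visit logic can be sketched with synthetic numbers (a minimal illustration, not the study's pipeline, which works on real JWST spectra):

```python
import numpy as np

rng = np.random.default_rng(1)
n_visits, n_bins = 5, 80
wl = np.linspace(0.0, 1.0, n_bins)

# Repeating (planetary) feature: a fixed absorption bump.
planet = 0.3 * np.exp(-0.5 * ((wl - 0.5) / 0.05) ** 2)

# Stellar contamination: the same illustrative spectral shape, but with a
# different amplitude on every visit (a stand-in for spot/flare evolution).
amps = np.array([0.3, -0.3, 0.2, -0.2, 0.0])
visits = np.stack([
    planet + a * np.sin(2 * np.pi * wl) + rng.normal(0.0, 0.02, n_bins)
    for a in amps
])

consistent = visits.mean(axis=0)  # survives averaging -> planet candidate
variable = visits.std(axis=0)     # visit-to-visit scatter -> stellar
```

Averaging across visits recovers the repeating bump, while the scatter map flags the wavelengths dominated by the changing star.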

The researchers were then able to compare the results to several possible atmospheric scenarios. They found that carbon dioxide-rich atmospheres, like those of Mars and Venus, are unlikely, while a warm, nitrogen-rich atmosphere similar to that of Saturn’s moon Titan remains possible. The evidence, however, is too weak to determine whether any atmosphere is present at all, let alone to detect a specific gas. Additional observations already in the works will help narrow down the possibilities.

“With our initial observations, we have showcased the gains made with JWST. Our follow-up program will help us to further refine our understanding of one of our best habitable-zone planets,” says Glidden.


A comprehensive cellular-resolution map of brain activity

An international collaboration of neuroscientists, including MIT Professor Ila Fiete, developed a brain-wide map of decision-making at cellular resolution in mice.


The first comprehensive map of mouse brain activity has been unveiled by a large international collaboration of neuroscientists. 

Researchers from the International Brain Laboratory (IBL), including MIT neuroscientist Ila Fiete, published their open-access findings today in two papers in Nature, revealing insights into how decision-making unfolds across the entire brain in mice at single-cell resolution. This brain-wide activity map challenges the traditional hierarchical view of information processing in the brain and shows that decision-making is distributed across many regions in a highly coordinated way.

“This is the first time anyone has produced a full, brain-wide map of the activity of single neurons during decision-making,” explains co-founder of IBL Alexandre Pouget. “The scale is unprecedented as we recorded from over half-a-million neurons across mice in 12 labs, covering 279 brain areas, which together represent 95 percent of the mouse brain volume. The decision-making activity, and particularly reward, lit up the brain like a Christmas tree,” adds Pouget, who is also a group leader at the University of Geneva in Switzerland.

Modeling decision-making

The brain map was made possible by a major international collaboration of neuroscientists from multiple universities, including MIT. Researchers across 12 labs used state-of-the-art silicon electrodes, called Neuropixels probes, to record simultaneously from many neurons while mice carried out a decision-making task.

“Participating in the International Brain Laboratory has added new ways for our group to contribute to science,” says Fiete, who is also a professor of brain and cognitive sciences, an associate investigator at the McGovern Institute for Brain Research, and director of the K. Lisa Yang ICoN Center at MIT. “Our lab has helped standardize methods to analyze and generate robust conclusions from data. As computational neuroscientists interested in building models of how the brain works, access to brain-wide recordings is incredible: the traditional approach of recording from one or a few brain areas limited our ability to build and test theories, resulting in fragmented models. Now, we have the delightful but formidable task to make sense of how all parts of the brain coordinate to perform a behavior. Surprisingly, having a full view of the brain leads to simplifications in the models of decision-making,” says Fiete.

The labs collected data from mice performing a decision-making task with sensory, motor, and cognitive components. In the task, a mouse sits in front of a screen and a light appears on the left or right side. If the mouse then responds by moving a small wheel in the correct direction, it receives a reward.

In some trials, the light is so faint that the animal must guess which way to turn the wheel, for which it can use prior knowledge: the light tends to appear more frequently on one side for a number of trials, before the high-frequency side switches. Well-trained mice learn to use this information to help them make correct guesses. These challenging trials therefore allowed the researchers to study how prior expectations influence perception and decision-making.
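The value of such a prior can be seen in a toy simulation of the block structure (the 80/20 bias, 50-trial blocks, and 20-trial memory window below are illustrative stand-ins, not the task's exact parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

# Block structure: the light appears on one side 80 percent of the time,
# and the biased side switches every 50 trials.
n_trials, block_len, p_biased = 1000, 50, 0.8

sides = np.empty(n_trials, dtype=int)
biased = 1
for t in range(n_trials):
    if t > 0 and t % block_len == 0:
        biased = -biased                      # block switch
    sides[t] = biased if rng.random() < p_biased else -biased

# On "zero-contrast" trials the stimulus is invisible, so the animal can
# only guess -- but tracking the recent side frequency beats chance (0.5).
window = 20
hits = 0
for t in range(window, n_trials):
    guess = 1 if sides[t - window:t].mean() > 0 else -1
    hits += int(guess == sides[t])
accuracy = hits / (n_trials - window)
```

A guesser that tracks recent history lands well above 50 percent, which is exactly the behavioral signature that lets the researchers probe how prior expectations shape decisions.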

Brain-wide results

The first paper, “A brain-wide map of neural activity during complex behaviour,” showed that decision-making signals are surprisingly distributed across the brain, not localized to specific regions. This adds brain-wide evidence to a growing number of studies that challenge the traditional hierarchical model of brain function, and emphasizes that there is constant communication across brain areas during decision-making, movement onset, and even reward. This means that neuroscientists will need to take a more holistic, brain-wide approach when studying complex behaviors in the future.

“The unprecedented breadth of our recordings pulls back the curtain on how the entire brain performs the whole arc of sensory processing, cognitive decision-making, and movement generation,” says Fiete. “Structuring a collaboration that collects a large standardized dataset which single labs could not assemble is a revolutionary new direction for systems neuroscience, initiating the field into the hyper-collaborative mode that has contributed to leaps forward in particle physics and human genetics. Beyond our own conclusions, the dataset and associated technologies, which were released much earlier as part of the IBL mission, have already become a massively used resource for the entire neuroscience community.”

The second paper, “Brain-wide representations of prior information,” showed that prior expectations — our beliefs about what is likely to happen based on our recent experience — are encoded throughout the brain. Surprisingly, these expectations are not only found in cognitive areas, but also brain areas that process sensory information and control actions. For example, expectations are even encoded in early sensory areas such as the thalamus, the brain’s first relay for visual input from the eye. This supports the view that the brain acts as a prediction machine, but with expectations encoded across multiple brain structures playing a central role in guiding behavior responses. These findings could have implications for understanding conditions such as schizophrenia and autism, which are thought to be caused by differences in the way expectations are updated in the brain.

“Much remains to be unpacked: If it is possible to find a signal in a brain area, does it mean that this area is generating the signal, or simply reflecting a signal generated somewhere else? How strongly is our perception of the world shaped by our expectations? Now we can generate some quantitative answers and begin the next phase of experiments to learn about the origins of the expectation signals by intervening to modulate their activity,” says Fiete.

Looking ahead, the IBL team plans to move beyond its initial focus on decision-making and explore a broader range of neuroscience questions. With renewed funding in hand, IBL aims to widen its research scope and continue supporting large-scale, standardized experiments.

New model of collaborative neuroscience

Officially launched in 2017, IBL introduced a new model of collaboration in neuroscience that uses a standardized set of tools and data processing pipelines shared across multiple labs, enabling the collection of massive datasets while ensuring data alignment and reproducibility. This approach to democratize and accelerate science draws inspiration from large-scale collaborations in physics and biology, such as CERN and the Human Genome Project.

All data from these studies, along with detailed specifications of the tools and protocols used for data collection, are openly accessible to the global scientific community for further analysis and research. Summaries of these resources can be viewed and downloaded on the IBL website under the sections: Data, Tools, Protocols.

This research was supported by grants from Wellcome, the Simons Foundation, the National Institutes of Health, the National Science Foundation, the Gatsby Charitable Foundation, and by the Max Planck Society and the Humboldt Foundation.


New gift expands mental illness studies at Poitras Center for Psychiatric Disorders Research

A commitment from longtime supporters Patricia and James Poitras ’63 initiates multidisciplinary efforts to understand and treat complex psychiatric disorders.


One in every eight people — 970 million globally — live with mental illness, according to the World Health Organization, with depression and anxiety being the most common mental health conditions worldwide. Existing therapies for complex psychiatric disorders like depression, anxiety, and schizophrenia have limitations, and federal funding to address these shortcomings is growing increasingly uncertain.

Patricia and James Poitras ’63 have committed $8 million to the Poitras Center for Psychiatric Disorders Research to launch pioneering research initiatives aimed at uncovering the brain basis of major mental illness and accelerating the development of novel treatments.

“Federal funding rarely supports the kind of bold, early-stage research that has the potential to transform our understanding of psychiatric illness. Pat and I want to help fill that gap — giving researchers the freedom to follow their most promising leads, even when the path forward isn’t guaranteed,” says James Poitras, who is chair of the McGovern Institute for Brain Research board.

Their latest gift builds upon their legacy of philanthropic support for psychiatric disorders research at MIT, which now exceeds $46 million.

“With deep gratitude for Jim and Pat’s visionary support, we are eager to launch a bold set of studies aimed at unraveling the neural and cognitive underpinnings of major mental illnesses,” says Professor Robert Desimone, director of the McGovern Institute, home to the Poitras Center. “Together, these projects represent a powerful step toward transforming how we understand and treat mental illness.”

A legacy of support

Soon after joining the McGovern Institute Leadership Board in 2006, the Poitrases made a $20 million commitment to establish the Poitras Center for Psychiatric Disorders Research at MIT. The center’s goal, to improve human health by addressing the root causes of complex psychiatric disorders, is deeply personal to them both.

“We had decided many years ago that our philanthropic efforts would be directed towards psychiatric research. We could not have imagined then that this perfect synergy between research at MIT’s McGovern Institute and our own philanthropic goals would develop,” recalls Patricia. 

The center supports research at the McGovern Institute and collaborative projects with institutions such as the Broad Institute of MIT and Harvard, McLean Hospital, Mass General Brigham, and other clinical research centers. Since its establishment in 2007, the center has enabled advances in psychiatric research including the development of a machine learning “risk calculator” for bipolar disorder, the use of brain imaging to predict treatment outcomes for anxiety, and studies demonstrating that mindfulness can improve mental health in adolescents.

For the past decade, the Poitrases have also fueled breakthroughs in the lab of McGovern investigator and MIT Professor Feng Zhang, backing the invention of powerful CRISPR systems and other molecular tools that are transforming biology and medicine. Their support has enabled the Zhang team to engineer new delivery vehicles for gene therapy, including vehicles capable of carrying genetic payloads that were once out of reach. The lab has also advanced innovative RNA-guided gene engineering tools such as NovaIscB, published in Nature Biotechnology in May 2025. These revolutionary genome editing and delivery technologies hold promise for the next generation of therapies needed for serious psychiatric illness.

In addition to fueling research in the center, the Poitras family has gifted two endowed professorships — the James and Patricia Poitras Professor of Neuroscience at MIT, currently held by Feng Zhang, and the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT, held by Guoping Feng — and an annual postdoctoral fellowship at the McGovern Institute.

New initiatives at the Poitras Center

The Poitras family’s latest commitment to the Poitras Center will launch an ambitious set of new projects that bring together neuroscientists, clinicians, and computational experts to probe underpinnings of complex psychiatric disorders including schizophrenia, anxiety, and depression. These efforts reflect the center’s core mission: to speed scientific discovery and therapeutic innovation in the field of psychiatric brain disorders research.

McGovern cognitive neuroscientists Evelina Fedorenko PhD ’07, an associate professor, and Nancy Kanwisher ’80, PhD ’86, the Walter A. Rosenblith Professor of Cognitive Neuroscience — in collaboration with psychiatrist Ann Shinn of McLean Hospital — will explore how altered inner speech and reasoning contribute to the symptoms of schizophrenia. They will collect functional MRI data from individuals diagnosed with schizophrenia and matched controls as they perform reasoning tasks. The goal is to identify the brain activity patterns that underlie impaired reasoning in schizophrenia, a core cognitive disruption in the disorder.

A complementary line of investigation will focus on the role of inner speech — the “voice in our head” that shapes thought and self-awareness. The team will conduct a large-scale online behavioral study of neurotypical individuals to analyze how inner speech characteristics correlate with schizophrenia-spectrum traits. This will be followed by neuroimaging work comparing brain architecture among individuals with strong or weak inner voices and people with schizophrenia, with the aim of discovering neural markers linked to self-talk and disrupted cognition.

A different project led by McGovern neuroscientist and MIT Associate Professor Mark Harnett and 2024–2026 Poitras Center Postdoctoral Fellow Cynthia Rais focuses on how ketamine — an increasingly used antidepressant — alters brain circuits to produce rapid and sustained improvements in mood. Despite its clinical success, ketamine’s mechanisms of action remain poorly understood. The Harnett lab is using sophisticated tools to track how ketamine affects synaptic communication and large-scale brain network dynamics, particularly in models of treatment-resistant depression. By mapping these changes at both the cellular and systems levels, the team hopes to reveal how ketamine lifts mood so quickly — and inform the development of safer, longer-lasting antidepressants.

Guoping Feng is leveraging a new animal model of depression to uncover the brain circuits that drive major depressive disorder. The new animal model provides a powerful system for studying the intricacies of mood regulation. Feng’s team is using state-of-the-art molecular tools to identify the specific genes and cell types involved in this circuit, with the goal of developing targeted treatments that can fine-tune these emotional pathways.

“This is one of the most promising models we have for understanding depression at a mechanistic level,” says Feng, who is also associate director of the McGovern Institute. “It gives us a clear target for future therapies.”

Another novel approach to treating mood disorders comes from the lab of James DiCarlo, the Peter de Florez Professor of Neuroscience at MIT, who is exploring the brain’s visual-emotional interface as a therapeutic tool for anxiety. The amygdala, a key emotional center in the brain, is heavily influenced by visual input. DiCarlo’s lab is using advanced computational models to design visual scenes that may subtly shift emotional processing in the brain — essentially using sight to regulate mood. Unlike traditional therapies, this strategy could offer a noninvasive, drug-free option for individuals suffering from anxiety.

Together, these projects exemplify the kind of interdisciplinary, high-impact research that the Poitras Center was established to support.

“Mental illness affects not just individuals, but entire families who often struggle in silence and uncertainty,” adds Patricia Poitras. “Our hope is that Poitras Center scientists will continue to make important advancements and spark novel treatments for complex mental health disorders and, most of all, give families living with these conditions a renewed sense of hope for the future.”


New particle detector passes the “standard candle” test

The sPHENIX detector is on track to reveal properties of primordial quark-gluon plasma.


A new and powerful particle detector just passed a critical test in its goal to decipher the ingredients of the early universe.

The sPHENIX detector is the newest experiment at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) and is designed to precisely measure products of high-speed particle collisions. From the aftermath, scientists hope to reconstruct the properties of quark-gluon plasma (QGP) — a white-hot soup of subatomic particles known as quarks and gluons that is thought to have sprung into existence in the few microseconds following the Big Bang. Just as quickly, the mysterious plasma disappeared, cooling and combining to form the protons and neutrons that make up today’s ordinary matter.

Now, the sPHENIX detector has made a key measurement that proves it has the precision to help piece together the primordial properties of quark-gluon plasma.

In a paper in the Journal of High Energy Physics, scientists including physicists at MIT report that sPHENIX precisely measured the number and energy of particles that streamed out from gold ions that collided at close to the speed of light.

Straight ahead

This test is considered in physics to be a “standard candle,” meaning that the measurement is a well-established constant that can be used to gauge a detector’s precision.

In particular, sPHENIX successfully measured the number of charged particles that are produced when two gold ions collide, and determined how this number changes when the ions collide head-on, versus just glancing by. The detector’s measurements revealed that head-on collisions produced 10 times more charged particles, which were also 10 times more energetic, compared to less straight-on collisions.
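The centrality dependence being measured can be caricatured with a toy geometric model (not the Glauber modeling heavy-ion experiments actually use; every number below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n_events, R = 20_000, 7.0   # events; toy gold-nucleus radius in fm

# Impact parameter b drawn with area weighting (probability density ~ b).
b = 2 * R * np.sqrt(rng.random(n_events))

# Toy geometric overlap: head-on (b = 0) gives 1, a grazing pass gives 0.
overlap = np.clip(1 - b / (2 * R), 0.0, 1.0)

# Charged-particle count per event, growing with how head-on the hit is.
mult = rng.poisson(1000 * overlap**2)

# Classify events by multiplicity percentile, as experiments do.
central = mult[mult >= np.percentile(mult, 95)]          # most head-on ~5%
peripheral = mult[(mult > 0) & (mult <= np.percentile(mult, 50))]
ratio = central.mean() / peripheral.mean()
```

Even this crude geometry reproduces the qualitative trend: the most head-on events yield far more charged particles than glancing ones, which is the relationship the detector had to resolve precisely.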

“This indicates the detector works as it should,” says Gunther Roland, professor of physics at MIT, who is a member of and former spokesperson for the sPHENIX Collaboration. “It’s as if you sent a new telescope up into space after you’ve spent 10 years building it, and it snaps the first picture. It’s not necessarily a picture of something completely new, but it proves that it’s now ready to start doing new science.”

“With this strong foundation, sPHENIX is well-positioned to advance the study of the quark-gluon plasma with greater precision and improved resolution,” adds Hao-Ren Jheng, a graduate student in physics at MIT and a lead co-author of the new paper. “Probing the evolution, structure, and properties of the QGP will help us reconstruct the conditions of the early universe.”

The paper’s co-authors are all members of the sPHENIX Collaboration, which comprises over 300 scientists from multiple institutions around the world, including Roland, Jheng, and physicists at MIT’s Bates Research and Engineering Center.

“Gone in an instant”

Particle colliders such as Brookhaven’s RHIC are designed to accelerate particles at “relativistic” speeds, meaning close to the speed of light. When these particles are flung around in opposite, circulating beams and brought back together, any smash-ups that occur can release an enormous amount of energy. In the right conditions, this energy can very briefly exist in the form of quark-gluon plasma — the same stuff that sprung out of the Big Bang.

Just as in the early universe, quark-gluon plasma doesn’t hang around for very long in particle colliders. If and when QGP is produced, it exists for just 10 to the minus 22 seconds — less than a sextillionth of a second. In this moment, quark-gluon plasma is incredibly hot, up to several trillion degrees Celsius, and behaves as a “perfect fluid,” moving as one entity rather than as a collection of random particles. Almost immediately, this exotic behavior disappears, and the plasma cools and transitions into more ordinary particles such as protons and neutrons, which stream out from the main collision.

“You never see the QGP itself — you just see its ashes, so to speak, in the form of the particles that come from its decay,” Roland says. “With sPHENIX, we want to measure these particles to reconstruct the properties of the QGP, which is essentially gone in an instant.”

“One in a billion”

The sPHENIX detector is the next generation of Brookhaven’s original Pioneering High Energy Nuclear Interaction eXperiment, or PHENIX, which measured collisions of heavy ions generated by RHIC. In 2021, sPHENIX was installed in place of its predecessor, as a faster and more powerful version, designed to detect quark-gluon plasma’s more subtle and ephemeral signatures.

The detector itself is about the size of a two-story house and weighs around 1,000 tons. It sits at the intersection of RHIC’s two main collider beams, where relativistic particles, accelerated from opposite directions, meet and collide, producing particles that fly out into the detector. The sPHENIX detector is able to catch and measure 15,000 particle collisions per second, thanks to its novel, layered components, including the MVTX, or micro-vertex — a subdetector that was designed, built, and installed by scientists at MIT’s Bates Research and Engineering Center.

Together, the detector’s systems enable sPHENIX to act as a giant 3D camera that can track the number, energy, and paths of individual particles during an explosion of particles generated by a single collision.

“sPHENIX takes advantage of developments in detector technology since RHIC switched on 25 years ago, to collect data at the fastest possible rate,” says MIT postdoc Cameron Dean, who was a main contributor to the new study’s analysis. “This allows us to probe incredibly rare processes for the first time.”

In the fall of 2024, scientists ran the detector through the “standard candle” test to gauge its speed and precision. Over three weeks, they gathered data from sPHENIX as the main collider accelerated and smashed together beams of gold ions traveling at nearly the speed of light. Their analysis of the data showed that sPHENIX accurately measured the number of charged particles produced in individual gold ion collisions, as well as the particles’ energies. What’s more, the detector was sensitive to a collision’s “head-on-ness,” and could observe that head-on collisions produced more particles with greater energy, compared to less direct collisions.

“This measurement provides clear evidence that the detector is functioning as intended,” Jheng says.

“The fun for sPHENIX is just beginning,” Dean adds. “We are currently back colliding particles and expect to do so for several more months. With all our data, we can look for the one-in-a-billion rare process that could give us insights on things like the density of QGP, the diffusion of particles through ultra-dense matter, and how much energy it takes to bind different particles together.”

This work was supported, in part, by the U.S. Department of Energy Office of Science, and the National Science Foundation.


Locally produced proteins help mitochondria function

Researchers developed an approach to study where proteins get made, and characterized proteins produced near mitochondria, gaining potential insights into mitochondrial function and disease.


Our cells produce a variety of proteins, and in many cases a protein’s role requires it to be in a particular part of the cell. One of the ways that cells ensure certain proteins end up in the right location at the right time is through localized translation, a process that ensures proteins are made — or translated — close to where they will be needed. MIT professor of biology and Whitehead Institute for Biomedical Research member Jonathan Weissman and colleagues have studied localized translation in order to understand how it affects cell functions and allows cells to quickly respond to changing conditions.

Now, Weissman, who is also a Howard Hughes Medical Institute Investigator, and postdoc in his lab Jingchuan Luo have expanded our knowledge of localized translation at mitochondria, structures that generate energy for the cell. In an open-access paper published today in Cell, they share a new tool, LOCL-TL, for studying localized translation in close detail, and describe the discoveries it enabled about two classes of proteins that are locally translated at mitochondria.

The importance of localized translation at mitochondria relates to their unusual origin. Mitochondria were once bacteria that lived within our ancestors’ cells. Over time, the bacteria lost their autonomy and became part of the larger cells, a transition that included migrating most of their genes into the larger cell’s genome in the nucleus. Cells evolved processes to ensure that proteins needed by mitochondria that are encoded in genes in the larger cell’s genome get transported to the mitochondria. Mitochondria retain a few genes in their own genome, so production of proteins from the mitochondrial genome and that of the larger cell’s genome must be coordinated to avoid mismatched production of mitochondrial parts. Localized translation may help cells to manage the interplay between mitochondrial and nuclear protein production — among other purposes.

How to detect local protein production

For a protein to be made, genetic code stored in DNA is read into RNA, and then the RNA is read or translated by a ribosome, a cellular machine that builds a protein according to the RNA code. Weissman’s lab previously developed a method to study localized translation by tagging ribosomes near a structure of interest, and then capturing the tagged ribosomes in action and observing the proteins they are making. This approach, called proximity-specific ribosome profiling, allows researchers to see what proteins are being made where in the cell. The challenge that Luo faced was how to tweak this method to capture only ribosomes at work near mitochondria.

Ribosomes work quickly, so a ribosome that gets tagged while making a protein at the mitochondria can move on to making other proteins elsewhere in the cell in a matter of minutes. The only way researchers can guarantee that the ribosomes they capture are still working on proteins made near the mitochondria is if the experiment happens very quickly.

Weissman and colleagues had previously solved this time sensitivity problem in yeast cells with a ribosome-tagging tool called BirA that is activated by the presence of the molecule biotin. BirA is fused to the cellular structure of interest, and tags ribosomes it can touch — but only once activated. Researchers keep the cell depleted of biotin until they are ready to capture the ribosomes, to limit the time when tagging occurs. However, this approach does not work with mitochondria in mammalian cells because they need biotin to function normally, so it cannot be depleted.

Luo and Weissman adapted the existing tool to respond to blue light instead of biotin. The new tool, LOV-BirA, is fused to the mitochondrion’s outer membrane. Cells are kept in the dark until the researchers are ready. Then they expose the cells to blue light, activating LOV-BirA to tag ribosomes. They give it a few minutes and then quickly extract the ribosomes. This approach proved very accurate at capturing only ribosomes working at mitochondria.

The researchers then used a method originally developed by the Weissman lab to extract the sections of RNA inside of the ribosomes. This allows them to see exactly how far along in the process of making a protein the ribosome is when captured, which can reveal whether the entire protein is made at the mitochondria, or whether it is partly produced elsewhere and only gets completed at the mitochondria.

“One advantage of our tool is the granularity it provides,” Luo says. “Being able to see what section of the protein is locally translated helps us understand more about how localized translation is regulated, which can then allow us to understand its dysregulation in disease and to control localized translation in future studies.”

Two protein groups are made at mitochondria

Using these approaches, the researchers found that about 20 percent of the genes needed in mitochondria, but located in the main cellular genome, are locally translated at mitochondria. These proteins can be divided into two distinct groups with different evolutionary histories and mechanisms for localized translation.

One group consists of relatively long proteins, each containing more than 400 amino acids or protein building blocks. These proteins tend to be of bacterial origin — present in the ancestor of mitochondria — and they are locally translated in both mammalian and yeast cells, suggesting that their localized translation has been maintained through a long evolutionary history.

Like many mitochondrial proteins encoded in the nucleus, these proteins contain a mitochondrial targeting sequence (MTS), a ZIP code that tells the cell where to bring them. The researchers discovered that most proteins containing an MTS also contain a nearby inhibitory sequence that prevents transportation until they are done being made. This group of locally translated proteins lacks the inhibitory sequence, so they are brought to the mitochondria during their production.

Production of these longer proteins begins anywhere in the cell, and then after approximately the first 250 amino acids are made, they get transported to the mitochondria. While the rest of the protein gets made, it is simultaneously fed into a channel that brings it inside the mitochondrion. This ties up the channel for a long time, limiting import of other proteins, so cells can only afford to do this simultaneous production and import for select proteins. The researchers hypothesize that these bacterial-origin proteins are given priority as an ancient mechanism to ensure that they are accurately produced and placed within mitochondria.

The second locally translated group consists of short proteins, each less than 200 amino acids long. These proteins are more recently evolved, and correspondingly, the researchers found that the mechanism for their localized translation is not shared by yeast. Their mitochondrial recruitment happens at the RNA level: two sequences within regulatory, non-protein-coding sections of each RNA molecule act as signals that the cell’s machinery uses to recruit the RNAs to the mitochondria.

The researchers searched for molecules that might be involved in this recruitment, and identified the RNA binding protein AKAP1, which exists at mitochondria. When they eliminated AKAP1, the short proteins were translated indiscriminately around the cell. This provided an opportunity to learn more about the effects of localized translation, by seeing what happens in its absence. When the short proteins were not locally translated, this led to the loss of various mitochondrial proteins, including those involved in oxidative phosphorylation, our cells’ main energy generation pathway.

In future research, Weissman and Luo will delve deeper into how localized translation affects mitochondrial function and dysfunction in disease. The researchers also intend to use LOCL-TL to study localized translation in other cellular processes, including in relation to embryonic development, neural plasticity, and disease.

“This approach should be broadly applicable to different cellular structures and cell types, providing many opportunities to understand how localized translation contributes to biological processes,” Weissman says. “We’re particularly interested in what we can learn about the roles it may play in diseases including neurodegeneration, cardiovascular diseases, and cancers.”


Professor Emeritus Rainer Weiss, influential physicist who forged new paths to understanding the universe, dies at 92

The longtime MIT professor shared a Nobel Prize for his role in developing the LIGO observatory and detecting gravitational waves.


MIT Professor Emeritus Rainer Weiss ’55, PhD ’62, a renowned experimental physicist and Nobel laureate whose groundbreaking work confirmed a longstanding prediction about the nature of the universe, passed away on Aug. 25. He was 92.

Weiss conceived of the Laser Interferometer Gravitational-Wave Observatory (LIGO) for detecting ripples in space-time known as gravitational waves, and was later a leader of the team that built LIGO and achieved the first-ever detection of gravitational waves. He shared the Nobel Prize in Physics for this work in 2017. Together with international collaborators, he and his colleagues at LIGO would go on to detect many more of these cosmic reverberations, opening up a new way for scientists to view the universe.

During his remarkable career, Weiss also developed a more precise atomic clock and figured out how to measure the spectrum of the cosmic microwave background via a weather balloon. He later co-founded and advanced the NASA Cosmic Background Explorer project, whose measurements helped support the Big Bang theory describing the expansion of the universe.

“Rai leaves an indelible mark on science and a gaping hole in our lives,” says Nergis Mavalvala PhD ’97, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. As a doctoral student with Weiss in the 1990s, Mavalvala worked with him to build an early prototype of a gravitational-wave detector as part of her PhD thesis. “He will be so missed but has also gifted us a singular legacy. Every gravitational wave event we observe will remind us of him, and we will smile. I am indeed heartbroken, but also so grateful for having him in my life, and for the incredible gifts he has given us — of passion for science and discovery, but most of all to always put people first,” she says.

A member of the MIT physics faculty since 1964, Weiss was known as a committed mentor and teacher, as well as a dedicated researcher. 

“Rai’s ingenuity and insight as an experimentalist and a physicist were legendary,” says Deepto Chakrabarty, the William A. M. Burden Professor in Astrophysics and head of the Department of Physics. “His no-nonsense style and gruff manner belied a very close, supportive and collaborative relationship with his students, postdocs, and other mentees. Rai was a thoroughly MIT product.”

“Rai held a singular position in science: He was the creator of two fields — measurements of the cosmic microwave background and of gravitational waves. His students have gone on to lead both fields and carried Rai’s rigor and decency to both. He not only created a huge part of important science, he also populated them with people of the highest caliber and integrity,” says Peter Fisher, the Thomas A. Frank Professor of Physics and former head of the physics department.

Enabling a new era in astrophysics

LIGO is a system of two identical detectors located 1,865 miles apart. By sending finely tuned lasers back and forth through the detectors, scientists can detect perturbations caused by gravitational waves, whose existence was proposed by Albert Einstein. These discoveries illuminate ancient collisions and other events in the early universe, and have confirmed Einstein’s theory of general relativity. Today, the LIGO Scientific Collaboration involves hundreds of scientists at MIT, Caltech, and other universities, and, together with the Virgo and KAGRA observatories in Italy and Japan, makes up the global LVK Collaboration — but five decades ago, the instrument concept was an MIT class exercise conceived by Weiss.

As he told MIT News in 2017, in generating the initial idea, Weiss wondered: “What’s the simplest thing I can think of to show these students that you could detect the influence of a gravitational wave?”

To realize the audacious design, Weiss teamed up in 1976 with physicist Kip Thorne, who, based in part on conversations with Weiss, soon seeded the creation of a gravitational wave experiment group at Caltech. The two formed a collaboration between MIT and Caltech, and in 1979, the late Scottish physicist Ronald Drever, then of the University of Glasgow, joined the effort at Caltech. The three scientists — who became the co-founders of LIGO — worked to refine the dimensions and scientific requirements for an instrument sensitive enough to detect a gravitational wave. Barry Barish later joined the team at Caltech, helping to secure funding and bring the detectors to completion.

After receiving support from the National Science Foundation, LIGO broke ground in the mid-1990s, constructing interferometric detectors in Hanford, Washington, and in Livingston, Louisiana. 

Years later, when he shared the Nobel Prize with Thorne and Barish for his work on LIGO, Weiss noted that hundreds of colleagues had helped to push forward the search for gravitational waves.

“The discovery has been the work of a large number of people, many of whom played crucial roles,” Weiss said at an MIT press conference. “I view receiving this [award] as sort of a symbol of the various other people who have worked on this.”

He continued: “This prize and others that are given to scientists is an affirmation by our society of [the importance of] gaining information about the world around us from reasoned understanding of evidence.”

“While I have always been amazed and guided by Rai’s ingenuity, integrity, and humility, I was most impressed by his breadth of vision and ability to move between worlds,” says Matthew Evans, the MathWorks Professor of Physics. “He could seamlessly shift from the smallest technical detail of an instrument to the global vision for a future observatory. In the last few years, as the idea for a next-generation gravitational-wave observatory grew, Rai would often be at my door, sharing ideas for how to move the project forward on all levels. These discussions ranged from quantum mechanics to global politics, and Rai’s insights and efforts have set the stage for the future.”

A lifelong fascination with hard problems

Weiss was born in 1932 in Berlin. His family fled Nazi Germany to Prague and then emigrated to New York City, where Weiss grew up with a love for classical music and electronics, earning money by fixing radios.

He enrolled at MIT, dropped out of school in his junior year, and returned shortly after, taking a job as a technician in the former Building 20. There, Weiss met physicist Jerrold Zacharias, who encouraged him as he finished his undergraduate degree in 1955 and his PhD in 1962.

Weiss spent some time at Princeton University as a postdoc in the legendary group led by Robert Dicke, where he developed experiments to test gravity. He returned to MIT as an assistant professor in 1964, starting a new group in the Research Laboratory of Electronics dedicated to research in cosmology and gravitation.

With the money he received from the Nobel Prize, Weiss established the Barish-Weiss Fellowship to support student research in the MIT Department of Physics.

Weiss received numerous awards and honors in addition to the Nobel Prize, including the Medaille de l’ADION, the 2006 Gruber Prize in Cosmology, and the 2007 Einstein Prize of the American Physical Society. He was a fellow of the American Association for the Advancement of Science, the American Academy of Arts and Sciences, and the American Physical Society, as well as a member of the National Academy of Sciences. In 2016, Weiss received a Special Breakthrough Prize in Fundamental Physics, the Gruber Prize in Cosmology, the Shaw Prize in Astronomy, and the Kavli Prize in Astrophysics, all shared with Drever and Thorne. He also shared the Princess of Asturias Award for Technical and Scientific Research with Thorne, Barry Barish of Caltech, and the LIGO Scientific Collaboration.

Weiss is survived by his wife, Rebecca; his daughter, Sarah, and her husband, Tony; his son, Benjamin, and his wife, Carla; and a grandson, Sam, and his wife, Constance. Details about a memorial are forthcoming.



Simpler models can outperform deep learning at climate prediction

New research shows the natural variability in climate data can cause AI models to struggle at predicting local temperature and rainfall.


Environmental scientists are increasingly using enormous artificial intelligence models to make predictions about changes in weather and climate, but a new study by MIT researchers shows that bigger models are not always better.

The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models.

Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case.

The researchers developed a more robust way of evaluating these techniques, which shows that, while simple models are more accurate when estimating regional surface temperatures, deep-learning approaches can be the best choice for estimating local rainfall.

They used these results to enhance a simulation tool known as a climate emulator, which can rapidly simulate the effects of human activities on the future climate.

The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science contains a proven set of physical laws and approximations, and the challenge becomes how to incorporate those into AI models.

“We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and director of the Center for Sustainability Science and Strategy.

Selin’s co-authors are lead author Björn Lütjens, a former EAPS postdoc who is now a research scientist at IBM Research; senior author Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in EAPS and co-director of the Lorenz Center; and Duncan Watson-Parris, assistant professor at the University of California at San Diego. Selin and Ferrari are also co-principal investigators of the Bringing Computation to the Climate Challenge project, out of which this research emerged. The paper appears today in the Journal of Advances in Modeling Earth Systems.

Comparing emulators

Because the Earth’s climate is so complex, running a state-of-the-art climate model to predict how pollution levels will impact environmental factors like temperature can take weeks on the world’s most powerful supercomputers.

Scientists often create climate emulators, simpler approximations of a state-of-the-art climate model, which are faster and more accessible. A policymaker could use a climate emulator to see how alternative assumptions on greenhouse gas emissions would affect future temperatures, helping them develop regulations.

But an emulator isn’t very useful if it makes inaccurate predictions about the local impacts of climate change. While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches.

The MIT researchers performed such a study. They compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model using a common benchmark dataset for evaluating climate emulators.
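Linear pattern scaling itself is simple enough to sketch in a few lines. The snippet below is a minimal, illustrative implementation, not the study's code (the function names and data shapes are our own assumptions): each grid cell's local value is regressed linearly against global mean temperature over the training runs, and the fitted per-cell pattern is then used to emulate local fields at new warming levels.

```python
import numpy as np

def fit_lps(global_mean_temp, local_fields):
    """Fit linear pattern scaling (LPS).

    global_mean_temp: shape (n_samples,) global-mean surface temperature
    local_fields: shape (n_samples, n_cells) local variable per grid cell
    Returns per-cell (slope, intercept) arrays of shape (n_cells,).
    """
    # Center both series, then compute the least-squares slope per cell.
    x = global_mean_temp - global_mean_temp.mean()
    y = local_fields - local_fields.mean(axis=0)
    slope = (x @ y) / (x @ x)
    intercept = local_fields.mean(axis=0) - slope * global_mean_temp.mean()
    return slope, intercept

def predict_lps(global_mean_temp, slope, intercept):
    """Emulate local fields for new global-mean temperatures."""
    return np.outer(global_mean_temp, slope) + intercept
```

Because the fit regresses against the global mean, short-lived internal oscillations in the training data tend to average out — which is consistent with the benchmarking behavior the researchers describe below.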

Their results showed that LPS outperformed deep-learning models on predicting nearly all parameters they tested, including temperature and precipitation.

“Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens.

Some initial results seemed to fly in the face of the researchers’ domain knowledge. The powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern.

They found that the high amount of natural variability in climate model runs can cause the deep-learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of LPS, which averages out those oscillations.

Constructing a new evaluation

From there, the researchers constructed a new evaluation with more data that address natural climate variability. With this new evaluation, the deep-learning model performed slightly better than LPS for local precipitation, but LPS was still more accurate for temperature predictions.

“It is important to use the modeling tool that is right for the problem, but in order to do that you also have to set up the problem the right way in the first place,” Selin says.

Based on these results, the researchers incorporated LPS into a climate emulation platform to predict local temperature changes in different emission scenarios.

“We are not advocating that LPS should always be the goal. It still has limitations. For instance, LPS doesn’t predict variability or extreme weather events,” Ferrari adds.

Rather, they hope their results emphasize the need to develop better benchmarking techniques, which could provide a fuller picture of which climate emulation technique is best suited for a particular situation.

“With an improved climate emulation benchmark, we could use more complex machine-learning methods to explore problems that are currently very hard to address, like the impacts of aerosols or estimations of extreme precipitation,” Lütjens says.

Ultimately, more accurate benchmarking techniques will help ensure policymakers are making decisions based on the best available information.

The researchers hope others build on their analysis, perhaps by studying additional improvements to climate emulation methods and benchmarks. Such research could explore impact-oriented metrics like drought indicators and wildfire risks, or new variables like regional wind speeds.

This research is funded, in part, by Schmidt Sciences, LLC, and is part of the MIT Climate Grand Challenges team for “Bringing Computation to the Climate Challenge.”


Transforming boating, with solar power

Solar electric vehicle pioneer James Worden ’89 brought his prototype solar electric boat to MIT to talk shop with students and share his vision for solar-powered boats.


The MIT Sailing Pavilion hosted an altogether different marine vessel recently: a prototype of a solar electric boat developed by James Worden ’89, the founder of the MIT Solar Electric Vehicle Team (SEVT). Worden visited the pavilion on a sizzling, sunny day in late July to offer students from the SEVT, the MIT Edgerton Center, MIT Sea Grant, and the broader community an inside look at the Anita, named for his late wife.

Worden’s fascination with solar power began at age 10, when he picked up a solar chip at a “hippy-like” conference in his hometown of Arlington, Massachusetts. “My eyes just lit up,” he says. He built his first solar electric vehicle in high school, fashioned out of cardboard and wood (taking first place at the 1984 Massachusetts Science Fair), and continued his journey at MIT, founding SEVT in 1986. It was through SEVT that he met his wife and lifelong business partner, Anita Rajan Worden ’90. Together, they founded two companies in the solar electric and hybrid vehicles space, and in 2022 launched a solar electric boat company.

On the Charles River, Worden took visitors for short rides on Anita, including a group of current SEVT students who peppered him with questions. The 20-foot pontoon boat, just 12 feet wide and 7 feet tall, is made of carbon fiber composites, single crystalline solar photovoltaic cells, and lithium iron phosphate battery cells. Ultimately, Worden envisions the prototype could have applications as mini-ferry boats and water taxis.

With warmth and humor, he drew parallels between the boat’s components and mechanics and those of the solar cars the students are building. “It’s fun! If you think about all the stuff you guys are doing, it’s all the same stuff,” he told them, “optimizing all the different systems and making them work.” He also explained the design considerations unique to boating applications, like refining the hull shape for efficiency and maneuverability in variable water and wind conditions, and the critical importance of protecting wiring and controls from open water and condensate.

“Seeing Anita in all its glory was super cool,” says Nicole Lin, vice captain of SEVT. “When I first saw it, I could immediately map the different parts of the solar car to its marine counterparts, which was astonishing to see how far I’ve come as an engineer with SEVT. James also explained the boat using solar car terms, as he drew on his experience with solar cars for his solar boats. It blew my mind to see the engineering we learned with SEVT in action.”

Over the years, the Wordens have been avid supporters of SEVT and the Edgerton Center, so the visit was, in part, a way to pay it forward to MIT. “There’s a lot of connections,” he says. He’s still awed by the fact that Harold “Doc” Edgerton, upon learning about his interest in building solar cars, carved out a lab space for him to use in Building 20 — as a first-year student. And a few years ago, as Worden became interested in marine vessels, he tapped Sea Grant Education Administrator Drew Bennett for a 90-minute whiteboard lecture, “MIT fire-hose style,” on hydrodynamics. “It was awesome!” he says.


Imaging tech promises deepest looks yet into living brain tissue at single-cell resolution

By combining several cutting-edge imaging technologies, a new microscope system could enable unprecedentedly deep and precise visualization of metabolic and neuronal activity, potentially even in humans.


For both research and medical purposes, researchers have spent decades pushing the limits of microscopy to produce ever deeper and sharper images of brain activity, not only in the cortex but also in regions underneath, such as the hippocampus. In a new study, a team of MIT scientists and engineers demonstrates a new microscope system capable of peering exceptionally deep into brain tissues to detect the molecular activity of individual cells by using sound.

“The major advance here is to enable us to image deeper at single-cell resolution,” says neuroscientist Mriganka Sur, a corresponding author along with mechanical engineering professor Peter So and principal research scientist Brian Anthony. Sur is the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT.

In the journal Light: Science and Applications, the team demonstrates that they could detect NAD(P)H, a molecule tightly associated with cell metabolism in general and electrical activity in neurons in particular, all the way through samples such as a 1.1-millimeter “cerebral organoid,” a 3D mini brain-like tissue generated from human stem cells, and a 0.7-millimeter-thick slice of mouse brain tissue.

In fact, says co-lead author and mechanical engineering postdoc W. David Lee, who conceived the microscope’s innovative design, the system could have peered far deeper, but the test samples weren’t big enough to demonstrate that.

“That’s when we hit the glass on the other side,” he says. “I think we’re pretty confident about going deeper.”

Still, a depth of 1.1 millimeters is more than five times deeper than other microscope technologies can resolve NAD(P)H within dense brain tissue. The new system achieved the depth and sharpness by combining several advanced technologies to precisely and efficiently excite the molecule and then to detect the resulting energy, all without having to add any external labels, either via added chemicals or genetically engineered fluorescence.

Rather than focusing the required NAD(P)H excitation energy on a neuron with near ultraviolet light at its normal peak absorption, the scope accomplishes the excitation by focusing an intense, extremely short burst of light (a quadrillionth of a second long) at three times the normal absorption wavelength. Such “three-photon” excitation penetrates deep into tissue with less scattering by brain tissue because of the longer wavelength of the light (“like fog lamps,” Sur says). Meanwhile, although the excitation produces a weak fluorescent signal of light from NAD(P)H, most of the absorbed energy produces a localized (about 10 microns) thermal expansion within the cell, which produces sound waves that travel relatively easily through tissue compared to the fluorescence emission. A sensitive ultrasound microphone in the microscope detects those waves and, with enough sound data, software turns them into high-resolution images (much like a sonogram does). Imaging created in this way is “three-photon photoacoustic imaging.”

“We merged all these techniques — three-photon, label-free, photoacoustic detection,” says co-lead author Tatsuya Osaki, a research scientist in the Picower Institute in Sur’s lab. “We integrated all these cutting-edge techniques into one process to establish this ‘Multiphoton-In and Acoustic-Out’ platform.”

Lee and Osaki combined with research scientist Xiang Zhang and postdoc Rebecca Zubajlo to lead the study, in which the team demonstrated reliable detection of the sound signal through the samples. So far, the team has produced visual images from the sound at various depths as they refine their signal processing.

In the study, the team also shows simultaneous “third-harmonic generation” imaging, which comes from the three-photon stimulation and finely renders cellular structures, alongside their photoacoustic imaging, which detects NAD(P)H. They also note that their photoacoustic method could detect other molecules, such as GCaMP, a genetically encoded calcium indicator that neuroscientists use to report neural electrical activity.

With the concept of label-free, multiphoton, photoacoustic microscopy (LF-MP-PAM) established in the paper, the team is now looking ahead to neuroscience and clinical applications.

For instance, through the company Precision Healing, Inc., which he founded and sold, Lee has already established that NAD(P)H imaging can inform wound care. In the brain, levels of the molecule are known to vary in conditions such as Alzheimer’s disease, Rett syndrome, and seizures, making it a potentially valuable biomarker. Because the new system is label-free (i.e., no added chemicals or altered genes), it could be used in humans, for instance, during brain surgeries.

The next step for the team is to demonstrate the system in a living animal, rather than only in vitro and ex vivo tissues. The technical challenge there is that the microphone can no longer be on the opposite side of the sample from the light source (as it was in the current study). It has to be on top, just like the light source.

Lee says he expects that full imaging at depths of 2 millimeters in live brains is entirely feasible, given the results in the new study.

“In principle, it should work,” he says.

Mercedes Balcells and Elazer Edelman are also authors of the paper. Funding for the research came from sources including the National Institutes of Health, the Simons Center for the Social Brain, the lab of Peter So, The Picower Institute for Learning and Memory, and the Freedom Together Foundation.


Astronomers detect the brightest fast radio burst of all time

The dazzling “RBFLOAT” radio burst, originating in a nearby galaxy, offers the clearest view yet of the environment around these mysterious flashes.


A fast radio burst is an immense flash of radio emission that lasts for just a few milliseconds, during which it can momentarily outshine every other radio source in its galaxy. These flares can be so bright that their light can be seen from halfway across the universe, several billion light years away.

The sources of these brief and dazzling signals are unknown. But scientists now have a chance to study a fast radio burst (FRB) in unprecedented detail. An international team of scientists, including physicists at MIT, has detected a nearby, ultrabright fast radio burst some 130 million light-years from Earth in the constellation Ursa Major. It is one of the closest FRBs detected to date. It is also the brightest — so bright that the signal has garnered the informal moniker RBFLOAT, for “radio brightest flash of all time.”

The burst’s brightness, paired with its proximity, is giving scientists the closest look yet at FRBs and the environments from which they emerge.

“Cosmically speaking, this fast radio burst is just in our neighborhood,” says Kiyoshi Masui, associate professor of physics and affiliate of MIT’s Kavli Institute for Astrophysics and Space Research. “This means we get this chance to study a pretty normal FRB in exquisite detail.”

Masui and his colleagues report their findings today in the Astrophysical Journal Letters.

Diverse bursts

The clarity of the new detection is thanks to a significant upgrade to the Canadian Hydrogen Intensity Mapping Experiment (CHIME), a large array of halfpipe-shaped antennae based in British Columbia. CHIME was originally designed to detect and map the distribution of hydrogen across the universe, but the telescope is also sensitive to ultrafast and bright radio emissions. Since it started observations in 2018, CHIME has detected about 4,000 fast radio bursts from all parts of the sky. Until now, however, the telescope had not been able to precisely pinpoint the location of each fast radio burst.

CHIME recently got a significant boost in precision, in the form of CHIME Outriggers — three miniature versions of CHIME, each sited in different parts of North America. Together, the telescopes work as one continent-sized system that can focus in on any bright flash that CHIME detects, to pin down its location in the sky with extreme precision.

“Imagine we are in New York and there’s a firefly in Florida that is bright for a thousandth of a second, which is usually how quick FRBs are,” says MIT Kavli graduate student Shion Andrew. “Localizing an FRB to a specific part of its host galaxy is analogous to figuring out not just what tree the firefly came from, but which branch it’s sitting on.”

The new fast radio burst is the first detection made using the combination of CHIME and the completed CHIME Outriggers. Together, the telescope array identified the FRB and determined not only the specific galaxy, but also the region of the galaxy from where the burst originated. It appears that the burst arose from the edge of the galaxy, just outside of a star-forming region. The precise localization of the FRB is allowing scientists to study the environment around the signal for clues to what brews up such bursts.

“As we’re getting these much more precise looks at FRBs, we’re better able to see the diversity of environments they’re coming from,” says MIT physics postdoc Adam Lanman.

Lanman, Andrew, and Masui are members of the CHIME Collaboration — which includes scientists from multiple institutions around the world — and are authors of the new paper detailing the detection.

An older edge

Each of CHIME’s Outrigger stations continuously monitors the same swath of sky as the parent CHIME array. Both CHIME and the Outriggers “listen” for radio flashes at incredibly short, millisecond timescales. Even a few minutes of such high-rate monitoring adds up to a huge volume of data, so if CHIME detects no FRB signal, the Outriggers automatically delete the last 40 seconds of data to make room for the next span of measurements.
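
The trigger-and-dump scheme described above — keep a short rolling window, discard it if nothing interesting happens, freeze it when a detection fires — can be sketched with a toy ring buffer. This is an illustrative sketch only, not the actual CHIME pipeline; the sample rate here is invented and is far below real telescope data rates.

```python
from collections import deque

# Toy sketch of a triggered capture buffer. The station keeps a rolling
# window of recent samples; on a trigger the window is saved, otherwise
# old samples silently expire as new ones arrive.
WINDOW_SECONDS = 40
SAMPLES_PER_SECOND = 10          # hypothetical, far below real data rates

buffer = deque(maxlen=WINDOW_SECONDS * SAMPLES_PER_SECOND)
saved_dumps = []

def record(sample, triggered=False):
    """Append one sample; if a detection fires, freeze the current window."""
    buffer.append(sample)
    if triggered:
        saved_dumps.append(list(buffer))   # persist the last ~40 s of data

# Simulate a minute of quiet sky, then a bright flash.
for _ in range(60 * SAMPLES_PER_SECOND):
    record(sample=0.0)
record(sample=9000.0, triggered=True)       # FRB-like spike

print(len(buffer))          # only the most recent window is retained
print(len(saved_dumps[0]))  # the frozen dump spans the full window
```

The `deque` with a `maxlen` does the deletion automatically: appending beyond the window length drops the oldest sample, so quiet data is never stored for longer than the window.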

On March 16, 2025, CHIME detected an ultrabright flash of radio emissions, which automatically triggered the CHIME Outriggers to record the data. Initially, the flash was so bright that astronomers were unsure whether it was an FRB or simply a terrestrial event caused, for instance, by a burst of cellular communications.

That notion was put to rest as the CHIME Outrigger telescopes homed in on the flash and pinned down its location to NGC 4141 — a spiral galaxy in the constellation Ursa Major about 130 million light-years away, which is, cosmically speaking, surprisingly close to our own Milky Way. The detection is one of the closest and brightest fast radio bursts detected to date.

Follow-up observations in the same region revealed that the burst came from the very edge of an active region of star formation. While it’s still a mystery what sources produce FRBs, scientists’ leading hypothesis points to magnetars — young neutron stars with extremely powerful magnetic fields that can spin out high-energy flares across the electromagnetic spectrum, including in the radio band. Physicists suspect that magnetars are found in the center of star-forming regions, where the youngest, most active stars are forged. The location of the new FRB, just outside a star-forming region in its galaxy, may suggest that the source of the burst is a slightly older magnetar.

“These are mostly hints,” Masui says. “But the precise localization of this burst is letting us dive into the details of how old an FRB source could be. If it were right in the middle, it would only be thousands of years old — very young for a star. This one, being on the edge, may have had a little more time to bake.”

No repeats

In addition to pinpointing where the new FRB was in the sky, the scientists also looked back through CHIME data to see whether any similar flares occurred in the same region in the past. Since the first FRB was discovered in 2007, astronomers have detected over 4,000 radio flares. Most of these bursts are one-offs. But a few percent have been observed to repeat, flashing every so often. And an even smaller fraction of these repeaters flash in a pattern, like a rhythmic heartbeat, before flaring out. A central question surrounding fast radio bursts is whether repeaters and nonrepeaters come from different origins.

The scientists looked through CHIME’s six years of data and came up empty: This new FRB appears to be a one-off, at least in the last six years. The findings are particularly exciting, given the burst’s proximity. Because it is so close and so bright, scientists can probe the environment in and around the burst for clues to what might produce a nonrepeating FRB.

“Right now we’re in the middle of this story of whether repeating and nonrepeating FRBs are different. These observations are putting together bits and pieces of the puzzle,” Masui says.

“There’s evidence to suggest that not all FRB progenitors are the same,” Andrew adds. “We’re on track to localize hundreds of FRBs every year. The hope is that a larger sample of FRBs localized to their host environments can help reveal the full diversity of these populations.”

The construction of the CHIME Outriggers was funded by the Gordon and Betty Moore Foundation and the U.S. National Science Foundation. The construction of CHIME was funded by the Canada Foundation for Innovation and the provinces of Quebec, Ontario, and British Columbia.


Learning from punishment

A new computational model makes sense of the cognitive processes humans use to evaluate punishment.


From toddlers’ timeouts to criminals’ prison sentences, punishment reinforces social norms, making it known that an offender has done something unacceptable. At least, that is usually the intent — but the strategy can backfire. When a punishment is perceived as too harsh, observers can be left with the impression that an authority figure is motivated by something other than justice.

It can be hard to predict what people will take away from a particular punishment, because everyone makes their own inferences not just about the acceptability of the act that led to the punishment, but also the legitimacy of the authority who imposed it. A new computational model developed by scientists at MIT’s McGovern Institute for Brain Research makes sense of these complicated cognitive processes, recreating the ways people learn from punishment and revealing how their reasoning is shaped by their prior beliefs.

Their work, reported Aug. 4 in the journal PNAS, explains how a single punishment can send different messages to different people, and even strengthen the opposing viewpoints of groups who hold different opinions about authorities or social norms.

“The key intuition in this model is the fact that you have to be evaluating simultaneously both the norm to be learned and the authority who’s punishing,” says McGovern investigator and John W. Jarve Professor of Brain and Cognitive Sciences Rebecca Saxe, who led the research. “One really important consequence of that is even where nobody disagrees about the facts — everybody knows what action happened, who punished it, and what they did to punish it — different observers of the same situation could come to different conclusions.”

For example, she says, a child who is sent to timeout after biting a sibling might interpret the event differently than the parent. One might see the punishment as proportional and important, teaching the child not to bite. But if the biting, to the toddler, seemed a reasonable tactic in the midst of a squabble, the punishment might be seen as unfair, and the lesson will be lost.

People draw on their own knowledge and opinions when they evaluate these situations — but to study how the brain interprets punishment, Saxe and graduate student Setayesh Radkani wanted to take those personal ideas out of the equation. They needed a clear understanding of the beliefs that people held when they observed a punishment, so they could learn how different kinds of information altered their perceptions. So Radkani set up scenarios in imaginary villages where authorities punished individuals for actions that had no obvious analog in the real world.

Participants observed these scenarios in a series of experiments, with different information offered in each one. In some cases, for example, participants were told that the person being punished was either an ally or a competitor of the authority, whereas in other cases, the authority’s possible bias was left ambiguous.

“That gives us a really controlled setup to vary prior beliefs,” Radkani explains. “We could ask what people learn from observing punitive decisions with different severities, in response to acts that vary in their level of wrongness, by authorities that vary in their level of different motives.”

For each scenario, participants were asked to evaluate four factors: how much the authority figure cared about justice; the selfishness of the authority; the authority’s bias for or against the individual being punished; and the wrongness of the punished act. The research team asked these questions when participants were first introduced to the hypothetical society, then tracked how their responses changed after they observed the punishment. Across the scenarios, participants’ initial beliefs about the authority and the wrongness of the act shaped the extent to which those beliefs shifted after they observed the punishment.

Radkani was able to replicate these nuanced interpretations using a cognitive model framed around an idea that Saxe’s team has long used to think about how people interpret the actions of others. That is, to make inferences about others’ intentions and beliefs, we assume that people choose actions that they expect will help them achieve their goals.

To apply that concept to the punishment scenarios, Radkani developed a model that evaluates the meaning of a punishment (an action aimed at achieving a goal of the authority) by considering the harm associated with that punishment; its costs or benefits to the authority; and its proportionality to the violation. By assessing these factors, along with prior beliefs about the authority and the punished act, the model was able to predict people’s responses to the hypothetical punishment scenarios, supporting the idea that people use a similar mental model. “You need to have them consider those things, or you can’t make sense of how people understand punishment when they observe it,” Saxe says.

Even though the team designed their experiments to preclude preconceived ideas about the people and actions in their imaginary villages, not everyone drew the same conclusions from the punishments they observed. Saxe’s group found that participants’ general attitudes toward authority influenced their interpretation of events. Those with more authoritarian attitudes — assessed through a standard survey — tended to judge punished acts as more wrong and authorities as more motivated by justice than other observers.

“If we differ from other people, there’s a knee-jerk tendency to say, ‘either they have different evidence from us, or they’re crazy,’” Saxe says. Instead, she says, “It’s part of the way humans think about each other’s actions.”

“When a group of people who start out with different prior beliefs get shared evidence, they will not end up necessarily with shared beliefs. That’s true even if everybody is behaving rationally,” says Saxe.

This way of thinking also means that the same action can simultaneously strengthen opposing viewpoints. The Saxe lab’s modeling and experiments showed that when those viewpoints shape individuals’ interpretations of future punishments, the groups’ opinions will continue to diverge. For instance, a punishment that seems too harsh to a group who suspects an authority is biased can make that group even more skeptical of the authority’s future actions. Meanwhile, people who see the same punishment as fair and the authority as just will be more likely to conclude that the authority figure’s future actions are also just. 
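
The reasoning above — shared evidence, diverging conclusions — can be illustrated with a toy joint Bayesian update. This is a minimal sketch in the spirit of the paper’s model, not the model itself; all probabilities here are invented for illustration.

```python
# Two observers see the same harsh punishment but hold different priors
# about whether the authority is motivated by justice, so they reach
# different conclusions from identical evidence.

# Invented likelihood of a harsh punishment under each (authority, act) pair.
LIKELIHOOD = {
    ("just", "wrong"):    0.80,  # just authority punishing a real violation
    ("just", "benign"):   0.05,  # just authority rarely punishes harshly for nothing
    ("biased", "wrong"):  0.70,  # biased authority punishes regardless...
    ("biased", "benign"): 0.60,  # ...of whether the act was wrong
}

def update(p_just, p_wrong):
    """Joint Bayesian update after observing one harsh punishment."""
    prior = {
        ("just", "wrong"):    p_just * p_wrong,
        ("just", "benign"):   p_just * (1 - p_wrong),
        ("biased", "wrong"):  (1 - p_just) * p_wrong,
        ("biased", "benign"): (1 - p_just) * (1 - p_wrong),
    }
    joint = {h: prior[h] * LIKELIHOOD[h] for h in prior}
    z = sum(joint.values())
    post_just = (joint[("just", "wrong")] + joint[("just", "benign")]) / z
    post_wrong = (joint[("just", "wrong")] + joint[("biased", "wrong")]) / z
    return post_just, post_wrong

# Observer A trusts the authority; observer B is skeptical of it.
a_just, a_wrong = update(p_just=0.9, p_wrong=0.5)
b_just, b_wrong = update(p_just=0.2, p_wrong=0.5)
print(f"A: P(just)={a_just:.2f}, P(act wrong)={a_wrong:.2f}")
print(f"B: P(just)={b_just:.2f}, P(act wrong)={b_wrong:.2f}")
```

With these numbers, the trusting observer concludes the act was very likely wrong, while the skeptical observer attributes the same punishment to bias and barely revises the act's wrongness — both updating rationally from the same event.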

“You will get a vicious cycle of polarization, staying and actually spreading to new things,” says Radkani.

The researchers say their findings point toward strategies for communicating social norms through punishment. “It is exactly sensible in our model to do everything you can to make your action look like it’s coming out of a place of care for the long-term outcome of this individual, and that it’s proportional to the norm violation they did,” Saxe says. “That is your best shot at getting a punishment interpreted pedagogically, rather than as evidence that you’re a bully.”

Nevertheless, she says that won’t always be enough. “If the beliefs are strong the other way, it’s very hard to punish and still sustain a belief that you were motivated by justice.”

Joining Saxe and Radkani on the paper is Joshua Tenenbaum, MIT professor of brain and cognitive sciences. The study was funded, in part, by the Patrick J. McGovern Foundation.


A boost for the precision of genome editing

Researchers develop a fast-acting, cell-permeable protein system to control CRISPR-Cas9, reducing off-target effects and advancing gene therapy.


The U.S. Food and Drug Administration’s recent approval of the first CRISPR-Cas9–based gene therapy has marked a major milestone in biomedicine, validating genome editing as a promising treatment strategy for disorders like sickle cell disease, muscular dystrophy, and certain cancers.

CRISPR-Cas9, often likened to “molecular scissors,” allows scientists to cut DNA at targeted sites to snip, repair, or replace genes. But despite its power, Cas9 poses a critical safety risk: The active enzyme can linger in cells and cause unintended DNA breaks — so-called off-target effects — which may trigger harmful mutations in healthy genes.

Now, researchers in the labs of Ronald T. Raines, MIT professor of chemistry, and Amit Choudhary, professor of medicine at Harvard Medical School, have engineered a precise way to turn Cas9 off after its job is done — significantly reducing off-target effects and improving the clinical safety of gene editing. Their findings are detailed in a new paper published in the Proceedings of the National Academy of Sciences (PNAS).

“To ‘turn off’ Cas9 after it achieves its intended genome-editing outcome, we developed the first cell-permeable anti-CRISPR protein system,” says Raines, the Roger and Georges Firmenich Professor of Natural Products Chemistry. “Our technology reduces the off-target activity of Cas9 and increases its genome-editing specificity and clinical utility.”

The new tool — called LFN-Acr/PA — uses a protein-based delivery system to ferry anti-CRISPR proteins into human cells rapidly and efficiently. While natural Type II anti-CRISPR proteins (Acrs) are known to inhibit Cas9, their use in therapy has been limited because they’re often too bulky or charged to enter cells, and conventional delivery methods are too slow or ineffective.

LFN-Acr/PA overcomes these hurdles using a component derived from anthrax toxin to introduce Acrs into cells within minutes. Even at picomolar concentrations, the system shuts down Cas9 activity with remarkable speed and precision — boosting genome-editing specificity up to 40 percent.

Bradley L. Pentelute, MIT professor of chemistry, is an expert on the anthrax delivery system, and is also an author of the paper.

The implications of this advance are wide-ranging. With patent applications filed, LFN-Acr/PA represents a faster, safer, and more controllable means of harnessing CRISPR-Cas9, opening the door to more-refined gene therapies with fewer unintended consequences.

The research was supported by the National Institutes of Health and a Gilliam Fellowship from the Howard Hughes Medical Institute awarded to lead author Axel O. Vera, a graduate student in the Department of Chemistry.


Materials Research Laboratory: Driving interdisciplinary materials research at MIT

The MRL helps bring together academia, government, and industry to accelerate innovation in sustainability, energy, and advanced materials.


Materials research thrives across MIT, spanning disciplines and departments. Recent breakthroughs include strategies for securing sustainable supplies of nickel — critical to clean-energy technologies (Department of Materials Science and Engineering); the discovery of unexpected magnetism in atomically thin quantum materials (Department of Physics); and the development of adhesive coatings that reduce scarring around medical implants (departments of Mechanical Engineering and Civil and Environmental Engineering).

At the center of these efforts is the Materials Research Laboratory (MRL), a hub that connects and supports the Institute’s materials research community. “MRL serves as a home for the entire materials research community at MIT,” says C. Cem Tasan, who became director in April 2025. “Our goal is to make it easier for our faculty to conduct their extraordinary research,” adds Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering.

A storied history

Established in 2017, the MRL brings together more than 30 researchers and builds on a 48-year legacy of innovation. It was formed through the merger of the MIT Materials Processing Center (MPC) and the Center for Materials Science and Engineering (CMSE), two centers that helped lay the foundation for MIT’s global leadership in materials science.

Over the years, research supported by MPC and CMSE has led to transformative technologies and successful spinout companies. Notable examples include American Superconductor (AMSC), based on advances in superconductivity; OmniGuide, which developed cutting-edge optical fiber technologies; and QD Vision, a pioneer in quantum dot technology acquired by Samsung in 2016. Another landmark achievement was the development of the first germanium laser to operate at room temperature — a breakthrough now used in optical communications.

Enabling research through partnership and support

MRL is launching targeted initiatives to connect MIT researchers with industry partners around specific technical challenges. Each initiative will be led by a junior faculty member working closely with MRL to identify a problem that aligns with their research expertise and is relevant to industry needs.

Through multi-year collaborations with participating companies, faculty can explore early-stage solutions in partnership with postdocs or graduate students. These initiatives are designed to be agile and interdisciplinary, with the potential to grow into major, long-term research programs.

Behind-the-scenes support, front-line impact

MRL provides critical infrastructure that enables faculty to focus on discovery, not logistics. “MRL works silently in the background, where every problem a principal investigator has related to the administration of materials research is solved with efficiency, good organization, and minimum effort,” says Tasan.

This quiet but powerful support spans multiple areas:

Together, these functions ensure that research at MRL runs smoothly and effectively — from initial idea to lasting innovation.

Leadership with a vision

Tasan, who also leads a research group focused on metallurgy, says he took on the directorship because “I thrive on new challenges.” He also saw the role as an opportunity to contribute more broadly to MIT. 

“I believe MRL can play an even greater role in advancing materials research across the Institute, and I’m excited to help make that happen,” he says.


Professor John Joannopoulos, photonics pioneer and Institute for Soldier Nanotechnologies director, dies at 78

Over 50 years at MIT, the condensed-matter physicist led the development of photonic crystals, translating discoveries into wide-ranging applications in energy, medicine, and defense.


John “JJ” Joannopoulos, the Francis Wright Davis Professor of Physics at MIT and director of the MIT Institute for Soldier Nanotechnologies (ISN), passed away on Aug. 17. He was 78. 

Joannopoulos was a prolific researcher in the field of theoretical condensed-matter physics, and an early pioneer in the study and application of photonic crystals. Many of his discoveries in the ways materials can be made to manipulate light have led to transformative and life-saving technologies, from chip-based optical waveguides, to wireless energy transfer, to health-monitoring textiles, to precision light-based surgical tools.

His remarkable career of over 50 years was spent entirely at MIT, where he was known as much for his generous and unwavering mentorship as for his contributions to science. He made a special point to keep up rich and meaningful collaborations with many of his former students and postdocs, dozens of whom have gone on to faculty positions at major universities, and to leadership roles in the public and private sectors. In his five decades at MIT, he made lasting connections across campus, both in service of science, and friendship.

“A scientific giant, inspiring leader, and a masterful communicator, John carried a generous and loving heart,” says Yoel Fink PhD ’00, an MIT professor of materials science and engineering who was Joannopoulos’ former student and a longtime collaborator. “He chose to see the good in people, keeping his mind and heart always open. Asking little for himself, he gave everything in care of others. John lived a life of deep impact and meaning — savoring the details of truth-seeking, achieving rare discoveries and mentoring generations of students to achieve excellence. With warmth, humor, and a never-ending optimism, JJ left an indelible impact on science and on all who had the privilege to know him. Above all, he was a loving husband, father, grandfather, friend, and mentor.”

“In the end, the most remarkable thing about him was his unmatched humanity, his ability to make you feel that you were the most important thing in the world that deserved his attention, no matter who you were,” says Raul Radovitzky, ISN associate director and the Jerome C. Hunsaker Professor in MIT’s Department of Aeronautics and Astronautics. “The legacy he leaves is not only in equations and innovations, but in the lives he touched, the minds he inspired, and the warmth he spread in every room he entered.”

“JJ was a very special colleague: a brilliant theorist who was also adept at identifying practical applications; a caring and inspiring mentor of younger scientists; a gifted teacher who knew every student in his class by name,” says Deepto Chakrabarty ’88, the William A. M. Burden Professor in Astrophysics and head of MIT’s Department of Physics. “He will be deeply missed.”

Layers of light

John Joannopoulos was born in 1947 in New York City to parents who had both emigrated from Greece. His father was a playwright, and his mother worked as a psychologist. From an early age, Joannopoulos knew he wanted to be a physicist — mainly because the subject was his most challenging in school. In a recent interview with MIT News, he enthusiastically shared: “You probably wouldn’t believe this, but it’s true: I wanted to be a physics professor since I was in high school! I loved the idea of being able to work with students, and being able to have ideas.”

He attended the University of California at Berkeley, where he received a bachelor’s degree in 1968, and a PhD in 1974, both in physics. That same year, he joined the faculty at MIT, where he would spend his 50-plus-year career — though at the time, the chances of gaining a long-term foothold at the Institute seemed slim, as Joannopoulos told MIT News.

“The chair of the physics department was the famous nuclear physicist, Herman Feshbach, who told me the probability that I would get tenure was something like 30 percent,” Joannopoulos recalled. “But when you’re young and just starting off, it was certainly better than zero, and I thought, that was fine — there was hope down the line.”

Starting out at MIT, Joannopoulos knew exactly what he wanted to do. He quickly set up a group to study theoretical condensed-matter physics, and specifically, ab initio physics, meaning physics “from first principles.” In this initial work, he sought to build theoretical models to predict the electronic behavior and structure of materials, based solely on the atomic numbers of the atoms in a material. Such foundational models could be applied to understand and design a huge range of materials and structures.

Then, in the early 1990s, Joannopoulos took a research turn, spurred by a paper by physicist Eli Yablonovitch of the University of California at Los Angeles, who had done preliminary work on materials that can affect the behavior of photons, or particles of light. Joannopoulos recognized a connection with his first-principles work on electrons. Along with his students, he applied that approach to predict the fundamental behavior of photons in different classes of materials. His group was one of the first to pioneer the field of photonic crystals, and the study of how materials can be manipulated at the nanoscale to control the behavior of light traveling through them. In 1995, Joannopoulos co-authored the first textbook on the subject.

And in 1998, he took on a more-than-century-old assumption about how light should reflect, and turned it on its head. That assumption predicted that light shining onto a structure made of multiple refractive layers could reflect back, but only over a limited range of angles. In fact, Joannopoulos and his group showed that the opposite is true: If the structure’s layers followed particular design criteria, the structure as a whole could reflect light coming from any and all angles. This structure was called the “perfect mirror.”

That insight led to another: If the structure were rolled into a tube, the resulting hollow fiber could act as a perfect optical conduit. Any light traveling through the fiber would reflect and bounce around within the fiber, with none scattering away. Joannopoulos and his group applied this insight to develop the first precision “optical scalpel” — a fiber that can be safely handled, while delivering a highly focused laser, precise and powerful enough to perform delicate surgical procedures. Joannopoulos helped to commercialize the new tool with a startup, OmniGuide, that has since provided the optical scalpel to assist in hundreds of thousands of medical procedures around the world.

Legendary mentor

In 2006, Joannopoulos took the helm as director of MIT’s Institute for Soldier Nanotechnologies — a post he steadfastly held for almost 20 years. During his dedicated tenure, he worked with ISN members across campus and in departments outside his own, getting to know their work and championing it. He facilitated countless collaborations between MIT faculty, industry partners, and the U.S. Department of Defense. Among the many projects he raised support for were innovations in lightweight armor, hyperspectral imaging, energy-efficient batteries, and smart and responsive fabrics.

Joannopoulos helped to translate many basic science insights into practical applications. He was a cofounder of six spinoff companies based on his fundamental research, and helped to create dozens more companies, which have advanced technologies ranging from laser surgery tools to wireless electric power transmission, transparent displays, and optical computing. He was awarded 126 patents for his many discoveries and authored over 750 peer-reviewed papers.

In recognition of his wide impact and contributions, Joannopoulos was elected to the National Academy of Sciences and the American Academy of Arts and Sciences. He was also a fellow of both the American Physical Society and the American Association for the Advancement of Science. Over his 50-plus-year career, he received many scientific awards and honors, including the Max Born Award and the Aneesur Rahman Prize in Computational Physics. Joannopoulos was also a gifted classroom teacher, recognized at MIT with the Buechner Teaching Prize in Physics and the Graduate Teaching Award in Science.

This year, Joannopoulos was the recipient of MIT’s Killian Achievement Award, which recognizes the extraordinary lifetime contributions of a member of the MIT faculty. In addition to the many accomplishments Joannopoulos has made in science, the award citation emphasized his lasting impact on the generations of students he has mentored:

“Professor Joannopoulos has served as a legendary mentor to generations of students, inspiring them to achieve excellence in science while at the same time facilitating the practical benefit to society through entrepreneurship,” the citation reads. “Through all of these individuals he has impacted — not to mention their academic descendants — Professor Joannopoulos has had a vast influence on the development of science in recent decades.”

“JJ was an amazing scientist: He published hundreds of papers that have been cited close to 200,000 times. He was also a serial entrepreneur: Companies he cofounded raised hundreds of millions of dollars and employed hundreds of people,” says MIT Professor Marin Soljacic ’96, a former postdoc under Joannopoulos who with him cofounded a startup, WiTricity. “He was an amazing mentor, a close friend, and like a scientific father to me. He always had time for me, any time of the day, and as much as I needed.”

Indeed, Joannopoulos strived to meaningfully support his many students. In the classroom, he “was legendary,” says friend and colleague Patrick Lee ’66, PhD ’70, who recalls that Joannopoulos would make a point of memorizing the names and faces of more than 100 students on the first day of class, and calling them each by their first name, starting on the second day, and for the rest of the term.

What’s more, Joannopoulos encouraged graduate students and postdocs to follow their ideas, even when they ran counter to his own.

“John did not produce clones,” says Lee, who is an MIT professor emeritus of physics. “He showed them the way to do science by example, by caring and by sharing his optimism. I have never seen someone so deeply loved by his students.”

Even students who stepped off the photonics path have kept in close contact with their mentor, as former student and MIT professor Josh Winn ’94, SM ’94, PhD ’01 has done.

“Even though our work together ended more than 25 years ago, and I now work in a different field, I still feel like part of the Joannopoulos academic family,” says Winn, who is now a professor of astrophysics at Princeton University. “It's a loyal group with branches all over the world. We even had our own series of conferences, organized by former students to celebrate John's 50th, 60th, and 70th birthdays. Most professors would consider themselves fortunate to have even one such ‘festschrift’ honoring their legacy.”

MIT professor of mathematics Steven Johnson ’95, PhD ’01, a former student and frequent collaborator, has experienced personally, and seen many times over, Joannopoulos’ generous and open-door mentorship.

“In every collaboration, I’ve unfailingly observed him to cast a wide net to value multiple voices, to ensure that everyone feels included and valued, and to encourage collaborations across groups and fields and institutions,” Johnson says. “Kind, generous, and brimming with infectious enthusiasm and positivity, he set an example so many of his lucky students have striven to follow.”

Joannopoulos started at MIT around the same time as Marc Kastner, who had a nearby office on the second floor of Building 13.

“I would often hear loud arguments punctuated by boisterous laughter, coming from John’s office, where he and his students were debating physics,” recalls Kastner, who is the Donner Professor of Physics Emeritus at MIT. “I am sure this style of interaction is what made him such a great mentor.”

“He exuded such enthusiasm for science and good will to others that he was just good fun to be around,” adds friend and colleague Erich Ippen, MIT professor emeritus of physics.

“John was indeed a great man — a very special one. Everyone who ever worked with him understands this,” says Stanford University physics professor Robert Laughlin PhD ’79, one of Joannopoulos’ first graduate students, who went on to win the 1998 Nobel Prize in Physics. “He sprinkled a kind of transformative magic dust on people that induced them to dedicate every waking moment to the task of making new and wonderful things. You can find traces of it in lots of places around the world that matter, all of them the better for it. There’s quite a pile of it in my office.”

Joannopoulos is survived by his wife, Kyri Dunussi-Joannopoulos; their three daughters, Maria, Lena, and Alkisti; and their families. Details for funeral and memorial services are forthcoming.


Researchers glimpse the inner workings of protein language models

A new approach can reveal the features AI models use to predict proteins that might make good drug or vaccine targets.


Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.

These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, there’s no way to determine how these models make their predictions or which protein features play the most important role in those decisions.

In a new study, MIT researchers have used a novel technique to open up that “black box” and allow them to determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.

“Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”

Onkar Gujral, an MIT graduate student, is the lead author of the open-access study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student in electrical engineering and computer science, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.

Opening the black box

In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models that accelerated the development of AlphaFold, such as ESM2 and OmegaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.

Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.

In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.

However, in all of these studies, it has been impossible to know how the models were making their predictions.

“We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.

In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.

The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.

Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.

When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.
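The expansion described above can be sketched in a few lines. This is an illustrative sparse-autoencoder skeleton, not the authors' code: the widths (480 and 20,000) come from the article, while the ReLU encoder, the random weights, and the L1 penalty coefficient are standard but assumed choices, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 480      # width of the protein language model's dense representation
D_SPARSE = 20_000  # width of the expanded, sparse feature space

# Randomly initialized encoder/decoder weights (training loop omitted).
W_enc = rng.normal(0, 0.02, size=(D_MODEL, D_SPARSE))
b_enc = np.zeros(D_SPARSE)
W_dec = rng.normal(0, 0.02, size=(D_SPARSE, D_MODEL))

def encode(x):
    """Map a dense activation vector to a wide, non-negative feature vector."""
    return np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU: activations can be driven to zero

def decode(f):
    """Reconstruct the original dense representation from the sparse features."""
    return f @ W_dec

def loss(x, l1_coeff=5e-4):
    """Reconstruction error plus an L1 penalty that incentivizes sparsity."""
    f = encode(x)
    recon = decode(f)
    return np.mean((x - recon) ** 2) + l1_coeff * np.abs(f).sum()

x = rng.normal(size=D_MODEL)  # stand-in for one protein's representation
f = encode(x)
print(f.shape)
```

Minimizing this loss over many protein representations is what pushes each of the 20,000 nodes toward encoding a single feature.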

“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it's hard to interpret the neurons.”

Interpretable models

Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name), to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.

By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”

This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.

“When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.

Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.

“At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.

The research was funded by the National Institutes of Health. 


Planets without water could still produce certain liquids, a new study finds

Lab experiments show “ionic liquids” can form through common planetary processes and might be capable of supporting life even on waterless planets.


Water is essential for life on Earth, so, the reasoning goes, liquid water must be a requirement for life on other worlds as well. For decades, scientists’ definition of habitability on other planets has rested on this assumption.

But what makes some planets habitable might have very little to do with water. In fact, an entirely different type of liquid could conceivably support life in worlds where water can barely exist. That’s a possibility that MIT scientists raise in a study appearing this week in the Proceedings of the National Academy of Sciences.

From lab experiments, the researchers found that a type of fluid known as an ionic liquid can readily form from chemical ingredients that are also expected to be found on the surface of some rocky planets and moons. Ionic liquids are salts that exist in liquid form below about 100 degrees Celsius. The team’s experiments showed that a mixture of sulfuric acid and certain nitrogen-containing organic compounds produced such a liquid. On rocky planets, sulfuric acid may be a byproduct of volcanic activity, while nitrogen-containing compounds have been detected on several asteroids and planets in our solar system, suggesting the compounds may be present in other planetary systems.

The scientists propose that, even on planets that are too warm or whose atmospheres are too low-pressure to support liquid water, there could still be pockets of ionic liquid. And where there is liquid, there may be potential for life, though likely not anything that resembles Earth’s water-based beings.

Ionic liquids have extremely low vapor pressure and do not evaporate; they can form and persist at higher temperatures and lower pressures than what liquid water can tolerate. The researchers note that ionic liquid can be a hospitable environment for some biomolecules, such as certain proteins that can remain stable in the fluid.

“We consider water to be required for life because that is what’s needed for Earth life. But if we look at a more general definition, we see that what we need is a liquid in which metabolism for life can take place,” says Rachana Agrawal, who led the study as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Now if we include ionic liquid as a possibility, this can dramatically increase the habitability zone for all rocky worlds.”

The study’s MIT co-authors are Sara Seager, the Class of 1941 Professor of Planetary Sciences in the Department of Earth, Atmospheric and Planetary Sciences and a professor in the departments of Physics and of Aeronautics and Astronautics, along with Iaroslav Iakubivskyi, Weston Buchanan, Ana Glidden, and Jingcheng Huang. Co-authors also include Maxwell Seager of Worcester Polytechnic Institute, William Bains of Cardiff University, and Janusz Petkowski of Wroclaw University of Science and Technology, in Poland.

A liquid leap

The team’s work with ionic liquid grew out of an effort to search for signs of life on Venus, where clouds of sulfuric acid envelope the planet in a noxious haze. Despite its toxicity, Venus’ clouds may contain signs of life — a notion that scientists plan to test with upcoming missions to the planet’s atmosphere.

Agrawal and Seager, who is leading the Morning Star Missions to Venus, were investigating ways to collect and evaporate sulfuric acid. If a mission collects samples from Venus’ clouds, sulfuric acid would have to be evaporated away in order to reveal any residual organic compounds that could then be analyzed for signs of life.

The researchers were using their custom low-pressure system, designed to evaporate away excess sulfuric acid, to test evaporation of a solution of the acid and an organic compound, glycine. They found that in every case, while most of the liquid sulfuric acid evaporated, a stubborn layer of liquid always remained. They soon realized that sulfuric acid was chemically reacting with glycine, resulting in an exchange of hydrogen atoms from the acid to the organic compound. The result was a fluid mixture of salts, or ions, known as an ionic liquid, that persists as a liquid across a wide range of temperatures and pressures.

This accidental finding kickstarted an idea: Could ionic liquid form on planets that are too warm and host atmospheres too thin for water to exist?

“From there, we took the leap of imagination of what this could mean,” Agrawal says. “Sulfuric acid is found on Earth from volcanoes, and organic compounds have been found on asteroids and other planetary bodies. So, this led us to wonder if ionic liquids could potentially form and exist naturally on exoplanets.”

Rocky oases

On Earth, ionic liquids are mainly synthesized for industrial purposes. They do not occur naturally, except in one specific case, in which the liquid is generated from the mixing of venoms produced by two rival species of ants.

The team set out to investigate what conditions ionic liquid could be naturally produced in, and over what range of temperatures and pressures. In the lab, they mixed sulfuric acid with various nitrogen-containing organic compounds. In previous work, Seager’s team had found that the compounds, some of which can be considered ingredients associated with life, are surprisingly stable in sulfuric acid.

“In high school, you learn that an acid wants to donate a proton,” Seager says. “And oddly enough, we knew from our past work with sulfuric acid (the main component of Venus’ clouds) and nitrogen-containing compounds, that a nitrogen wants to receive a hydrogen. It’s like one person’s trash is another person’s treasure.”

The reaction could produce a bit of ionic liquid if the sulfuric acid and nitrogen-containing organics were in a one-to-one ratio — a ratio that was not a focus of the prior work. For their new study, Seager and Agrawal mixed sulfuric acid with over 30 different nitrogen-containing organic compounds, across a range of temperatures and pressures, then observed whether ionic liquid formed when they evaporated away the sulfuric acid in various vials. They also mixed the ingredients onto basalt rocks, which are known to exist on the surface of many rocky planets.

The team found that the reactions produced ionic liquid at temperatures up to 180 degrees Celsius and at extremely low pressures — much lower than that of the Earth’s atmosphere. Their results suggest that ionic liquid could naturally form on other planets where liquid water cannot exist, under the right conditions.

“We were just astonished that the ionic liquid forms under so many different conditions,” Seager says. “If you put the sulfuric acid and the organic on a rock, the excess sulfuric acid seeps into the rock pores, but you’re still left with a drop of ionic liquid on the rock. Whatever we tried, ionic liquid still formed.”

“We’re envisioning a planet warmer than Earth, that doesn’t have water, and at some point in its past or currently, it has to have had sulfuric acid, formed from volcanic outgassing,” Seager says. “This sulfuric acid has to flow over a little pocket of organics. And organic deposits are extremely common in the solar system.”

Then, she says, the resulting pockets of liquid could stay on the planet’s surface, potentially for years or millennia, where they could theoretically serve as small oases for simple forms of ionic-liquid-based life. Going forward, Seager’s team plans to investigate further, to see what biomolecules, and ingredients for life, might survive, and thrive, in ionic liquid.

“We just opened up a Pandora’s box of new research,” Seager says. “It’s been a real journey.”

This research was supported, in part, by the Sloan Foundation and the Volkswagen Foundation.


AI helps chemists develop tougher plastics

Researchers created polymers that are more resistant to tearing by incorporating stress-responsive molecules identified by a machine-learning model.


A new strategy for strengthening polymer materials could lead to more durable plastics and cut down on plastic waste, according to researchers at MIT and Duke University.

Using machine learning, the researchers identified crosslinker molecules that can be added to polymer materials, allowing them to withstand more force before tearing. These crosslinkers belong to a class of molecules known as mechanophores, which change their shape or other properties in response to mechanical force.

“These molecules can be useful for making polymers that would be stronger in response to force. You apply some stress to them, and rather than cracking or breaking, you instead see something that has higher resilience,” says Heather Kulik, the Lammot du Pont Professor of Chemical Engineering at MIT, who is also a professor of chemistry and the senior author of the study.

The crosslinkers that the researchers identified in this study are iron-containing compounds known as ferrocenes, which until now had not been broadly explored for their potential as mechanophores. Experimentally evaluating a single mechanophore can take weeks, but the researchers showed that they could use a machine-learning model to dramatically speed up this process.

MIT postdoc Ilia Kevlishvili is the lead author of the open-access paper, which appeared Friday in ACS Central Science. Other authors include Jafer Vakil, a Duke graduate student; David Kastner and Xiao Huang, both MIT graduate students; and Stephen Craig, a professor of chemistry at Duke.

The weakest link

Mechanophores are molecules that respond to force in unique ways, typically by changing their color, structure, or other properties. In the new study, the MIT and Duke team wanted to investigate whether they could be used to help make polymers more resilient to damage.

The new work builds on a 2023 study from Craig and Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry at MIT, and their colleagues. In that work, the researchers found that, surprisingly, incorporating weak crosslinkers into a polymer network can make the overall material stronger. When materials with these weak crosslinkers are stretched to the breaking point, any cracks propagating through the material try to avoid the stronger bonds and go through the weaker bonds instead. This means the crack has to break more bonds than it would if all of the bonds were the same strength.

To find new ways to exploit that phenomenon, Craig and Kulik joined forces to try to identify mechanophores that could be used as weak crosslinkers.

“We had this new mechanistic insight and opportunity, but it came with a big challenge: Of all possible compositions of matter, how do we zero in on the ones with the greatest potential?” Craig says. “Full credit to Heather and Ilia for both identifying this challenge and devising an approach to meet it.”

Discovering and characterizing mechanophores is a difficult task that requires either time-consuming experiments or computationally intense simulations of molecular interactions. Most of the known mechanophores are organic compounds, such as cyclobutane, which was used as a crosslinker in the 2023 study.

In the new study, the researchers wanted to focus on molecules known as ferrocenes, which are believed to hold potential as mechanophores. Ferrocenes are organometallic compounds that have an iron atom sandwiched between two carbon-containing rings. Those rings can have different chemical groups added to them, which alter their chemical and mechanical properties.

Many ferrocenes are used as pharmaceuticals or catalysts, and a handful are known to be good mechanophores, but most have not been evaluated for that use. Experimental tests on a single potential mechanophore can take several weeks, and computational simulations, while faster, still take a couple of days. Evaluating thousands of candidates using these strategies is a daunting task.

Realizing that a machine-learning approach could dramatically speed up the characterization of these molecules, the MIT and Duke team decided to use a neural network to identify ferrocenes that could be promising mechanophores.

They began with information from a database known as the Cambridge Structural Database, which contains the structures of 5,000 different ferrocenes that have already been synthesized.

“We knew that we didn’t have to worry about the question of synthesizability, at least from the perspective of the mechanophore itself. This allowed us to pick a really large space to explore with a lot of chemical diversity, that also would be synthetically realizable,” Kevlishvili says.

First, the researchers performed computational simulations for about 400 of these compounds, allowing them to calculate how much force is necessary to pull atoms apart within each molecule. For this application, they were looking for molecules that would break apart quickly, as these weak links could make polymer materials more resistant to tearing.

Then they used this data, along with information on the structure of each compound, to train a machine-learning model. This model was able to predict the force needed to activate the mechanophore, which in turn influences resistance to tearing, for the remaining 4,500 compounds in the database, plus an additional 7,000 compounds that are similar to those in the database but have some atoms rearranged.
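The workflow in the preceding paragraphs (simulate a few hundred compounds, fit a model, cheaply predict the rest) can be sketched as follows. Everything here is a stand-in: the 16-dimensional descriptors, the ridge-regression surrogate, and the synthetic forces are illustrative assumptions, not the paper's neural network or data; only the candidate counts (400 simulated, 4,500 + 7,000 predicted) follow the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 400 "simulated" ferrocenes, each described by a
# 16-dimensional structural descriptor and a computed rupture force (a.u.).
X_sim = rng.normal(size=(400, 16))
true_w = rng.normal(size=16)
y_force = X_sim @ true_w + 0.1 * rng.normal(size=400)

# Fit a ridge-regression surrogate on the expensive simulated subset...
lam = 1e-2
w = np.linalg.solve(X_sim.T @ X_sim + lam * np.eye(16), X_sim.T @ y_force)

# ...then predict activation forces for the remaining candidates in bulk.
X_rest = rng.normal(size=(11_500, 16))  # 4,500 database + 7,000 rearranged variants
y_pred = X_rest @ w

# Rank candidates by predicted force: low-force ("weak") crosslinkers are
# the ones expected to toughen the polymer network.
ranked = np.argsort(y_pred)
print(ranked[:5])
```

The payoff is the same as in the study: one fit on a few hundred expensive calculations replaces thousands of days-long simulations.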

The researchers discovered two main features that seemed likely to increase tear resistance. One was interactions between the chemical groups that are attached to the ferrocene rings. Additionally, the presence of large, bulky molecules attached to both rings of the ferrocene made the molecule more likely to break apart in response to applied forces.

While the first of these features was not surprising, the second trait was not something a chemist would have predicted beforehand, and could not have been detected without AI, the researchers say. “This was something truly surprising,” Kulik says.

Tougher plastics

Once the researchers identified about 100 promising candidates, Craig’s lab at Duke synthesized a polymer material incorporating one of them, known as m-TMS-Fc. Within the material, m-TMS-Fc acts as a crosslinker, connecting the polymer strands that make up polyacrylate, a type of plastic.

By applying force to each polymer until it tore, the researchers found that the weak m-TMS-Fc linker produced a strong, tear-resistant polymer. This polymer turned out to be about four times tougher than polymers made with standard ferrocene as the crosslinker.

“That really has big implications because if we think of all the plastics that we use and all the plastic waste accumulation, if you make materials tougher, that means their lifetime will be longer. They will be usable for a longer period of time, which could reduce plastic production in the long term,” Kevlishvili says.

The researchers now hope to use their machine-learning approach to identify mechanophores with other desirable properties, such as the ability to change color or become catalytically active in response to force. Such materials could be used as stress sensors or switchable catalysts, and they could also be useful for biomedical applications such as drug delivery.

In those studies, the researchers plan to focus on ferrocenes and other metal-containing mechanophores that have already been synthesized but whose properties are not fully understood.

“Transition metal mechanophores are relatively underexplored, and they’re probably a little bit more challenging to make,” Kulik says. “This computational workflow can be broadly used to enlarge the space of mechanophores that people have studied.”

The research was funded by the National Science Foundation Center for the Chemistry of Molecularly Optimized Networks (MONET).


MIT tool visualizes and edits “physically impossible” objects

By visualizing Escher-like optical illusions in 2.5 dimensions, the “Meschers” tool could help scientists understand physics-defying shapes and spark new designs.


M.C. Escher’s artwork is a gateway into a world of depth-defying optical illusions, featuring “impossible objects” that break the laws of physics with convoluted geometries. What you perceive his illustrations to be depends on your point of view — for example, a person seemingly walking upstairs may be heading down the steps if you tilt your head sideways.

Computer graphics scientists and designers can recreate these illusions in 3D, but only by bending or cutting a real shape and positioning it at a particular angle. This workaround has downsides, though: Changing the smoothness or lighting of the structure will expose that it isn’t actually an optical illusion, which also means you can’t accurately solve geometry problems on it.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a unique approach to represent “impossible” objects in a more versatile way. Their “Meschers” tool converts images and 3D models into 2.5-dimensional structures, creating Escher-like depictions of things like windows, buildings, and even donuts. The approach helps users relight, smooth out, and study unique geometries while preserving their optical illusion.

This tool could assist geometry researchers with calculating the distance between two points on a curved impossible surface (“geodesics”) and simulating how heat dissipates over it (“heat diffusion”). It could also help artists and computer graphics scientists create physics-breaking designs in multiple dimensions.

Lead author and MIT PhD student Ana Dodik aims to design computer graphics tools that aren’t limited to replicating reality, enabling artists to express their intent independently of whether a shape can be realized in the physical world. “Using Meschers, we’ve unlocked a new class of shapes for artists to work with on the computer,” she says. “They could also help perception scientists understand the point at which an object truly becomes impossible.”

Dodik and her colleagues will present their paper at the SIGGRAPH conference in August.

Making impossible objects possible

Impossible objects can’t be fully replicated in 3D. Their constituent parts often look plausible, but these parts don’t glue together properly when assembled in 3D. But what can be computationally imitated, as the CSAIL researchers found out, is the process of how we perceive these shapes.

Take the Penrose Triangle, for instance. The object as a whole is physically impossible because the depths don’t “add up,” but we can recognize real-world 3D shapes (like its three L-shaped corners) within it. These smaller regions can be realized in 3D — a property called “local consistency” — but when we try to assemble them together, they don’t form a globally consistent shape.

The Meschers approach models locally consistent regions without forcing them to be globally consistent, piecing together an Escher-esque structure. Behind the scenes, Meschers represents impossible objects as if we know their x and y coordinates in the image, as well as the differences in z coordinate (depth) between neighboring pixels; the tool uses these depth differences to reason about impossible objects indirectly.
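That depth-difference idea can be illustrated with a toy example. The code below is a sketch under stated assumptions (a four-pixel loop with hand-picked dz values), not the Meschers data structure itself: summing the prescribed depth differences around a closed loop tests whether a globally consistent depth assignment exists.

```python
# Toy depth-difference representation: pixels on a 4-cycle, with a
# prescribed depth difference dz for each directed edge between
# neighbors. No global z values are stored, only the differences.
edges = {("A", "B"): 1.0, ("B", "C"): 1.0,
         ("C", "D"): 1.0, ("D", "A"): 1.0}

def loop_inconsistency(cycle):
    """Sum of depth differences around a closed loop.

    Zero means a globally consistent depth function exists on the loop;
    nonzero is the signature of an Escher-style impossible object."""
    total = 0.0
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        # Use the stored difference, or the negated reverse edge.
        total += edges.get((a, b), -edges.get((b, a), 0.0))
    return total

print(loop_inconsistency(["A", "B", "C", "D"]))  # 4.0: the depths never "add up"
```

Each individual edge here is perfectly plausible (local consistency); the nonzero loop sum is what makes the whole object impossible.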

The many uses of Meschers

In addition to rendering impossible objects, Meschers can subdivide their structures into smaller shapes for more precise geometry calculations and smoothing operations. This process enabled the researchers to reduce visual imperfections of impossible shapes, such as a red heart outline they thinned out.

The researchers also tested their tool on an “impossibagel,” where a bagel is shaded in a physically impossible way. Meschers helped Dodik and her colleagues simulate heat diffusion and calculate geodesic distances between different points of the model.

“Imagine you’re an ant traversing this bagel, and you want to know how long it’ll take you to get across, for example,” says Dodik. “In the same way, our tool could help mathematicians analyze the underlying geometry of impossible shapes up close, much like how we study real-world ones.”
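Heat diffusion of this kind boils down to repeatedly applying a discrete Laplacian to per-vertex values, which depends only on connectivity, not on whether the surface can exist in 3D. The sketch below runs explicit-Euler heat flow on a toy four-vertex graph; it is illustrative only, and far simpler than Meschers' actual surface discretization.

```python
import numpy as np

# Adjacency of a tiny mesh-like graph (4 vertices).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
degree = np.diag(adj.sum(axis=1))
laplacian = degree - adj

u = np.array([1.0, 0.0, 0.0, 0.0])  # a unit of heat placed at vertex 0
dt = 0.1
for _ in range(200):
    u = u - dt * laplacian @ u       # explicit Euler step of du/dt = -L u

print(u)  # heat spreads toward the uniform state; total heat is conserved
```

Geodesic distances can be recovered from short-time heat flow in a similar spirit, which is why the two quantities appear together in the article.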

Much like a magician, the tool can create optical illusions out of otherwise practical objects, making it easier for computer graphics artists to create impossible objects. It can also use “inverse rendering” tools to convert drawings and images of impossible objects into high-dimensional designs. 

“Meschers demonstrates how computer graphics tools don’t have to be constrained by the rules of physical reality,” says senior author Justin Solomon, associate professor of electrical engineering and computer science and leader of the CSAIL Geometric Data Processing Group. “Incredibly, artists using Meschers can reason about shapes that we will never find in the real world.”

Meschers can also aid computer graphics artists with tweaking the shading of their creations, while still preserving an optical illusion. This versatility would allow creatives to change the lighting of their art to depict a wider variety of scenes (like a sunrise or sunset) — as Meschers demonstrated by relighting a model of a dog on a skateboard.

Despite its versatility, Meschers is just the start for Dodik and her colleagues. The team is considering designing an interface to make the tool easier to use while building more elaborate scenes. They’re also working with perception scientists to see how the computer graphics tool can be used more broadly.

Dodik and Solomon wrote the paper with CSAIL affiliates Isabella Yu ’24, SM ’25; PhD student Kartik Chandra SM ’23; MIT professors Jonathan Ragan-Kelley and Joshua Tenenbaum; and MIT Assistant Professor Vincent Sitzmann. 

Their work was supported, in part, by the MIT Presidential Fellowship, the Mathworks Fellowship, the Hertz Foundation, the U.S. National Science Foundation, the Schmidt Sciences AI2050 fellowship, MIT Quest for Intelligence, the U.S. Army Research Office, the U.S. Air Force Office of Scientific Research, the SystemsThatLearn@CSAIL initiative, Google, the MIT–IBM Watson AI Laboratory, the Toyota–CSAIL Joint Research Center, Adobe Systems, the Singapore Defence Science and Technology Agency, and the U.S. Intelligence Advanced Research Projects Activity.


Ultrasmall optical devices rewrite the rules of light manipulation

Nanophotonic devices developed at MIT are compact, efficient, reprogrammable, adaptive, and able to dynamically respond to external inputs.


In the push to shrink and enhance technologies that control light, MIT researchers have unveiled a new platform that pushes the limits of modern optics through nanophotonics, the manipulation of light on the nanoscale, or billionths of a meter.

The result is a class of ultracompact optical devices that are not only smaller and more efficient than existing technologies, but also dynamically tunable, or switchable, from one optical mode to another. Until now, this has been an elusive combination in nanophotonics.

The work is reported in the July 8 issue of Nature Photonics.

“This work marks a significant step toward a future in which nanophotonic devices are not only compact and efficient, but also reprogrammable and adaptive, capable of dynamically responding to external inputs. The marriage of emerging quantum materials and established nanophotonics architectures will surely bring advances to both fields,” says Riccardo Comin, MIT’s Class of 1947 Career Development Associate Professor of Physics and leader of the work. Comin is also affiliated with MIT’s Materials Research Laboratory and Research Laboratory of Electronics (RLE).

Comin’s colleagues on the work are Ahmet Kemal Demir, an MIT graduate student in physics; Luca Nessi, a former MIT postdoc who is now a postdoc at Politecnico di Milano; Sachin Vaidya, a postdoc in RLE; Connor A. Occhialini PhD ’24, who is now a postdoc at Columbia University; and Marin Soljačić, the Cecil and Ida Green Professor of Physics at MIT.

Demir and Nessi are co-first authors of the Nature Photonics paper.

Toward new nanophotonic materials

Nanophotonics has traditionally relied on materials like silicon, silicon nitride, or titanium dioxide. These are the building blocks of devices that guide and confine light using structures such as waveguides, resonators, and photonic crystals. The latter are periodic arrangements of materials that control how light propagates, much like how a semiconductor crystal affects electron motion.

While highly effective, these materials are constrained by two major limitations. The first involves their refractive indices. These are a measure of how strongly a material interacts with light; the higher the refractive index, the more the material “grabs” or interacts with the light, bending it more sharply and slowing it down more. The refractive indices of silicon and other traditional nanophotonic materials are often modest, which limits how tightly light can be confined and how small optical devices can be made.
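The "bending and slowing" behavior of the refractive index follows from Snell's law, n1 sin(t1) = n2 sin(t2), and the phase velocity v = c/n. The sketch below uses approximate textbook index values (assumed here, not taken from the article) to show that the higher-index material bends a 45-degree ray more sharply and slows the light more.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def refracted_angle_deg(n1, n2, incidence_deg):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2)."""
    t2 = math.asin(n1 * math.sin(math.radians(incidence_deg)) / n2)
    return math.degrees(t2)

# Light entering from air (n = 1.0) at 45 degrees; indices are rough
# near-infrared textbook values for illustration only.
for name, n in [("silicon nitride", 2.0), ("silicon", 3.5)]:
    print(name, refracted_angle_deg(1.0, n, 45.0), C / n)
```

The higher-index material (silicon here) yields the smaller refraction angle and the lower phase velocity, which is what allows tighter light confinement and smaller devices.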

A second major limitation of traditional nanophotonic materials: once a structure is fabricated, its optical behavior is essentially fixed. There is usually no way to significantly reconfigure how it responds to light without physically altering it. “Tunability is essential for many next-gen photonics applications, enabling adaptive imaging, precision sensing, reconfigurable light sources, and trainable optical neural networks,” says Vaidya.

Introducing chromium sulfide bromide

These are the longstanding challenges that chromium sulfide bromide (CrSBr) is poised to solve. CrSBr is a layered quantum material with a rare combination of magnetic order and strong optical response. Central to its unique optical properties are excitons: quasiparticles formed when a material absorbs light and an electron is excited, leaving behind a positively charged “hole.” The electron and hole remain bound together by electrostatic attraction, forming a sort of neutral particle that can strongly interact with light.

In CrSBr, excitons dominate the optical response and are highly sensitive to magnetic fields, which means they can be manipulated using external controls.

Because of these excitons, CrSBr exhibits an exceptionally large refractive index that allows researchers to sculpt the material to fabricate optical structures like photonic crystals that are up to an order of magnitude thinner than those made from traditional materials. “We can make optical structures as thin as 6 nanometers, or just seven layers of atoms stacked on top of each other,” says Demir.

And crucially, by applying a modest magnetic field, the MIT researchers were able to continuously and reversibly switch the optical mode. In other words, they demonstrated the ability to dynamically change how light flows through the nanostructure, all without any moving parts or changes in temperature. “This degree of control is enabled by a giant, magnetically induced shift in the refractive index, far beyond what is typically achievable in established photonic materials,” says Demir.

In fact, the interaction between light and excitons in CrSBr is so strong that it leads to the formation of polaritons, hybrid light-matter particles that inherit properties from both components. These polaritons enable new forms of photonic behavior, such as enhanced nonlinearities and new regimes of quantum light transport. And unlike conventional systems that require external optical cavities to reach this regime, CrSBr supports polaritons intrinsically.

While this demonstration uses standalone CrSBr flakes, the material can also be integrated into existing photonic platforms, such as integrated photonic circuits. This makes CrSBr immediately relevant to real-world applications, where it can serve as a tunable layer or component in otherwise passive devices.

The MIT results were achieved at very cold temperatures of up to 132 kelvins (-222 degrees Fahrenheit). Although this is below room temperature, there are compelling use cases, such as quantum simulation, nonlinear optics, and reconfigurable polaritonic platforms, where the unparalleled tunability of CrSBr could justify operation in cryogenic environments.

In other words, says Demir, “CrSBr is so unique with respect to other common materials that even going down to cryogenic temperatures will be worth the trouble, hopefully.”

That said, the team is also exploring related materials with higher magnetic ordering temperatures to enable similar functionality at more accessible conditions.

This work was supported by the U.S. Department of Energy, the U.S. Army Research Office, and a MathWorks Science Fellowship. The work was performed in part at MIT.nano.


How the brain distinguishes oozing fluids from solid objects

A new study finds parts of the brain’s visual cortex are specialized to analyze either solid objects or flowing materials like water or sand.


Imagine a ball bouncing down a flight of stairs. Now think about a cascade of water flowing down those same stairs. The ball and the water behave very differently, and it turns out that your brain has different regions for processing visual information about each type of physical matter.

In a new study, MIT neuroscientists have identified parts of the brain’s visual cortex that respond preferentially when you look at “things” — that is, rigid or deformable objects like a bouncing ball. Other brain regions are more activated when looking at “stuff” — liquids or granular substances such as sand.

This distinction, which has never been seen in the brain before, may help the brain plan how to interact with different kinds of physical materials, the researchers say.

“When you’re looking at some fluid or gooey stuff, you engage with it in a different way than you do with a rigid object. With a rigid object, you might pick it up or grasp it, whereas with fluid or gooey stuff, you probably are going to have to use a tool to deal with it,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience; a member of the McGovern Institute for Brain Research and MIT’s Center for Brains, Minds, and Machines; and the senior author of the study.

MIT postdoc Vivian Paulun, who is joining the faculty of the University of Wisconsin at Madison this fall, is the lead author of the paper, which appears today in the journal Current Biology. RT Pramod, an MIT postdoc, and Josh Tenenbaum, an MIT professor of brain and cognitive sciences, are also authors of the study.

Stuff vs. things

Decades of brain imaging studies, including early work by Kanwisher, have revealed regions in the brain’s ventral visual pathway that are involved in recognizing the shapes of 3D objects, including an area called the lateral occipital complex (LOC). A region in the brain’s dorsal visual pathway, known as the frontoparietal physics network (FPN), analyzes the physical properties of materials, such as mass or stability.

Although scientists have learned a great deal about how these pathways respond to different features of objects, the vast majority of these studies have been done with solid objects, or “things.”

“Nobody has asked how we perceive what we call ‘stuff’ — that is, liquids or sand, honey, water, all sorts of gooey things. And so we decided to study that,” Paulun says.

These gooey materials behave very differently from solids. They flow rather than bounce, and interacting with them usually requires containers and tools such as spoons. The researchers wondered if these physical features might require the brain to devote specialized regions to interpreting them.

To explore how the brain processes these materials, Paulun used a software program designed for visual effects artists to create more than 100 video clips showing different types of things or stuff interacting with the physical environment. In these videos, the materials could be seen sloshing or tumbling inside a transparent box, being dropped onto another object, or bouncing or flowing down a set of stairs.

The researchers used functional magnetic resonance imaging (fMRI) to scan the visual cortex of people as they watched the videos. They found that both the LOC and the FPN respond to “things” and “stuff,” but that each pathway has distinctive subregions that respond more strongly to one or the other.

“Both the ventral and the dorsal visual pathway seem to have this subdivision, with one part responding more strongly to ‘things,’ and the other responding more strongly to ‘stuff,’” Paulun says. “We haven’t seen this before because nobody has asked that before.”

Roland Fleming, a professor of experimental psychology at Justus Liebig University of Giessen, described the findings as a “major breakthrough in the scientific understanding of how our brains represent the physical properties of our surrounding world.”

“We’ve known the distinction exists for a long time psychologically, but this is the first time that it’s been really mapped onto separate cortical structures in the brain. Now we can investigate the different computations that the distinct brain regions use to process and represent objects and materials,” says Fleming, who was not involved in the study.

Physical interactions

The findings suggest that the brain may have different ways of representing these two categories of material, similar to the artificial physics engines that are used to create video game graphics. These engines usually represent a 3D object as a mesh, while fluids are represented as sets of particles that can be rearranged.
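In the spirit of those game engines, the two representations can be sketched as follows. This is an illustrative toy, not any real engine’s API: a rigid “thing” is a mesh whose vertices move as one unit, while “stuff” is a set of particles free to rearrange:

```python
import random

class RigidBody:
    """A 'thing': a fixed mesh of vertices that moves as one unit."""
    def __init__(self, vertices):
        self.vertices = list(vertices)

    def translate(self, dx, dy):
        # Every vertex moves together, so relative geometry is preserved.
        self.vertices = [(x + dx, y + dy) for x, y in self.vertices]

class Fluid:
    """'Stuff': a set of independent particles that can rearrange freely."""
    def __init__(self, particles):
        self.particles = list(particles)

    def step(self, gravity=0.1):
        # Each particle falls and jitters on its own, so the material's
        # overall shape can change completely from step to step.
        self.particles = [
            (x + random.uniform(-0.05, 0.05), y - gravity)
            for x, y in self.particles
        ]

ball = RigidBody([(0, 0), (1, 0), (0, 1)])
ball.translate(2, -3)   # same triangle, new place

water = Fluid([(0.1 * i, 1.0) for i in range(10)])
water.step()            # same particles, new arrangement
```

The contrast mirrors the hypothesis in the study: one representation preserves shape, the other preserves only the collection of matter.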

“The interesting hypothesis that we can draw from this is that maybe the brain, similar to artificial game engines, has separate computations for representing and simulating ‘stuff’ and ‘things.’ And that would be something to test in the future,” Paulun says.

The researchers also hypothesize that these regions may have developed to help the brain understand important distinctions that allow it to plan how to interact with the physical world. To further explore this possibility, the researchers plan to study whether the areas involved in processing rigid objects are also active when a brain circuit involved in planning to grasp objects is active.

They also hope to look at whether any of the areas within the FPN correlate with the processing of more specific features of materials, such as the viscosity of liquids or the bounciness of objects. And in the LOC, they plan to study how the brain represents changes in the shape of fluids and deformable substances.

The research was funded by the German Research Foundation, the U.S. National Institutes of Health, and a U.S. National Science Foundation grant to the Center for Brains, Minds, and Machines.


Mapping cells in time and space: New tool reveals a detailed history of tumor growth

Researchers developed a tool to recreate cells’ family trees. Comparing cells’ lineages and locations within a tumor provided insights into factors shaping tumor growth.


All life is connected in a vast family tree. Every organism exists in relationship to its ancestors, descendants, and cousins, and the path between any two individuals can be traced. The same is true of cells within organisms — every one of the trillions of cells in the human body is produced through successive divisions from a fertilized egg, and all of them can be related to one another through a cellular family tree. In simpler organisms, such as the worm C. elegans, this cellular family tree has been fully mapped, but the cellular family tree of a human is many times larger and more complex.

In the past, MIT professor and Whitehead Institute for Biomedical Research member Jonathan Weissman and other researchers developed lineage tracing methods to track and reconstruct the family trees of cell divisions in model organisms in order to understand more about the relationships between cells and how they assemble into tissues, organs, and — in some cases — tumors. These methods could help to answer many questions about how organisms develop and diseases like cancer are initiated and progress.

Now, Weissman and colleagues have developed an advanced lineage tracing tool that not only captures an accurate family tree of cell divisions, but also combines that with spatial information: identifying where each cell ends up within a tissue. The researchers used their tool, PEtracer, to observe the growth of metastatic tumors in mice. Combining lineage tracing and spatial data gave the researchers a detailed view of how factors intrinsic to the cancer cells, and factors from their environments, influenced tumor growth. Weissman, postdocs in his lab Luke Koblan, Kathryn Yost, and Pu Zheng, and graduate student William Colgan describe the work in a paper published in the journal Science on July 24.

“Developing this tool required combining diverse skill sets through the sort of ambitious interdisciplinary collaboration that’s only possible at a place like Whitehead Institute,” says Weissman, who is also a Howard Hughes Medical Institute investigator. “Luke came in with an expertise in genetic engineering, Pu in imaging, Katie in cancer biology, and William in computation, but the real key to their success was their ability to work together to build PEtracer.”

“Understanding how cells move in time and space is an important way to look at biology, and here we were able to see both of those things in high resolution. The idea is that by understanding both a cell’s past and where it ends up, you can see how different factors throughout its life influenced its behaviors. In this study, we use these approaches to look at tumor growth, though in principle we can now begin to apply these tools to study other biology of interest, like embryonic development,” Koblan says.

Designing a tool to track cells in space and time

PEtracer tracks cells’ lineages by repeatedly adding short, predetermined codes to the DNA of cells over time. Each piece of code, called a lineage tracing mark, is made up of five bases, the building blocks of DNA. These marks are inserted using a gene editing technology called prime editing, which directly rewrites stretches of DNA with minimal undesired byproducts. Over time, each cell acquires more lineage tracing marks, while also maintaining the marks of its ancestors. The researchers can then compare cells’ combinations of marks to figure out relationships and reconstruct the family tree.
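A toy version of this bookkeeping is sketched below. The names are hypothetical, and the real marks are five-base prime-edited insertions; here each mark is a unique ID so the sketch stays deterministic. The key ideas survive: daughters inherit all ancestral marks, gain their own, and shared leading marks reveal where two lineages diverged:

```python
import itertools

_ids = itertools.count()

def new_mark():
    # Stand-in for a 5-base prime-edited mark; unique here for determinism.
    return next(_ids)

def divide(marks):
    # Both daughters keep every ancestral mark, then each gains a new one.
    return marks + [new_mark()], marks + [new_mark()]

def divergence_depth(a, b):
    # Marks common to both cells came from shared ancestors, so the
    # length of the common prefix shows how recently the lineages split.
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

founder = [new_mark()]
d1, d2 = divide(founder)   # sisters: share only the founder's mark
g1, g2 = divide(d1)        # granddaughters via d1
print(divergence_depth(d1, d2))  # 1 (split right after the founder)
print(divergence_depth(g1, d1))  # 2 (g1 carries all of d1's marks)
```

Comparing these prefix depths across many cells is what lets a full family tree be reconstructed.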

“We used computational modeling to design the tool from first principles, to make sure that it was highly accurate, and compatible with imaging technology. We ran many simulations to land on the optimal parameters for a new lineage tracing tool, and then engineered our system to fit those parameters,” Colgan says.

When the tissue — in this case, a tumor growing in the lung of a mouse — had grown sufficiently, the researchers collected it and used advanced imaging approaches to look at each cell’s lineage relationship to other cells via the lineage tracing marks, along with its spatial position within the imaged tissue and its identity (as determined by the levels of different RNAs expressed in each cell). PEtracer is compatible with both imaging approaches and sequencing methods that capture genetic information from single cells.

“Making it possible to collect and analyze all of this data from the imaging was a large challenge,” Zheng says. “What’s particularly exciting to me is not just that we were able to collect terabytes of data, but that we designed the project to collect data that we knew we could use to answer important questions and drive biological discovery.”

Reconstructing the history of a tumor

Combining the lineage tracing, gene expression, and spatial data let the researchers understand how the tumor grew. They could tell how closely related neighboring cells are and compare their traits. Using this approach, the researchers found that the tumors they were analyzing were made up of four distinct modules, or neighborhoods, of cells.

The tumor cells closest to the lung, the most nutrient-dense region, were the most fit, meaning their lineage history indicated the highest rate of cell division over time. Fitness in cancer cells tends to correlate with how aggressively tumors will grow.

The cells at the “leading edge” of the tumor, the far side from the lung, were more diverse and not as fit. Below the leading edge was a low-oxygen neighborhood of cells that might once have been leading edge cells, now trapped in a less-desirable spot. Between these cells and the lung-adjacent cells was the tumor core, a region with both living and dead cells, as well as cellular debris.

The researchers found that cancer cells across the family tree were equally likely to end up in most of the regions, with the exception of the lung-adjacent region, where a few branches of the family tree dominated. This suggests that the cancer cells’ differing traits were heavily influenced by their environments, or the conditions in their local neighborhoods, rather than their family history. Further evidence of this point was that expression of certain fitness-related genes, such as Fgf1/Fgfbp1, correlated to a cell’s location, rather than its ancestry. However, lung-adjacent cells also had inherited traits that gave them an edge, including expression of the fitness-related gene Cldn4 — showing that family history influenced outcomes as well.

These findings demonstrate how cancer growth is influenced both by factors intrinsic to certain lineages of cancer cells and by environmental factors that shape the behavior of cancer cells exposed to them.

“By looking at so many dimensions of the tumor in concert, we could gain insights that would not have been possible with a more limited view,” Yost says. “Being able to characterize different populations of cells within a tumor will enable researchers to develop therapies that target the most aggressive populations more effectively.”

“Now that we’ve done the hard work of designing the tool, we’re excited to apply it to look at all sorts of questions in health and disease, in embryonic development, and across other model species, with an eye toward understanding important problems in human health,” Koblan says. “The data we collect will also be useful for training AI models of cellular behavior. We’re excited to share this technology with other researchers and see what we all can discover.”


Famous double-slit experiment holds up when stripped to its quantum essentials

MIT physicists confirm that, like Superman, light has two identities that are impossible to see at once.


MIT physicists have performed an idealized version of one of the most famous experiments in quantum physics. Their findings demonstrate, with atomic-level precision, the dual yet evasive nature of light. They also happen to confirm that Albert Einstein was wrong about this particular quantum scenario.

The experiment in question is the double-slit experiment, which was first performed in 1801 by the British scholar Thomas Young to show how light behaves as a wave. Today, with the formulation of quantum mechanics, the double-slit experiment is known for its surprisingly simple demonstration of a head-scratching reality: that light exists as both a particle and a wave. Stranger still, this duality cannot be simultaneously observed. Seeing light in the form of particles instantly obscures its wave-like nature, and vice versa.

The original experiment involved shining a beam of light through two parallel slits in a screen and observing the pattern that formed on a second, faraway screen. One might expect to see two overlapping spots of light, which would imply that light exists as particles, a.k.a. photons, like paintballs that follow a direct path. But instead, the light produces alternating bright and dark stripes on the screen, in an interference pattern similar to what happens when two ripples in a pond meet. This suggests light behaves as a wave. Even weirder, when one tries to measure which slit the light is traveling through, the light suddenly behaves as particles and the interference pattern disappears.
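The alternating bright and dark stripes follow the standard two-slit intensity formula, which depends on the path difference between the slits. The wavelength, slit spacing, and screen distance below are generic illustrative values:

```python
import math

def fringe_intensity(x, wavelength, slit_sep, screen_dist):
    """Relative two-slit intensity at position x on the screen.

    Path difference ~ slit_sep * x / screen_dist (small-angle approx.);
    intensity goes as cos^2(pi * path_difference / wavelength).
    """
    delta = slit_sep * x / screen_dist
    return math.cos(math.pi * delta / wavelength) ** 2

wl, d, L = 500e-9, 50e-6, 1.0   # green light, 50-micron slits, 1 m screen

# Bright fringe at the center; dark fringe where the paths differ by
# half a wavelength, so the two waves cancel.
center = fringe_intensity(0.0, wl, d, L)
dark = fringe_intensity(wl * L / (2 * d), wl, d, L)
```

Two overlapping paintball-style spots would show no such cancellation; the dark fringes are the wave signature.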

The double-slit experiment is taught today in most high school physics classes as a simple way to illustrate the fundamental principle of quantum mechanics: that all physical objects, including light, are simultaneously particles and waves.

Nearly a century ago, the experiment was at the center of a friendly debate between physicists Albert Einstein and Niels Bohr. In 1927, Einstein argued that a photon particle should pass through just one of the two slits and in the process generate a slight force on that slit, like a bird rustling a leaf as it flies by. He proposed that one could detect such a force while also observing an interference pattern, thereby catching light’s particle and wave nature at the same time. In response, Bohr applied the quantum mechanical uncertainty principle and showed that the detection of the photon’s path would wash out the interference pattern.

Scientists have since carried out multiple versions of the double-slit experiment, and they have all, to various degrees, confirmed the validity of the quantum theory formulated by Bohr. Now, MIT physicists have performed the most “idealized” version of the double-slit experiment to date. Their version strips down the experiment to its quantum essentials. They used individual atoms as slits, and used weak beams of light so that each atom scattered at most one photon. By preparing the atoms in different quantum states, they were able to modify what information the atoms obtained about the path of the photons. The researchers thus confirmed the predictions of quantum theory: The more information was obtained about the path (i.e. the particle nature) of light, the lower the visibility of the interference pattern was. 

They also demonstrated what Einstein got wrong: whenever an atom is “rustled” by a passing photon, the wave interference is diminished.

“Einstein and Bohr would have never thought that this is possible, to perform such an experiment with single atoms and single photons,” says Wolfgang Ketterle, the John D. MacArthur Professor of Physics and leader of the MIT team. “What we have done is an idealized Gedanken experiment.”

Their results appear in the journal Physical Review Letters. Ketterle’s MIT co-authors include first author Vitaly Fedoseev, Hanzhen Lin, Yu-Kun Lu, Yoo Kyung Lee, and Jiahao Lyu, who all are affiliated with MIT’s Department of Physics, the Research Laboratory of Electronics, and the MIT-Harvard Center for Ultracold Atoms.

Cold confinement

Ketterle’s group at MIT experiments with atoms and molecules that they super-cool to temperatures just above absolute zero and arrange in configurations that they confine with laser light. Within these ultracold, carefully tuned clouds, exotic phenomena that only occur at the quantum, single-atom scale can emerge.

In a recent experiment, the team was investigating a seemingly unrelated question, studying how light scattering can reveal the properties of materials built from ultracold atoms.

“We realized we can quantify the degree to which this scattering process is like a particle or a wave, and we quickly realized we can apply this new method to realize this famous experiment in a very idealized way,” Fedoseev says.

In their new study, the team worked with more than 10,000 atoms, which they cooled to microkelvin temperatures. They used an array of laser beams to arrange the frozen atoms into an evenly spaced, crystal-like lattice. In this arrangement, each atom is far enough from its neighbors that it can effectively be treated as a single, isolated atom. And 10,000 such identical atoms produce a signal that is far easier to detect than that of a single atom or two.

The group reasoned that with this arrangement, they might shine a weak beam of light through the atoms and observe how a single photon scatters off two adjacent atoms, as a wave or a particle. This would be similar to how, in the original double-slit experiment, light passes through two slits.

“What we have done can be regarded as a new variant to the double-slit experiment,” Ketterle says. “These single atoms are like the smallest slits you could possibly build.”

Tuning fuzz

Working at the level of single photons required repeating the experiment many times and using an ultrasensitive detector to record the pattern of light scattered off the atoms. From the intensity of the detected light, the researchers could directly infer whether the light behaved as a particle or a wave.

They were particularly interested in the situation where half the photons they sent in behaved as waves, and half behaved as particles. They achieved this by using a method to tune the probability that a photon will appear as a wave versus a particle, by adjusting an atom’s “fuzziness,” or the certainty of its location. In their experiment, each of the 10,000 atoms is held in place by laser light that can be adjusted to tighten or loosen the light’s hold. The more loosely an atom is held, the fuzzier, or more “spatially extensive,” it appears. The fuzzier atom rustles more easily and records the path of the photon. Therefore, in tuning up an atom’s fuzziness, researchers can increase the probability that a photon will exhibit particle-like behavior. Their observations were in full agreement with the theoretical description.
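One common way to model this trade-off is a Debye-Waller-style factor, in which the coherent (wave-like) fraction of scattered light falls off with the atom’s positional spread. The exact convention and all the numbers below are illustrative assumptions, not values from the paper:

```python
import math

def coherent_fraction(delta_k, x_rms):
    """Share of scattered light that stays wave-like (interferes) for an
    atom with rms position spread x_rms, given a momentum-transfer scale.

    A fuzzier atom (larger x_rms) records the photon's path more easily,
    so less of the light interferes and more behaves like a particle.
    """
    return math.exp(-(delta_k * x_rms) ** 2)

dk = 2 * math.pi / 589e-9   # photon momentum-transfer scale (illustrative)
tight = coherent_fraction(dk, 10e-9)   # tightly held atom: mostly wave
fuzzy = coherent_fraction(dk, 80e-9)   # loosely held atom: mostly particle
```

Loosening the trap raises `x_rms`, smoothly tuning the light from wave-like toward particle-like, which matches the qualitative behavior the team describes.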

Springs away

In their experiment, the group tested Einstein’s idea about how to detect the path of the photon. Conceptually, if each slit were cut into an extremely thin sheet of paper that was suspended in the air by a spring, a photon passing through one slit should shake the corresponding spring by a certain degree that would be a signal of the photon’s particle nature. In previous realizations of the double-slit experiment, physicists have incorporated such a spring-like ingredient, and the spring played a major role in describing the photon’s dual nature.

But Ketterle and his colleagues were able to perform the experiment without the proverbial springs. The team’s cloud of atoms is initially held in place by laser light, similar to Einstein’s conception of a slit suspended by a spring. The researchers reasoned that if they were to do away with their “spring,” and observe exactly the same phenomenon, then it would show that the spring has no effect on a photon’s wave/particle duality.

This, too, was what they found. Over multiple runs, they turned off the spring-like laser holding the atoms in place and then quickly took a measurement within a millionth of a second, before the atoms became fuzzier and eventually fell under gravity. In this tiny window, the atoms were effectively floating in free space. In this spring-free scenario, the team observed the same phenomenon: A photon’s wave and particle nature could not be observed simultaneously.

“In many descriptions, the springs play a major role. But we show, no, the springs do not matter here; what matters is only the fuzziness of the atoms,” Fedoseev says. “Therefore, one has to use a more profound description, which uses quantum correlations between photons and atoms.”

The researchers note that the year 2025 has been declared by the United Nations as the International Year of Quantum Science and Technology, celebrating the formulation of quantum mechanics 100 years ago. The discussion between Bohr and Einstein about the double-slit experiment took place only two years later.

“It’s a wonderful coincidence that we could help clarify this historic controversy in the same year we celebrate quantum physics,” says co-author Lee.

This work was supported, in part, by the National Science Foundation, the U.S. Department of Defense, and the Gordon and Betty Moore Foundation.


Astronomers discover star-shredding black holes hiding in dusty galaxies

Unlike active galaxies that constantly pull in surrounding material, these black holes lie dormant, waking briefly to feast on a passing star.


Astronomers at MIT, Columbia University, and elsewhere have used NASA’s James Webb Space Telescope (JWST) to peer through the dust of nearby galaxies and into the aftermath of a black hole’s stellar feast.

In a study appearing today in Astrophysical Journal Letters, the researchers report that for the first time, JWST has observed several tidal disruption events — instances when a galaxy’s central black hole draws in a nearby star and whips up tidal forces that tear the star to shreds, giving off an enormous burst of energy in the process.

Scientists have observed about 100 tidal disruption events (TDEs) since the 1990s, mostly as X-ray or optical light that flashes across relatively dust-free galaxies. But as MIT researchers recently reported, there may be many more star-shredding events in the universe that are “hiding” in dustier, gas-veiled galaxies.

In their previous work, the team found that most of the X-ray and optical light that a TDE gives off can be obscured by a galaxy’s dust, and therefore can go unseen by traditional X-ray and optical telescopes. But that same burst of light can heat up the surrounding dust and generate a new signal, in the form of infrared light.

Now, the same researchers have used JWST — the world’s most powerful infrared detector — to study signals from four dusty galaxies where they suspect tidal disruption events have occurred. Within the dust, JWST detected clear fingerprints of black hole accretion, a process by which material, such as stellar debris, circles and eventually falls into a black hole. The telescope also detected patterns that are strikingly different from the dust that surrounds active galaxies, where the central black hole is constantly pulling in surrounding material.

Together, the observations confirm that a tidal disruption event did indeed occur in each of the four galaxies. What’s more, the researchers conclude that the four events were products of not active black holes but rather dormant ones, which experienced little to no activity until a star happened to pass by.

The new results highlight JWST’s potential to study in detail otherwise hidden tidal disruption events. They are also helping scientists to reveal key differences in the environments around active versus dormant black holes.

“These are the first JWST observations of tidal disruption events, and they look nothing like what we’ve ever seen before,” says lead author Megan Masterson, a graduate student in MIT’s Kavli Institute for Astrophysics and Space Research. “We’ve learned these are indeed powered by black hole accretion, and they don’t look like environments around normal active black holes. The fact that we’re now able to study what that dormant black hole environment actually looks like is an exciting aspect.”

The study’s co-authors include Christos Panagiotou, Erin Kara, and Anna-Christina Eilers of MIT, along with Kishalay De of Columbia University and collaborators from multiple other institutions.

Seeing the light

The new study expands on the team’s previous work using another infrared detector — NASA’s Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) mission. Using an algorithm developed by co-author Kishalay De of Columbia University, the team searched through a decade’s worth of data from the telescope, looking for infrared “transients,” or short peaks of infrared activity from otherwise quiet galaxies that could be signals of a black hole briefly waking up and feasting on a passing star. That search unearthed about a dozen signals that the group determined were likely produced by a tidal disruption event.

“With that study, we found these 12 sources that look just like TDEs,” Masterson says. “We made a lot of arguments about how the signals were very energetic, and the galaxies didn’t look like they were active before, so the signals must have been from a sudden TDE. But except for these little pieces, there was no direct evidence.”

With the much more sensitive capabilities of JWST, the researchers hoped to discern key “spectral lines,” or infrared light at specific wavelengths, that would be clear fingerprints of conditions associated with a tidal disruption event.

“With NEOWISE, it’s as if our eyes could only see red light or blue light, whereas with JWST, we’re seeing the full rainbow,” Masterson says.

A bona fide signal

In their new work, the group looked specifically for a peak in infrared light that could only be produced by black hole accretion — a process by which material is drawn toward a black hole in a circulating disk of gas. This disk produces radiation so intense that it can kick electrons out of individual atoms. In particular, such accretion processes can blast several electrons out of atoms of neon, and the resulting ion can transition, releasing infrared radiation at a very specific wavelength that JWST can detect.

“There’s nothing else in the universe that can excite this gas to these energies, except for black hole accretion,” Masterson says.

The researchers searched for this smoking-gun signal in four of the 12 TDE candidates they previously identified. The four signals include: the closest tidal disruption event detected to date, located in a galaxy some 130 million light years away; a TDE that also exhibits a burst of X-ray light; a signal that may have been produced by gas circulating at incredibly high speeds around a central black hole; and a signal that also included an optical flash, which scientists had previously suspected to be a supernova, or the collapse of a dying star, rather than a tidal disruption event.

“These four signals were as close as we could get to a sure thing,” Masterson says. “But the JWST data helped us say definitively these are bona fide TDEs.”

When the team pointed JWST toward the galaxies of each of the four signals, in a program designed by De, they observed that the telltale spectral lines showed up in all four sources. These measurements confirmed that black hole accretion occurred in all four galaxies. But the question remained: Was this accretion a temporary feature, triggered by a tidal disruption and a black hole that briefly woke up to feast on a passing star? Or was this accretion a more permanent trait of “active” black holes that are always on? In the case of the latter, it would be less likely that a tidal disruption event had occurred.

To differentiate between the two possibilities, the team used the JWST data to detect another wavelength of infrared light, which indicates the presence of silicates, or dust in the galaxy. They then mapped this dust in each of the four galaxies and compared the patterns to those of active galaxies, which are known to harbor clumpy, donut-shaped dust clouds around the central black hole. Masterson observed that all four sources showed very different patterns compared to typical active galaxies, suggesting that the black hole at the center of each of the galaxies is not normally active, but dormant. If an accretion disk formed around such a black hole, the researchers conclude that it must have been a result of a tidal disruption event.

“Together, these observations say the only thing these flares could be are TDEs,” Masterson says.

She and her collaborators plan to uncover many more previously hidden tidal disruption events, with NEOWISE, JWST, and other infrared telescopes. With enough detections, they say TDEs can serve as effective probes of black hole properties. For instance, how much of a star is shredded, and how fast its debris is accreted and consumed, can reveal fundamental properties of a black hole, such as how massive it is and how fast it spins.

“The actual process of a black hole gobbling down all that stellar material takes a long time,” Masterson says. “It’s not an instantaneous process. And hopefully we can start to probe how long that process takes and what that environment looks like. No one knows because we just started discovering and studying these events.”

This research was supported, in part, by NASA.