General news from MIT (Massachusetts Institute of Technology)

Here you will find recent daily general news from MIT (Massachusetts Institute of Technology).

MIT News
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
MIT researchers develop AI tool to improve flu vaccine strain selection

VaxSeer uses machine learning to predict virus evolution and antigenicity, aiming to make vaccine selection more accurate and less reliant on guesswork.


Every year, global health experts are faced with a high-stakes decision: Which influenza strains should go into the next seasonal vaccine? The choice must be made months in advance, long before flu season even begins, and it can often feel like a race against the clock. If the selected strains match those that circulate, the vaccine will likely be highly effective. But if the prediction is off, protection can drop significantly, leading to (potentially preventable) illness and strain on health care systems.

This challenge became even more familiar to scientists during the Covid-19 pandemic, when, time and again, new variants emerged just as vaccines were being rolled out. Influenza behaves like a similarly rowdy cousin, mutating constantly and unpredictably. That makes it hard to stay ahead, and therefore harder to design vaccines that remain protective.

To reduce this uncertainty, scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Abdul Latif Jameel Clinic for Machine Learning in Health set out to make vaccine selection more accurate and less reliant on guesswork. They created an AI system called VaxSeer, designed to predict dominant flu strains and identify the most protective vaccine candidates, months ahead of time. The tool uses deep learning models trained on decades of viral sequences and lab test results to simulate how the flu virus might evolve and how the vaccines will respond.

Traditional evolution models often analyze the effect of single amino acid mutations independently. “VaxSeer adopts a large protein language model to learn the relationship between dominance and the combinatorial effects of mutations,” explains Wenxian Shi, a PhD student in MIT’s Department of Electrical Engineering and Computer Science, researcher at CSAIL, and lead author of a new paper on the work. “Unlike existing protein language models that assume a static distribution of viral variants, we model dynamic dominance shifts, making it better suited for rapidly evolving viruses like influenza.”

An open-access report on the study was published today in Nature Medicine.

The future of flu

VaxSeer has two core prediction engines: one that estimates how likely each viral strain is to spread (dominance), and another that estimates how effectively a vaccine will neutralize that strain (antigenicity). Together, they produce a predicted coverage score: a forward-looking measure of how well a given vaccine is likely to perform against future viruses.

The score ranges from negative infinity to 0. The closer the score is to 0, the better the antigenic match between the vaccine strains and the circulating viruses. (You can think of it as the negative of a kind of “distance.”)
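To make the scoring idea concrete, here is a minimal sketch in Python of how a dominance-weighted coverage score of this general shape could be computed. It is not VaxSeer’s actual formulation; the function names, the log-based “distance,” and the toy numbers are all hypothetical.

```python
import math

def predicted_coverage(vaccine, circulating_strains, dominance, antigenicity):
    """Illustrative coverage score: a dominance-weighted log of predicted
    antigenic match, so a perfect match against every strain approaches 0
    and poorer matches become increasingly negative.

    dominance[strain]    -- predicted probability that the strain dominates (sums to 1)
    antigenicity[(v, s)] -- predicted neutralization of strain s by vaccine v,
                            scaled to (0, 1], where 1 is a perfect match
    """
    score = 0.0
    for strain in circulating_strains:
        match = antigenicity[(vaccine, strain)]
        score += dominance[strain] * math.log(match)  # log(1) = 0, log(<1) < 0
    return score

# Toy usage: two candidate circulating strains, one vaccine candidate
dominance = {"H3N2-a": 0.7, "H3N2-b": 0.3}
antigenicity = {("vax1", "H3N2-a"): 0.9, ("vax1", "H3N2-b"): 0.4}
print(predicted_coverage("vax1", list(dominance), dominance, antigenicity))
```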

In a 10-year retrospective study, the researchers evaluated VaxSeer’s recommendations against those made by the World Health Organization (WHO) for two major flu subtypes: A/H3N2 and A/H1N1. For A/H3N2, VaxSeer’s choices outperformed the WHO’s in nine out of 10 seasons, based on retrospective empirical coverage scores (a surrogate for vaccine effectiveness, calculated from observed dominance in past seasons and experimental HI test results). The team used this metric to evaluate vaccine selections because true effectiveness data are available only for vaccines actually given to the population.

For A/H1N1, it outperformed or matched the WHO in six out of 10 seasons. In one notable case, for the 2016 flu season, VaxSeer identified a strain that wasn’t chosen by the WHO until the following year. The model’s predictions also showed strong correlation with real-world vaccine effectiveness estimates, as reported by the CDC, Canada’s Sentinel Practitioner Surveillance Network, and Europe’s I-MOVE program. VaxSeer’s predicted coverage scores aligned closely with public health data on flu-related illnesses and medical visits prevented by vaccination.

So how exactly does VaxSeer make sense of all these data? Intuitively, the model first estimates how rapidly a viral strain spreads over time using a protein language model, and then determines its dominance by accounting for competition among different strains.

Once the model has calculated its insights, they’re plugged into a mathematical framework based on ordinary differential equations to simulate viral spread over time. For antigenicity, the system estimates how well a given vaccine strain will perform in a common lab test called the hemagglutination inhibition (HI) assay. This assay measures how effectively antibodies can inhibit the virus from binding to human red blood cells, a widely used proxy for antigenic match.
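The article does not spell out the equations, but the competition idea can be illustrated with a replicator-style ordinary differential equation, in which each strain’s share grows or shrinks according to its fitness relative to the population average. The sketch below is only an illustration under that assumption; the growth rates, names, and time steps are made up, not taken from VaxSeer.

```python
import numpy as np

def simulate_dominance(fitness, shares0, dt=0.1, steps=500):
    """Replicator-style ODE: d(share_i)/dt = share_i * (fitness_i - mean fitness).

    fitness -- per-strain growth rates (in VaxSeer's setting, these would come
               from the protein language model)
    shares0 -- initial dominance fractions, summing to 1
    Returns the trajectory of dominance fractions over time (Euler integration).
    """
    fitness = np.asarray(fitness, dtype=float)
    shares = np.asarray(shares0, dtype=float)
    trajectory = [shares.copy()]
    for _ in range(steps):
        mean_fitness = float(np.dot(shares, fitness))
        shares = shares + dt * shares * (fitness - mean_fitness)
        shares = np.clip(shares, 0.0, None)
        shares /= shares.sum()  # keep the fractions normalized
        trajectory.append(shares.copy())
    return np.array(trajectory)

# Toy usage: a fitter variant gradually displaces the incumbent strain
traj = simulate_dominance(fitness=[1.0, 1.3], shares0=[0.95, 0.05])
print(traj[-1])  # final predicted dominance fractions
```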

Outpacing evolution

“By modeling how viruses evolve and how vaccines interact with them, AI tools like VaxSeer could help health officials make better, faster decisions — and stay one step ahead in the race between infection and immunity,” says Shi. 

VaxSeer currently focuses only on the flu virus’s HA (hemagglutinin) protein, the major antigen of influenza. Future versions could incorporate other proteins like NA (neuraminidase), as well as factors like immune history, manufacturing constraints, or dosage levels. Applying the system to other viruses would also require large, high-quality datasets that track both viral evolution and immune responses — data that aren’t always publicly available. The team, however, is currently working on methods that can predict viral evolution in low-data regimes by building on relationships between viral families.

“Given the speed of viral evolution, current therapeutic development often lags behind. VaxSeer is our attempt to catch up,” says Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, AI lead of Jameel Clinic, and CSAIL principal investigator. 

“This paper is impressive, but what excites me perhaps even more is the team’s ongoing work on predicting viral evolution in low-data settings,” says Assistant Professor Jon Stokes of the Department of Biochemistry and Biomedical Sciences at McMaster University in Hamilton, Ontario. “The implications go far beyond influenza. Imagine being able to anticipate how antibiotic-resistant bacteria or drug-resistant cancers might evolve, both of which can adapt rapidly. This kind of predictive modeling opens up a powerful new way of thinking about how diseases change, giving us the opportunity to stay one step ahead and design clinical interventions before escape becomes a major problem.”

Shi and Barzilay wrote the paper with MIT CSAIL postdoc Jeremy Wohlwend ’16, MEng ’17, PhD ’25 and recent CSAIL affiliate Menghua Wu ’19, MEng ’20, PhD ’25. Their work was supported, in part, by the U.S. Defense Threat Reduction Agency and MIT Jameel Clinic.


New self-assembling material could be the key to recyclable EV batteries

MIT researchers designed an electrolyte that can break apart at the end of a battery’s life, allowing for easier recycling of components.


Today’s electric vehicle boom is tomorrow’s mountain of electronic waste. And while myriad efforts are underway to improve battery recycling, many EV batteries still end up in landfills.

A research team from MIT wants to help change that with a new kind of self-assembling battery material that quickly breaks apart when submerged in a simple organic liquid. In a new paper published in Nature Chemistry, the researchers showed the material can work as the electrolyte in a functioning, solid-state battery cell and then revert back to its original molecular components in minutes.

The approach offers an alternative to shredding the battery into a mixed, hard-to-recycle mass. Because the electrolyte serves as the battery’s connecting layer, the entire battery disassembles when the new material reverts to its original molecular form, which speeds up the recycling process.

“So far in the battery industry, we’ve focused on high-performing materials and designs, and only later tried to figure out how to recycle batteries made with complex structures and hard-to-recycle materials,” says the paper’s first author Yukio Cho PhD ’23. “Our approach is to start with easily recyclable materials and figure out how to make them battery-compatible. Designing batteries for recyclability from the beginning is a new approach.”

Joining Cho on the paper are PhD candidate Cole Fincher, Ty Christoff-Tempesta PhD ’22, Kyocera Professor of Ceramics Yet-Ming Chiang, Visiting Associate Professor Julia Ortony, Xiaobing Zuo, and Guillaume Lamour.

Better batteries

There’s a scene in one of the “Harry Potter” films where Professor Dumbledore cleans a dilapidated home with the flick of the wrist and a spell. Cho says that image stuck with him as a kid. (What better way to clean your room?) When he saw a talk by Ortony on engineering molecules so that they could assemble into complex structures and then revert back to their original form, he wondered if it could be used to make battery recycling work like magic.

That would be a paradigm shift for the battery industry. Today, batteries require harsh chemicals, high heat, and complex processing to recycle. There are three main parts of a battery: the positively charged cathode, the negatively charged anode, and the electrolyte that shuttles lithium ions between them. The electrolytes in most lithium-ion batteries are highly flammable and degrade over time into toxic byproducts that require specialized handling.

To simplify the recycling process, the researchers decided to make a more sustainable electrolyte. For that, they turned to a class of molecules that self-assemble in water, named aramid amphiphiles (AAs), whose chemical structures and stability mimic those of Kevlar. The researchers further designed the AAs to contain polyethylene glycol (PEG), which can conduct lithium ions, on one end of each molecule. When the molecules are exposed to water, they spontaneously form nanoribbons with ion-conducting PEG surfaces and bases that imitate the robustness of Kevlar through tight hydrogen bonding. The result is a mechanically stable nanoribbon structure that conducts ions across its surface.

“The material is composed of two parts,” Cho explains. “The first part is this flexible chain that gives us a nest, or host, for lithium ions to jump around. The second part is this strong organic material component that is used in the Kevlar, which is a bulletproof material. Those make the whole structure stable.”

When added to water, the molecules self-assemble into millions of nanoribbons that can be hot-pressed into a solid-state material.

“Within five minutes of being added to water, the solution becomes gel-like, indicating there are so many nanofibers formed in the liquid that they start to entangle each other,” Cho says. “What’s exciting is we can make this material at scale because of the self-assembly behavior.”

The team tested the material’s strength and toughness, finding it could endure the stresses associated with making and running the battery. They also constructed a solid-state battery cell that used lithium iron phosphate for the cathode and lithium titanium oxide as the anode, both common materials in today’s batteries. The nanoribbons moved lithium ions successfully between the electrodes, but a side-effect known as polarization limited the movement of lithium ions into the battery’s electrodes during fast bouts of charging and discharging, hampering its performance compared to today’s gold-standard commercial batteries.

“The lithium ions moved along the nanofiber all right, but getting the lithium ion from the nanofibers to the metal oxide seems to be the most sluggish point of the process,” Cho says.

When they immersed the battery cell in organic solvents, the material immediately dissolved, with each part of the battery falling away for easier recycling. Cho compared the material’s reaction to cotton candy being submerged in water.

“The electrolyte holds the two battery electrodes together and provides the lithium-ion pathways,” Cho says. “So, when you want to recycle the battery, the entire electrolyte layer can fall off naturally and you can recycle the electrodes separately.”

Validating a new approach

Cho says the material is a proof of concept that demonstrates the recycle-first approach.

“We don’t want to say we solved all the problems with this material,” Cho says. “Our battery performance was not fantastic because we used only this material as the entire electrolyte for the paper, but what we’re picturing is using this material as one layer in the battery electrolyte. It doesn’t have to be the entire electrolyte to kick off the recycling process.”

Cho also sees a lot of room for optimizing the material’s performance with further experiments.

Now, the researchers are exploring ways to integrate these kinds of materials into existing battery designs as well as implementing the ideas into new battery chemistries.

“It’s very challenging to convince existing vendors to do something very differently,” Cho says. “But with new battery materials that may come out in five or 10 years, it could be easier to integrate this into new designs in the beginning.”

Cho also believes the approach could help reshore lithium supplies by reusing materials from batteries that are already in the U.S.

“People are starting to realize how important this is,” Cho says. “If we can start to recycle lithium-ion batteries from battery waste at scale, it’ll have the same effect as opening lithium mines in the U.S. Also, each battery requires a certain amount of lithium, so extrapolating out the growth of electric vehicles, we need to reuse this material to avoid massive lithium price spikes.”

The work was supported, in part, by the National Science Foundation and the U.S. Department of Energy.


Why countries trade with each other while fighting

Mariya Grinberg’s new book, “Trade in War,” examines the curious phenomenon of economic trade during military conflict.


In World War II, Britain was fighting for its survival against German aerial bombardment. Yet Britain was importing dyes from Germany at the same time. This sounds curious, to put it mildly. How can two countries at war with each other also be trading goods?

Examples of this abound, actually. Britain also traded with its enemies for almost all of World War I. India and Pakistan conducted trade with each other during the First Kashmir War, from 1947 to 1949, and during the India-Pakistan War of 1965. Croatia and then-Yugoslavia traded with each other while fighting in 1992.

“States do in fact trade with their enemies during wars,” says MIT political scientist Mariya Grinberg. “There is a lot of variation in which products get traded, and in which wars, and there are differences in how long trade lasts into a war. But it does happen.”

Indeed, as Grinberg has found, state leaders tend to calculate whether trade can give them an advantage by boosting their own economies while not supplying their enemies with anything too useful in the near term.

“At its heart, wartime trade is all about the tradeoff between military benefits and economic costs,” Grinberg says. “Severing trade denies the enemy access to your products that could increase their military capabilities, but it also incurs a cost to you because you’re losing trade and neutral states could take over your long-term market share.” Therefore, many countries try trading with their wartime foes.

Grinberg explores this topic in a groundbreaking new book, the first one on the subject, “Trade in War: Economic Cooperation Across Enemy Lines,” published this month by Cornell University Press. It is also the first book by Grinberg, an assistant professor of political science at MIT.

Calculating time and utility

“Trade in War” has its roots in research Grinberg started as a doctoral student at the University of Chicago, where she noticed that wartime trade was a phenomenon not yet incorporated into theories of state behavior.

Grinberg wanted to learn about it comprehensively, so, as she quips, “I did what academics usually do: I went to the work of historians and said, ‘Historians, what have you got for me?’”

Modern wartime trading began during the Crimean War, which pitted Russia against France, Britain, the Ottoman Empire, and other allies. Before the war’s start in 1854, France had paid for many Russian goods that could not be shipped because ice in the Baltic Sea was late to thaw. To rescue these purchases, France then persuaded Britain and Russia to adopt “neutral rights,” codified in the 1856 Declaration of Paris, which formalized the idea that goods in wartime could be shipped via neutral parties (sometimes acting as intermediaries for warring countries).

“This mental image that everyone has, that we don’t trade with our enemies during war, is actually an artifact of the world without any neutral rights,” Grinberg says. “Once we develop neutral rights, all bets are off, and now we have wartime trade.”

Overall, Grinberg’s systematic analysis of wartime trade shows that it needs to be understood on the level of particular goods. During wartime, states calculate how much it would hurt their own economies to stop trade of certain items; how useful specific products would be to enemies during war, and in what time frame; and how long a war is going to last.

“There are two conditions under which we can see wartime trade,” Grinberg says. “Trade is permitted when it does not help the enemy win the war, and it’s permitted when ending it would damage the state’s long-term economic security, beyond the current war.”

Therefore a state might export diamonds, knowing an adversary would need to resell such products over time to finance any military activities. Conversely, states will not trade products that can quickly convert into military use.

“The tradeoff is not the same for all products,” Grinberg says. “All products can be converted into something of military utility, but they vary in how long that takes. If I’m expecting to fight a short war, things that take a long time for my opponent to convert into military capabilities won’t help them win the current war, so they’re safer to trade.” Moreover, she adds, “States tend to prioritize maintaining their long-term economic stability, as long as the stakes don’t hit too close to home.”
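One way to see the logic is to formalize the tradeoff as a simple decision rule: severing trade is worthwhile only if the military benefit of denial (which depends on whether the enemy can convert the product before the war is expected to end) outweighs the long-term economic cost. The sketch below is an illustrative reading of the argument, not Grinberg’s model; every field, weight, and number is hypothetical.

```python
def permit_trade(product, expected_war_months):
    """Illustrative decision rule for wartime trade in a single product.

    Severing trade is worthwhile only when the military benefit (denying the
    enemy something it can convert to military use before the war ends)
    outweighs the economic cost (lost long-term market share).
    """
    helps_enemy_this_war = product["conversion_time_months"] <= expected_war_months
    military_benefit_of_severing = product["military_value"] if helps_enemy_this_war else 0.0
    economic_cost_of_severing = product["long_term_market_loss"]
    return economic_cost_of_severing >= military_benefit_of_severing

# Toy usage: a product that takes a year to convert, in a war expected to last three months
dyes = {"conversion_time_months": 12, "military_value": 0.6, "long_term_market_loss": 0.3}
print(permit_trade(dyes, expected_war_months=3))  # True: it cannot help the enemy in time
```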

This calculus helps explain some seemingly inexplicable wartime trade decisions. In 1917, three years into World War I, Germany started trading dyes to Britain. As it happens, dyes have military uses, for example as coatings for equipment. And World War I, infamously, was lasting far beyond initial expectations. But as of 1917, German planners thought the introduction of unrestricted submarine warfare would bring the war to a halt in their favor within a few months, so they approved the dye exports. That calculation was wrong, but it fits the framework Grinberg has developed.

States: Usually wrong about the length of wars

“Trade in War” has received praise from other scholars in the field. Michael Mastanduno of Dartmouth College has said the book “is a masterful contribution to our understanding of how states manage trade-offs across economics and security in foreign policy.”

For her part, Grinberg notes that her work holds multiple implications for international relations — one being that trade relationships do not prevent hostilities from unfolding, as some have theorized.

“We can’t expect even strong trade relations to deter a conflict,” Grinberg says. “On the other hand, when we learn our assumptions about the world are not necessarily correct, we can try to find different levers to deter war.”

Grinberg has also observed that states are not good, by any measure, at projecting how long they will be at war.

“States very infrequently get forecasts about the length of war right,” Grinberg says. That fact has formed the basis of a second, ongoing Grinberg book project.

“Now I’m studying why states go to war unprepared, why they think their wars are going to end quickly,” Grinberg says. “If people just read history, they will learn almost all of human history works against this assumption.”

At the same time, Grinberg thinks there is much more that scholars could learn specifically about trade and economic relations among warring countries — and hopes her book will spur additional work on the subject.

“I’m almost certain that I’ve only just begun to scratch the surface with this book,” she says. 


Locally produced proteins help mitochondria function

Researchers developed an approach to study where proteins get made, and characterized proteins produced near mitochondria, gaining potential insights into mitochondrial function and disease.


Our cells produce a variety of proteins, each with a specific role that, in many cases, means that they need to be in a particular part of the cell where that role is needed. One of the ways that cells ensure certain proteins end up in the right location at the right time is through localized translation, a process that ensures that proteins are made — or translated — close to where they will be needed. MIT professor of biology and Whitehead Institute for Biomedical Research member Jonathan Weissman and colleagues have studied localized translation in order to understand how it affects cell functions and allows cells to quickly respond to changing conditions.

Now, Weissman, who is also a Howard Hughes Medical Institute Investigator, and postdoc in his lab Jingchuan Luo have expanded our knowledge of localized translation at mitochondria, structures that generate energy for the cell. In an open-access paper published today in Cell, they share a new tool, LOCL-TL, for studying localized translation in close detail, and describe the discoveries it enabled about two classes of proteins that are locally translated at mitochondria.

The importance of localized translation at mitochondria relates to their unusual origin. Mitochondria were once bacteria that lived within our ancestors’ cells. Over time, the bacteria lost their autonomy and became part of the larger cells, which included migrating most of their genes into the larger cell’s genome in the nucleus. Cells evolved processes to ensure that proteins needed by mitochondria that are encoded in genes in the larger cell’s genome get transported to the mitochondria. Mitochondria retain a few genes in their own genome, so production of proteins from the mitochondrial genome and that of the larger cell’s genome must be coordinated to avoid mismatched production of mitochondrial parts. Localized translation may help cells to manage the interplay between mitochondrial and nuclear protein production — among other purposes.

How to detect local protein production

For a protein to be made, genetic code stored in DNA is read into RNA, and then the RNA is read or translated by a ribosome, a cellular machine that builds a protein according to the RNA code. Weissman’s lab previously developed a method to study localized translation by tagging ribosomes near a structure of interest, and then capturing the tagged ribosomes in action and observing the proteins they are making. This approach, called proximity-specific ribosome profiling, allows researchers to see what proteins are being made where in the cell. The challenge that Luo faced was how to tweak this method to capture only ribosomes at work near mitochondria.

Ribosomes work quickly, so a ribosome that gets tagged while making a protein at the mitochondria can move on to making other proteins elsewhere in the cell in a matter of minutes. The only way researchers can guarantee that the ribosomes they capture are still working on proteins made near the mitochondria is if the experiment happens very quickly.

Weissman and colleagues had previously solved this time sensitivity problem in yeast cells with a ribosome-tagging tool called BirA that is activated by the presence of the molecule biotin. BirA is fused to the cellular structure of interest, and tags ribosomes it can touch — but only once activated. Researchers keep the cell depleted of biotin until they are ready to capture the ribosomes, to limit the time when tagging occurs. However, this approach does not work with mitochondria in mammalian cells because they need biotin to function normally, so it cannot be depleted.

Luo and Weissman adapted the existing tool to respond to blue light instead of biotin. The new tool, LOV-BirA, is fused to the mitochondrion’s outer membrane. Cells are kept in the dark until the researchers are ready. Then they expose the cells to blue light, activating LOV-BirA to tag ribosomes. They give it a few minutes and then quickly extract the ribosomes. This approach proved very accurate at capturing only ribosomes working at mitochondria.

The researchers then used a method originally developed by the Weissman lab to extract the sections of RNA inside of the ribosomes. This allows them to see exactly how far along in the process of making a protein the ribosome is when captured, which can reveal whether the entire protein is made at the mitochondria, or whether it is partly produced elsewhere and only gets completed at the mitochondria.
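As a rough illustration of what that granularity looks like in the data, the position of a captured ribosome’s footprint within a coding sequence can be converted into a fraction of the protein already synthesized. The sketch below assumes hypothetical transcript coordinates and is not the lab’s actual analysis pipeline.

```python
def translation_progress(footprint_start_nt, cds_start_nt, cds_end_nt):
    """Fraction of a protein already made when the ribosome was captured.

    Coordinates are hypothetical transcript positions in nucleotides;
    three nucleotides correspond to one amino acid (codon).
    """
    cds_length = cds_end_nt - cds_start_nt
    position_in_cds = footprint_start_nt - cds_start_nt
    return max(0.0, min(1.0, position_in_cds / cds_length))

# A footprint ~60 percent of the way through the coding sequence suggests the
# first ~60 percent of the protein was already built before tagging occurred.
print(translation_progress(footprint_start_nt=1300, cds_start_nt=100, cds_end_nt=2100))
```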

“One advantage of our tool is the granularity it provides,” Luo says. “Being able to see what section of the protein is locally translated helps us understand more about how localized translation is regulated, which can then allow us to understand its dysregulation in disease and to control localized translation in future studies.”

Two protein groups are made at mitochondria

Using these approaches, the researchers found that about 20 percent of the genes needed in mitochondria that are located in the main cellular genome are locally translated at mitochondria. These proteins can be divided into two distinct groups with different evolutionary histories and mechanisms for localized translation.

One group consists of relatively long proteins, each containing more than 400 amino acids or protein building blocks. These proteins tend to be of bacterial origin — present in the ancestor of mitochondria — and they are locally translated in both mammalian and yeast cells, suggesting that their localized translation has been maintained through a long evolutionary history.

Like many mitochondrial proteins encoded in the nucleus, these proteins contain a mitochondrial targeting sequence (MTS), a ZIP code that tells the cell where to bring them. The researchers discovered that most proteins containing an MTS also contain a nearby inhibitory sequence that prevents transportation until they are done being made. This group of locally translated proteins lacks the inhibitory sequence, so they are brought to the mitochondria during their production.

Production of these longer proteins begins anywhere in the cell, and then after approximately the first 250 amino acids are made, they get transported to the mitochondria. While the rest of the protein gets made, it is simultaneously fed into a channel that brings it inside the mitochondrion. This ties up the channel for a long time, limiting import of other proteins, so cells can only afford to do this simultaneous production and import for select proteins. The researchers hypothesize that these bacterial-origin proteins are given priority as an ancient mechanism to ensure that they are accurately produced and placed within mitochondria.

The second locally translated group consists of short proteins, each less than 200 amino acids long. These proteins are more recently evolved, and correspondingly, the researchers found that the mechanism for their localized translation is not shared by yeast. Their mitochondrial recruitment happens at the RNA level. Two sequences within regulatory sections of each RNA molecule that do not encode the final protein instead code for the cell’s machinery to recruit the RNAs to the mitochondria.

The researchers searched for molecules that might be involved in this recruitment, and identified the RNA binding protein AKAP1, which exists at mitochondria. When they eliminated AKAP1, the short proteins were translated indiscriminately around the cell. This provided an opportunity to learn more about the effects of localized translation, by seeing what happens in its absence. When the short proteins were not locally translated, this led to the loss of various mitochondrial proteins, including those involved in oxidative phosphorylation, our cells’ main energy generation pathway.

In future research, Weissman and Luo will delve deeper into how localized translation affects mitochondrial function and dysfunction in disease. The researchers also intend to use LOCL-TL to study localized translation in other cellular processes, including in relation to embryonic development, neural plasticity, and disease.

“This approach should be broadly applicable to different cellular structures and cell types, providing many opportunities to understand how localized translation contributes to biological processes,” Weissman says. “We’re particularly interested in what we can learn about the roles it may play in diseases including neurodegeneration, cardiovascular diseases, and cancers.”


SHASS announces appointments of new program and section heads for 2025-26

Sandy Alexandre, Manduhai Buyandelger, and Eden Medina take on new leadership positions.


The MIT School of Humanities, Arts, and Social Sciences announced leadership changes in three of its academic units for the 2025-26 academic year.

“We have an excellent cohort of leaders coming in,” says Agustín Rayo, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences. “I very much look forward to working with them and welcoming them into the school's leadership team.”

Sandy Alexandre will serve as head of MIT Literature. Alexandre is an associate professor of literature and served as co-head of the section in 2024-25. Her research spans Black American literature and culture from the late 19th century to the present day. Her first book, “The Properties of Violence: Claims to Ownership in Representations of Lynching,” uses the history of American lynching violence as a framework to understand matters concerning displacement, property ownership, and the American pastoral ideology in a literary context. Her work thoughtfully explores how literature envisions ecologies of people, places, and objects as recurring echoes of racial violence, resonating across the long arc of U.S. history. She earned a bachelor’s degree in English language and literature from Dartmouth College and a master’s and PhD in English from the University of Virginia.

Manduhai Buyandelger will serve as director of the Program in Women’s and Gender Studies. A professor of anthropology, Buyandelger’s research seeks to find solutions for achieving more-integrated (and less-violent) lives for humans and non-humans by examining the politics of multi-species care and exploitation, urbanization, and how diverse material and spiritual realities interact and shape the experiences of different beings. By examining urban multi-species coexistence in different places in Mongolia, the United States, Japan, and elsewhere, her study probes possibilities for co-cultivating an integrated multi-species existence. She is also developing an anthro-engineering project with the MIT Department of Nuclear Science and Engineering (NSE) to explore pathways to decarbonization in Mongolia by examining user-centric design and responding to political and cultural constraints on clean-energy issues. She offers a transdisciplinary course with NSE, 21A.S01 (Anthro-Engineering: Decarbonization at the Million Person Scale), in collaboration with her colleagues in Mongolia’s capital, Ulaanbaatar. She has written two books on religion, gender, and politics in post-socialist Mongolia: “Tragic Spirits: Shamanism, Gender, and Memory in Contemporary Mongolia” (University of Chicago Press, 2013) and “A Thousand Steps to the Parliament: Constructing Electable Women in Mongolia” (University of Chicago Press, 2022). Her essays have appeared in American Ethnologist, Journal of Royal Anthropological Association, Inner Asia, and Annual Review of Anthropology. She earned a BA in literature and linguistics and an MA in philology from the National University of Mongolia, and a PhD in social anthropology from Harvard University.

Eden Medina PhD ’05 will serve as head of the Program in Science, Technology, and Society. A professor of science, technology, and society, Medina studies the relationship of science, technology, and processes of political change in Latin America. She is the author of “Cybernetic Revolutionaries: Technology and Politics in Allende's Chile” (MIT Press, 2011), which won the 2012 Edelstein Prize for best book on the history of technology and the 2012 Computer History Museum Prize for best book on the history of computing. Her co-edited volume “Beyond Imported Magic: Essays on Science, Technology, and Society in Latin America” (MIT Press, 2014) received the Amsterdamska Award from the European Society for the Study of Science and Technology (2016). In addition to her writings, Medina co-curated the exhibition “How to Design a Revolution: The Chilean Road to Design,” which opened in 2023 at the Centro Cultural La Moneda in Santiago, Chile, and is currently on display at the design museum Disseny Hub in Barcelona, Spain. She holds a PhD in the history and social study of science and technology from MIT and a master’s degree in studies of law from Yale Law School. She worked as an electrical engineer prior to starting her graduate studies.


Fikile Brushett named director of MIT chemical engineering practice school

Brushett leads one-of-its-kind program that has been a bridge between education and industry for over a century.


Fikile R. Brushett, a Ralph Landau Professor of Chemical Engineering Practice, was named director of MIT’s David H. Koch School of Chemical Engineering Practice, effective July 1. In this role, Brushett will lead one of MIT’s most innovative and distinctive educational programs.

Brushett joined the chemical engineering faculty in 2012 and has been a deeply engaged member of the department. An internationally recognized leader in the field of energy storage, his research advances the science and engineering of electrochemical technologies for a sustainable energy economy. He is particularly interested in the fundamental processes that define the performance, cost, and lifetime of present-day and next-generation electrochemical systems. In addition to his research, Brushett has served as a first-year undergraduate advisor, as a member of the department’s graduate admissions committee, and on MIT’s Committee on the Undergraduate Program.

“Fik’s scholarly excellence and broad service position him perfectly to take on this new challenge,” says Kristala L. J. Prather, the Arthur D. Little Professor and head of the Department of Chemical Engineering (ChemE). “His role as practice school director reflects not only his technical expertise, but his deep commitment to preparing students for meaningful, impactful careers. I’m confident he will lead the practice school with the same spirit of excellence and innovation that has defined the program for generations.”

Brushett succeeds T. Alan Hatton, a Ralph Landau Professor of Chemical Engineering Practice Post-Tenure, who directed the practice school for 36 years. For many, Hatton’s name is synonymous with the program. When he became director in 1989, only a handful of major chemical companies hosted stations.

“I realized that focusing on one industry segment was not sustainable and did not reflect the breadth of a chemical engineering education,” Hatton recalls. “So I worked to modernize the experience for students and have it reflect the many ways chemical engineers practice in the modern world.”

Under Hatton’s leadership, the practice school expanded globally and across industries, providing students with opportunities to work on diverse technologies in a wide range of locations. He pioneered the model of recruiting new companies each year, allowing many more firms to participate while also spreading costs across a broader sponsor base. He also introduced an intensive, hands-on project management course at MIT during Independent Activities Period, which has become a valuable complement to students’ station work and future careers.

Value for students and industry

The practice school benefits not only students, but also the companies that host them. By embedding teams directly into manufacturing plants and R&D centers, businesses gain fresh perspectives on critical technical challenges, coupled with the analytical rigor of MIT-trained problem-solvers. Many sponsors report that projects completed by practice school students have yielded measurable cost savings, process improvements, and even new opportunities for product innovation.

For manufacturing industries, where efficiency, safety, and sustainability are paramount, the program provides actionable insights that help companies strengthen competitiveness and accelerate growth. The model creates a unique partnership: students gain true real-world training, while companies benefit from MIT expertise and the creativity of the next generation of chemical engineers.

A century of hands-on learning

Founded in 1916 by MIT chemical engineering alumnus Arthur D. Little and Professor William Walker, with funding from George Eastman of Eastman Kodak, the practice school was designed to add a practical dimension to chemical engineering education. The first five sites — all in the Northeast — focused on traditional chemical industries working on dyes, abrasives, solvents, and fuels.

Today, the program remains unique in higher education. Students consult with companies worldwide across fields ranging from food and pharmaceuticals to energy and finance, tackling some of industry’s toughest challenges. More than a hundred years after its founding, the practice school continues to embody MIT’s commitment to hands-on, problem-driven learning that transforms both students and the industries they serve.

The practice school experience is part of ChemE’s MSCEP and PhD/ScDCEP programs. After completing the coursework for their program, students attend practice school stations at host company sites. A group of six to 10 students spends two months each at two stations; at each station, teams of two or three students work on month-long projects, preparing formal talks, a scope of work, and a final report for the host company. Recent stations include Evonik in Marl, Germany; AstraZeneca in Gaithersburg, Maryland; EGA in Dubai, UAE; AspenTech in Bedford, Massachusetts; and Shell Technology Center and Dimensional Energy in Houston, Texas.


New method could monitor corrosion and cracking in a nuclear reactor

By directly imaging material failure in 3D, this real-time technique could help scientists improve reactor safety and longevity.


MIT researchers have developed a technique that enables real-time, 3D monitoring of corrosion, cracking, and other material failure processes inside a nuclear reactor environment.

This could allow engineers and scientists to design safer nuclear reactors that also deliver higher performance for applications like electricity generation and naval vessel propulsion.

During their experiments, the researchers utilized extremely powerful X-rays to mimic the behavior of neutrons interacting with a material inside a nuclear reactor.

They found that adding a buffer layer of silicon dioxide between the material and its substrate, and keeping the material under the X-ray beam for a longer period of time, improves the stability of the sample. This allows for real-time monitoring of material failure processes.

By reconstructing 3D image data on the structure of a material as it fails, researchers could design more resilient materials that can better withstand the stress caused by irradiation inside a nuclear reactor.

“If we can improve materials for a nuclear reactor, it means we can extend the life of that reactor. It also means the materials will take longer to fail, so we can get more use out of a nuclear reactor than we do now. The technique we’ve demonstrated here allows us to push the boundary in understanding how materials fail in real time,” says Ericmoore Jossou, who has shared appointments in the Department of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick Professor, the Department of Electrical Engineering and Computer Science (EECS), and the MIT Schwarzman College of Computing.

Jossou, senior author of a study on this technique, is joined on the paper by lead author David Simonne, an NSE postdoc; Riley Hultquist, a graduate student in NSE; Jiangtao Zhao, of the European Synchrotron; and Andrea Resta, of Synchrotron SOLEIL. The research was published Tuesday in the journal Scripta Materialia.

“Only with this technique can we measure strain with a nanoscale resolution during corrosion processes. Our goal is to bring such novel ideas to the nuclear science community while using synchrotrons both as an X-ray probe and radiation source,” adds Simonne.

Real-time imaging

Studying real-time failure of materials used in advanced nuclear reactors has long been a goal of Jossou’s research group.

Usually, researchers can only learn about such material failures after the fact, by removing the material from its environment and imaging it with a high-resolution instrument.

“We are interested in watching the process as it happens. If we can do that, we can follow the material from beginning to end and see when and how it fails. That helps us understand a material much better,” he says.

They simulate the process by firing an extremely focused X-ray beam at a sample to mimic the environment inside a nuclear reactor. The researchers must use a special type of high-intensity X-ray, which is only found in a handful of experimental facilities worldwide.

For these experiments they studied nickel, a material incorporated into alloys that are commonly used in advanced nuclear reactors. But before they could start the X-ray equipment, they had to prepare a sample.

To do this, the researchers used a process called solid state dewetting, which involves putting a thin film of the material onto a substrate and heating it to an extremely high temperature in a furnace until it transforms into single crystals.

“We thought making the samples was going to be a walk in the park, but it wasn’t,” Jossou says.

As the nickel heated up, it interacted with the silicon substrate and formed a new chemical compound, essentially derailing the entire experiment. After much trial-and-error, the researchers found that adding a thin layer of silicon dioxide between the nickel and substrate prevented this reaction.

But when crystals formed on top of the buffer layer, they were highly strained. This means the individual atoms had moved slightly to new positions, causing distortions in the crystal structure.

Phase retrieval algorithms can typically recover the 3D size and shape of a crystal in real-time, but if there is too much strain in the material, the algorithms will fail.

However, the team was surprised to find that keeping the X-ray beam trained on the sample for a longer period of time caused the strain to slowly relax, thanks to the silicon dioxide buffer layer. After a few extra minutes of X-rays, the sample was stable enough that they could use phase retrieval algorithms to accurately recover the 3D shape and size of the crystal.
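For readers unfamiliar with phase retrieval, the textbook version of the idea alternates between enforcing the measured diffraction magnitudes in Fourier space and a known support in real space. The sketch below is that generic error-reduction scheme on a toy 2D object, not the algorithms or data used in this study, and it glosses over the strain effects that make real Bragg reconstructions hard.

```python
import numpy as np

def error_reduction(measured_magnitude, support, iterations=200, seed=0):
    """Minimal error-reduction phase retrieval (Gerchberg-Saxton/Fienup style)."""
    rng = np.random.default_rng(seed)
    # Start from the measured magnitudes with random phases
    field = measured_magnitude * np.exp(1j * rng.uniform(0, 2 * np.pi, measured_magnitude.shape))
    obj = np.fft.ifftn(field)
    for _ in range(iterations):
        obj = np.where(support, obj, 0)  # real-space constraint: object lives inside the support
        field = np.fft.fftn(obj)
        field = measured_magnitude * np.exp(1j * np.angle(field))  # keep measured magnitudes
        obj = np.fft.ifftn(field)
    return obj

# Toy usage: a small square "crystal" and its noise-free diffraction magnitudes
true_obj = np.zeros((64, 64), dtype=complex)
true_obj[24:40, 24:40] = 1.0
measured = np.abs(np.fft.fftn(true_obj))
support = np.abs(true_obj) > 0
recovered = error_reduction(measured, support)
```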

“No one had been able to do that before. Now that we can make this crystal, we can image electrochemical processes like corrosion in real time, watching the crystal fail in 3D under conditions that are very similar to inside a nuclear reactor. This has far-reaching impacts,” he says.

They experimented with other substrates, such as niobium-doped strontium titanate, and found that only a silicon wafer buffered with silicon dioxide created this unique effect.

An unexpected result

As they fine-tuned the experiment, the researchers discovered something else.

They could also use the X-ray beam to precisely control the amount of strain in the material, which could have implications for the development of microelectronics.

In the microelectronics community, engineers often introduce strain to deform a material’s crystal structure in a way that boosts its electrical or optical properties.

“With our technique, engineers can use X-rays to tune the strain in microelectronics while they are manufacturing them. While this was not our goal with these experiments, it is like getting two results for the price of one,” he adds.

In the future, the researchers want to apply this technique to more complex materials like steel and other metal alloys used in nuclear reactors and aerospace applications. They also want to see how changing the thickness of the silicon dioxide buffer layer impacts their ability to control the strain in a crystal sample.

“This discovery is significant for two reasons. First, it provides fundamental insight into how nanoscale materials respond to radiation — a question of growing importance for energy technologies, microelectronics, and quantum materials. Second, it highlights the critical role of the substrate in strain relaxation, showing that the supporting surface can determine whether particles retain or release strain when exposed to focused X-ray beams,” says Edwin Fohtung, an associate professor at the Rensselaer Polytechnic Institute, who was not involved with this work.

This work was funded, in part, by the MIT Faculty Startup Fund and the U.S. Department of Energy. The sample preparation was carried out, in part, at the MIT.nano facilities.


Professor Emeritus Rainer Weiss, influential physicist who forged new paths to understanding the universe, dies at 92

The longtime MIT professor shared a Nobel Prize for his role in developing the LIGO observatory and detecting gravitational waves.


MIT Professor Emeritus Rainer Weiss ’55, PhD ’62, a renowned experimental physicist and Nobel laureate whose groundbreaking work confirmed a longstanding prediction about the nature of the universe, passed away on Aug. 25. He was 92.

Weiss conceived of the Laser Interferometer Gravitational-Wave Observatory (LIGO) for detecting ripples in space-time known as gravitational waves, and was later a leader of the team that built LIGO and achieved the first-ever detection of gravitational waves. He shared the Nobel Prize in Physics for this work in 2017. Together with international collaborators, he and his colleagues at LIGO would go on to detect many more of these cosmic reverberations, opening up a new way for scientists to view the universe.

During his remarkable career, Weiss also developed a more precise atomic clock and figured out how to measure the spectrum of the cosmic microwave background via a weather balloon. He later co-founded and advanced the NASA Cosmic Background Explorer project, whose measurements helped support the Big Bang theory describing the expansion of the universe.

“Rai leaves an indelible mark on science and a gaping hole in our lives,” says Nergis Mavalvala PhD ’97, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. As a doctoral student with Weiss in the 1990s, Mavalvala worked with him to build an early prototype of a gravitational-wave detector as part of her PhD thesis. “He will be so missed but has also gifted us a singular legacy. Every gravitational wave event we observe will remind us of him, and we will smile. I am indeed heartbroken, but also so grateful for having him in my life, and for the incredible gifts he has given us — of passion for science and discovery, but most of all to always put people first,” she says.

A member of the MIT physics faculty since 1964, Weiss was known as a committed mentor and teacher, as well as a dedicated researcher. 

“Rai’s ingenuity and insight as an experimentalist and a physicist were legendary,” says Deepto Chakrabarty, the William A. M. Burden Professor in Astrophysics and head of the Department of Physics. “His no-nonsense style and gruff manner belied a very close, supportive and collaborative relationship with his students, postdocs, and other mentees. Rai was a thoroughly MIT product.”

“Rai held a singular position in science: He was the creator of two fields — measurements of the cosmic microwave background and of gravitational waves. His students have gone on to lead both fields and carried Rai’s rigor and decency to both. He not only created a huge part of important science, he also populated them with people of the highest caliber and integrity,” says Peter Fisher, the Thomas A. Frank Professor of Physics and former head of the physics department.

Enabling a new era in astrophysics

LIGO is a system of two identical detectors located 1,865 miles apart. By sending finely tuned lasers back and forth through the detectors, scientists can detect perturbations caused by gravitational waves, whose existence was proposed by Albert Einstein. These discoveries illuminate ancient collisions and other events in the early universe, and have confirmed Einstein’s theory of general relativity. Today, the LIGO Scientific Collaboration involves hundreds of scientists at MIT, Caltech, and other universities, and with the Virgo and KAGRA observatories in Italy and Japan makes up the global LVK Collaboration — but five decades ago, the instrument concept was an MIT class exercise conceived by Weiss.

As he told MIT News in 2017, in generating the initial idea, Weiss wondered: “What’s the simplest thing I can think of to show these students that you could detect the influence of a gravitational wave?”

To realize the audacious design, Weiss teamed up in 1976 with physicist Kip Thorne, who, based in part on conversations with Weiss, soon seeded the creation of a gravitational wave experiment group at Caltech. The two formed a collaboration between MIT and Caltech, and in 1979, the late Scottish physicist Ronald Drever, then of the University of Glasgow, joined the effort at Caltech. The three scientists — who became the co-founders of LIGO — worked to refine the dimensions and scientific requirements for an instrument sensitive enough to detect a gravitational wave. Barry Barish later joined the team at Caltech, helping to secure funding and bring the detectors to completion.

After receiving support from the National Science Foundation, LIGO broke ground in the mid-1990s, constructing interferometric detectors in Hanford, Washington, and in Livingston, Louisiana. 

Years later, when he shared the Nobel Prize with Thorne and Barish for his work on LIGO, Weiss noted that hundreds of colleagues had helped to push forward the search for gravitational waves.

“The discovery has been the work of a large number of people, many of whom played crucial roles,” Weiss said at an MIT press conference. “I view receiving this [award] as sort of a symbol of the various other people who have worked on this.”

He continued: “This prize and others that are given to scientists is an affirmation by our society of [the importance of] gaining information about the world around us from reasoned understanding of evidence.”

“While I have always been amazed and guided by Rai’s ingenuity, integrity, and humility, I was most impressed by his breadth of vision and ability to move between worlds,” says Matthew Evans, the MathWorks Professor of Physics. “He could seamlessly shift from the smallest technical detail of an instrument to the global vision for a future observatory. In the last few years, as the idea for a next-generation gravitational-wave observatory grew, Rai would often be at my door, sharing ideas for how to move the project forward on all levels. These discussions ranged from quantum mechanics to global politics, and Rai’s insights and efforts have set the stage for the future.”

A lifelong fascination with hard problems

Weiss was born in 1932 in Berlin. His young family fled Nazi Germany to Prague and then emigrated to New York City, where Weiss grew up with a love for classical music and electronics, earning money by fixing radios.

He enrolled at MIT, then dropped out of school in his junior year, only to return shortly after, taking a job as a technician in the former Building 20. There, Weiss met physicist Jerrold Zacharias, who encouraged him to finish his undergraduate degree, which he earned in 1955, followed by his PhD in 1962.

Weiss spent some time at Princeton University as a postdoc in the legendary group led by Robert Dicke, where he developed experiments to test gravity. He returned to MIT as an assistant professor in 1964, starting a new research group in the Research Laboratory of Electronics dedicated to research in cosmology and gravitation.

Weiss received numerous awards and honors in addition to the Nobel Prize, including the Medaille de l’ADION, the 2006 Gruber Prize in Cosmology, and the 2007 Einstein Prize of the American Physical Society. He was a fellow of the American Association for the Advancement of Science, the American Academy of Arts and Sciences, and the American Physical Society, as well as a member of the National Academy of Sciences. In 2016, Weiss received a Special Breakthrough Prize in Fundamental Physics, the Gruber Prize in Cosmology, the Shaw Prize in Astronomy, and the Kavli Prize in Astrophysics, all shared with Drever and Thorne. He also shared the Princess of Asturias Award for Technical and Scientific Research with Thorne, Barry Barish of Caltech, and the LIGO Scientific Collaboration.

Weiss is survived by his wife, Rebecca; his daughter, Sarah, and her husband, Tony; his son, Benjamin, and his wife, Carla; and a grandson, Sam, and his wife, Constance. Details about a memorial are forthcoming.

This article may be updated.


Simpler models can outperform deep learning at climate prediction

New research shows the natural variability in climate data can cause AI models to struggle at predicting local temperature and rainfall.


Environmental scientists are increasingly using enormous artificial intelligence models to make predictions about changes in weather and climate, but a new study by MIT researchers shows that bigger models are not always better.

The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models.

Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case.

The researchers developed a more robust way of evaluating these techniques, which shows that, while simple models are more accurate when estimating regional surface temperatures, deep-learning approaches can be the best choice for estimating local rainfall.

They used these results to enhance a simulation tool known as a climate emulator, which can rapidly simulate the effect of human activities on the future climate.

The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science contains a proven set of physical laws and approximations, and the challenge becomes how to incorporate those into AI models.

“We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and director of the Center for Sustainability Science and Strategy.

Selin’s co-authors are lead author Björn Lütjens, a former EAPS postdoc who is now a research scientist at IBM Research; senior author Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in EAPS and co-director of the Lorenz Center; and Duncan Watson-Parris, assistant professor at the University of California at San Diego. Selin and Ferrari are also co-principal investigators of the Bringing Computation to the Climate Challenge project, out of which this research emerged. The paper appears today in the Journal of Advances in Modeling Earth Systems.

Comparing emulators

Because the Earth’s climate is so complex, running a state-of-the-art climate model to predict how pollution levels will impact environmental factors like temperature can take weeks on the world’s most powerful supercomputers.

Scientists often create climate emulators, simpler approximations of a state-of-the-art climate model, which are faster and more accessible. A policymaker could use a climate emulator to see how alternative assumptions about greenhouse gas emissions would affect future temperatures, helping them develop regulations.

But an emulator isn’t very useful if it makes inaccurate predictions about the local impacts of climate change. While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches.

The MIT researchers performed such a study. They compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model using a common benchmark dataset for evaluating climate emulators.

Their results showed that LPS outperformed deep-learning models on predicting nearly all parameters they tested, including temperature and precipitation.
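Linear pattern scaling is simple enough to sketch in a few lines of code. The snippet below is a minimal illustration of the general technique, not the study’s actual implementation: it fits one least-squares line per grid cell relating global-mean temperature to the local response, then reuses those fits to emulate a new warming level. The grid size, variable names, and synthetic data are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of linear pattern scaling (LPS); illustrative only.
rng = np.random.default_rng(0)

n_years, n_lat, n_lon = 100, 12, 24                          # toy grid (assumed sizes)
global_mean_T = np.linspace(0.0, 3.0, n_years)               # global warming trajectory (deg C)
true_pattern = rng.normal(1.0, 0.3, size=(n_lat, n_lon))     # local scaling factors
noise = rng.normal(0.0, 0.2, size=(n_years, n_lat, n_lon))   # stand-in for internal variability
local_T = global_mean_T[:, None, None] * true_pattern + noise

# Fit a slope and intercept per grid cell with ordinary least squares.
X = np.column_stack([global_mean_T, np.ones(n_years)])       # (n_years, 2)
Y = local_T.reshape(n_years, -1)                             # (n_years, n_cells)
coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)               # (2, n_cells)
slopes = coeffs[0].reshape(n_lat, n_lon)
intercepts = coeffs[1].reshape(n_lat, n_lon)

# Emulate the local temperature map for a hypothetical 2.5 deg C global-mean warming.
emulated_local_T = slopes * 2.5 + intercepts
print(emulated_local_T.shape)                                # (12, 24): one value per grid cell
```

Because each per-cell fit averages over the noise term, an emulator like this effectively smooths out internal variability — the property that, as described below, interacts with how benchmarks handle natural fluctuations in the data.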

“Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens.

Some initial results seemed to fly in the face of the researchers’ domain knowledge. The powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern.

They found that the high amount of natural variability in climate model runs can cause the deep learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of LPS, which averages out those oscillations.

Constructing a new evaluation

From there, the researchers constructed a new evaluation with more data that address natural climate variability. With this new evaluation, the deep-learning model performed slightly better than LPS for local precipitation, but LPS was still more accurate for temperature predictions.

“It is important to use the modeling tool that is right for the problem, but in order to do that you also have to set up the problem the right way in the first place,” Selin says.

Based on these results, the researchers incorporated LPS into a climate emulation platform to predict local temperature changes in different emission scenarios.

“We are not advocating that LPS should always be the goal. It still has limitations. For instance, LPS doesn’t predict variability or extreme weather events,” Ferrari adds.

Rather, they hope their results emphasize the need to develop better benchmarking techniques, which could provide a fuller picture of which climate emulation technique is best suited for a particular situation.

“With an improved climate emulation benchmark, we could use more complex machine-learning methods to explore problems that are currently very hard to address, like the impacts of aerosols or estimations of extreme precipitation,” Lütjens says.

Ultimately, more accurate benchmarking techniques will help ensure policymakers are making decisions based on the best available information.

The researchers hope others build on their analysis, perhaps by studying additional improvements to climate emulation methods and benchmarks. Such research could explore impact-oriented metrics like drought indicators and wildfire risks, or new variables like regional wind speeds.

This research is funded, in part, by Schmidt Sciences, LLC, and is part of the MIT Climate Grand Challenges team for “Bringing Computation to the Climate Challenge.”


On the joys of being head of house at McCormick Hall

Raul Radovitzky and Flavia Cardarelli reflect on a decade of telling bad dad jokes, learning Taylor Swift songs, and sharing a home with hundreds of students.


While sharing a single cup of coffee, Raul Radovitzky, the Jerome C. Hunsaker Professor in the Department of Aeronautics and Astronautics, and his wife Flavia Cardarelli, senior administrative assistant in the Institute for Data, Systems, and Society, recently discussed the love they have for their “nighttime jobs” living in McCormick Hall as faculty heads of house, and explained why it is so gratifying for them to be a part of this community.

The couple, married for 32 years, first met playing in a sandbox at the age of 3 in Argentina (but didn't start dating until they were in their 20s). Radovitzky has been a part of the MIT ecosystem since 2001, while Cardarelli began working at MIT in 2006. They became heads of house at McCormick Hall, the only all-female residence hall on campus, in 2015, and recently applied to extend their stay.

“Our head-of-house role is always full of surprises. We never know what we’ll encounter, but we love it. Students think we do this just for them, but in truth, it’s very rewarding for us as well. It keeps us on our toes and brings a lot of joy,” says Cardarelli. “We like to think of ourselves as the cool aunt and uncle for the students,” Radovitzky adds.

Heads of house at MIT influence many areas of students’ development by acting as advisors and mentors to their residents. Additionally, they work closely with the residence hall’s student government, as well as staff from the Division of Student Life, to foster their community’s culture.

Vice Chancellor for Student Life Suzy Nelson explains, “Our faculty heads of house have the long view at MIT and care deeply about students’ academic and personal growth. We are fortunate to have such dedicated faculty who serve in this way. The heads of house enhance the student experience in so many ways — whether it is helping a student with a personal problem, hosting Thanksgiving dinner for students who were not able to go home, or encouraging students to get involved in new activities, they are always there for students.”

“Our heads of house help our students fully participate in residential life. They model civil discourse at community dinners, mentor and tutor residents, and encourage residents to try new things. With great expertise and aplomb, they formally and informally help our students become their whole selves,” says Chancellor Melissa Nobles.

“I love teaching, I love conducting research with my group, and I enjoy serving as a head of house. The community aspect is deeply meaningful to me. MIT has become such a central part of our lives. Our kids are both MIT graduates, and we are incredibly proud of them. We do have a life outside of MIT — weekends with friends and family, personal activities — but MIT is a big part of who we are. It’s more than a job; it’s a community. We live on campus, and while it can be intense and demanding, we really love it,” says Radovitzky.

Jessica Quaye ’20, a former resident of McCormick Hall, says, “What sets McCormick apart is the way Raul and Flavia transform the four dorm walls into a home for everyone. You might come to McCormick alone, but you never leave alone. If you ran into them somewhere on campus, you could be sure that they would call you out and wave excitedly. You could invite Raul and Flavia to your concerts and they would show up to support your extracurricular endeavors. They built an incredible family that carries the fabric of MIT with a blend of academic brilliance, a warm open-door policy, and unwavering support for our extracurricular pursuits.”

Soundbytes

Q: What first drew you to the heads of house role?

Radovitzky: I had been aware of the role since I arrived at MIT, and over time, I started to wonder if it might be something we’d consider. When our kids were young, it didn’t seem feasible — we lived in the suburbs, and life there was good. But I always had an innate interest in building stronger connections with the student community.

Later, several colleagues encouraged us to apply. I discussed it with the family. Everyone was excited about it. Our teenagers were thrilled by the idea of living on a college campus. We applied together, submitting a letter as a family explaining why we were so passionate about it. We interviewed at McCormick, Baker, and MacGregor. When we were offered McCormick, I’ll admit — I was nervous. I wasn’t sure I’d be the right fit for an all-female residence.

Cardarelli: We would have been nervous no matter where we ended up, but McCormick felt like home. It suited us in ways we didn’t anticipate. Raul, for instance, discovered he had a real rapport with the students, telling goofy jokes, making karaoke playlists, and learning about Taylor Swift and Nicki Minaj.

Radovitzky: It’s true! I never knew I’d become an expert at picking karaoke playlists. But we found our rhythm here, and it’s been deeply rewarding.

Q: What makes the McCormick community special?

Radovitzky: McCormick has a unique spirit. I can step out of our apartment and be greeted by 10 smiling faces. That energy is contagious. It’s not just about events or programming — it’s about building trust. We’ve built traditions around that, like our “make your own pizza” nights in our apartment, a wonderful McCormick event we inherited from our predecessors. We host four sessions each spring in which students roll out dough, choose toppings, and we chat as we cook and eat together. Everyone remembers the pizza nights — they’re mentioned in every testimonial.

Cardarelli: We’ve been lucky to have amazing graduate resident assistants and area directors every year. They’re essential partners in building community. They play a key role in creating community and supporting the students on their floors. They help with everything — from tutoring to events to walking students to urgent care if needed.

Radovitzky: In the fall, we take our residents to Crane Beach and host a welcome brunch. Karaoke in our apartment is a big hit too, and a unique way to make them comfortable coming to our apartment from day one. We do it three times a year — during orientation, and again each semester.

Cardarelli: We also host monthly barbecues open to all dorms and run McFast, our first-year tutoring program. Raul started by tutoring physics and math, four hours a week. Now, upperclass students lead most of the sessions. It’s great for both academic support and social connection.

Radovitzky: We also have an Independent Activities Period pasta night tradition. We cook for around 100 students, using four sauces that Flavia makes from scratch — bolognese, creamy mushroom, marinara, and pesto. Students love it.

Q: What’s unique about working in an all-female residence hall?

Cardarelli: I’ve helped students hem dresses, bake, and even apply makeup. It’s like having hundreds of daughters.

Radovitzky: The students here are incredibly mature and engaged. They show real interest in us as people. Many of the activities and connections we’ve built wouldn’t be possible in a different setting. Every year during “de-stress night,” I get my nails painted every color and have a face mask on. During “Are You Smarter Than an MIT Professor,” they dunk me in a water tank.


Engineering fantasy into reality

PhD student Erik Ballesteros is building “Doc Ock” arms for future astronauts.


Growing up in the suburban town of Spring, Texas, just outside of Houston, Erik Ballesteros couldn’t help but be drawn in by the possibilities for humans in space.

It was the early 2000s, and NASA’s space shuttle program was the main transport for astronauts to the International Space Station (ISS). Ballesteros’ hometown was less than an hour from Johnson Space Center (JSC), where NASA’s mission control center and astronaut training facility are based. And as often as they could, he and his family would drive to JSC to check out the center’s public exhibits and presentations on human space exploration.

For Ballesteros, the highlight of these visits was always the tram tour, which brings visitors to JSC’s Astronaut Training Facility. There, the public can watch astronauts test out spaceflight prototypes and practice various operations in preparation for living and working on the International Space Station.

“It was a really inspiring place to be, and sometimes we would meet astronauts when they were doing signings,” he recalls. “I’d always see the gates where the astronauts would go back into the training facility, and I would think: One day I’ll be on the other side of that gate.”

Today, Ballesteros is a PhD student in mechanical engineering at MIT, and has already made good on his childhood goal. Before coming to MIT, he interned on multiple projects at JSC, working in the training facility to help test new spacesuit materials, portable life support systems, and a propulsion system for a prototype Mars rocket. He also helped train astronauts to operate the ISS’ emergency response systems.

Those early experiences steered him to MIT, where he hopes to make a more direct impact on human spaceflight. He and his advisor, Harry Asada, are building a system that will quite literally provide helping hands to future astronauts. The system, dubbed SuperLimbs, consists of a pair of wearable robotic arms that extend out from a backpack, similar to the fictional Inspector Gadget, or Doctor Octopus (“Doc Ock,” to comic book fans). Ballesteros and Asada are designing the robotic arms to be strong enough to lift an astronaut back up if they fall. The arms could also crab-walk around a spacecraft’s exterior as an astronaut inspects or makes repairs.

Ballesteros is collaborating with engineers at the NASA Jet Propulsion Laboratory to refine the design, which he plans to introduce to astronauts at JSC in the next year or two, for practical testing and user feedback. He says his time at MIT has helped him make connections across academia and in industry that have fueled his life and work.

“Success isn’t built by the actions of one, but rather it’s built on the shoulders of many,” Ballesteros says. “Connections — ones that you not just have, but maintain — are so vital to being able to open new doors and keep great ones open.”

Getting a jumpstart

Ballesteros didn’t always seek out those connections. As a kid, he counted down the minutes until the end of school, when he could go home to play video games and watch movies, “Star Wars” being a favorite. He also loved to create and had a talent for cosplay, tailoring intricate, life-like costumes inspired by cartoon and movie characters.

In high school, he took an introductory class in engineering that challenged students to build robots from kits that they would then pit against each other, BattleBots-style. Ballesteros built a robotic ball that moved by shifting an internal weight, similar to Star Wars’ fictional, sphere-shaped BB-8.

“It was a good introduction, and I remember thinking, this engineering thing could be fun,” he says.

After graduating high school, Ballesteros attended the University of Texas at Austin, where he pursued a bachelor’s degree in aerospace engineering. What would typically be a four-year degree stretched into an eight-year period during which Ballesteros combined college with multiple work experiences, taking on internships at NASA and elsewhere. 

In 2013, he interned at Lockheed Martin, where he contributed to various aspects of jet engine development. That experience unlocked a number of other aerospace opportunities. After a stint at NASA’s Kennedy Space Center, he went on to Johnson Space Center, where, as part of a co-op program called Pathways, he returned every spring or summer over the next five years, to intern in various departments across the center.

While the time at JSC gave him a huge amount of practical engineering experience, Ballesteros still wasn’t sure if it was the right fit. Along with his childhood fascination with astronauts and space, he had always loved cinema and the special effects that bring it to life. In 2018, he took a year off from the NASA Pathways program to intern at Disney, where he spent the spring semester working as a safety engineer, performing safety checks on Disney rides and attractions.

During this time, he got to know a few people in Imagineering — the research and development group that creates, designs, and builds rides, theme parks, and attractions. That summer, the group took him on as an intern, and he worked on the animatronics for upcoming rides, which involved translating certain scenes in a Disney movie into practical, safe, and functional scenes in an attraction.

“In animation, a lot of things they do are fantastical, and it was our job to find a way to make them real,” says Ballesteros, who loved every moment of the experience and hoped to be hired as an Imagineer after the internship came to an end. But he had one year left in his undergraduate degree and had to move on.

After graduating from UT Austin in December 2019, Ballesteros accepted a position at NASA’s Jet Propulsion Laboratory in Pasadena, California. He started at JPL in February of 2020, working on some last adjustments to the Mars Perseverance rover. After a few months during which JPL shifted to remote work during the Covid pandemic, Ballesteros was assigned to a project to develop a self-diagnosing spacecraft monitoring system. While working with that team, he met an engineer who was a former lecturer at MIT. As a practical suggestion, she nudged Ballesteros to consider pursuing a master’s degree, to add more value to his CV.

“She opened up the idea of going to grad school, which I hadn’t ever considered,” he says.

Full circle

In 2021, Ballesteros arrived at MIT to begin a master’s program in mechanical engineering. In interviewing with potential advisors, he immediately hit it off with Harry Asada, the Ford Professor of Engineering and director of the d'Arbeloff Laboratory for Information Systems and Technology. Years ago, Asada had pitched JPL an idea for wearable robotic arms to aid astronauts, which they quickly turned down. But Asada held onto the idea, and proposed that Ballesteros take it on as a feasibility study for his master’s thesis.

The project would require bringing a seemingly sci-fi idea into practical, functional form, for use by astronauts in future space missions. For Ballesteros, it was the perfect challenge. SuperLimbs became the focus of his master’s degree, which he earned in 2023. His initial plan was to return to industry, degree in hand. But he chose to stay at MIT to pursue a PhD, so that he could continue his work with SuperLimbs in an environment where he felt free to explore and try new things.

“MIT is like nerd Hogwarts,” he says. “One of the dreams I had as a kid was about the first day of school, and being able to build and be creative, and it was the happiest day of my life. And at MIT, I felt like that dream became reality.”

Ballesteros and Asada are now further developing SuperLimbs. The team recently re-pitched the idea to engineers at JPL, who reconsidered, and have since struck up a partnership to help test and refine the robot. In the next year or two, Ballesteros hopes to bring a fully functional, wearable design to Johnson Space Center, where astronauts can test it out in space-simulated settings.

In addition to his formal graduate work, Ballesteros has found a way to have a bit of Imagineer-like fun. He is a member of the MIT Robotics Team, which designs, builds, and runs robots in various competitions and challenges. Within this club, Ballesteros has formed a sub-club of sorts, called the Droid Builders, which aims to build animatronic droids from popular movies and franchises.

“I thought I could use what I learned from Imagineering and teach undergrads how to build robots from the ground up,” he says. “Now we’re building a full-scale WALL-E that could be fully autonomous. It’s cool to see everything come full circle.”


New technologies tackle brain health assessment for the military

Tools build on years of research at Lincoln Laboratory to develop a rapid brain health screening capability and may also be applicable to civilian settings such as sporting events and medical offices.


Cognitive readiness denotes a person's ability to respond and adapt to the changes around them. This includes functions like keeping balance after tripping or making the right decision in a challenging situation based on knowledge and past experiences. For military service members, cognitive readiness is crucial for their health and safety, as well as mission success. Injury to the brain is a major contributor to cognitive impairment, and between 2000 and 2024, more than 500,000 military service members were diagnosed with traumatic brain injury (TBI) — caused by anything from a fall during training to blast exposure on the battlefield. While impairments from factors like sleep deprivation can be treated through rest and recovery, those caused by injury may require more intense and prolonged medical attention.

"Current cognitive readiness tests administered to service members lack the sensitivity to detect subtle shifts in cognitive performance that may occur in individuals exposed to operational hazards," says Christopher Smalt, a researcher in the laboratory's Human Health and Performance Systems Group. "Unfortunately, the cumulative effects of these exposures are often not well-documented during military service or after transition to Veterans Affairs, making it challenging to provide effective support."

Smalt is part of a team at the laboratory developing a suite of portable diagnostic tests that provide near-real-time screening for brain injury and cognitive health. One of these tools, called READY, is a smartphone or tablet app that helps identify a potential change in cognitive performance in less than 90 seconds. Another tool, called MINDSCAPE — which is being developed in collaboration with Richard Fletcher, a visiting scientist in the Rapid Prototyping Group who leads the Mobile Technology Lab at the MIT Auto-ID Laboratory, and his students — uses virtual reality (VR) technology for a more in-depth analysis to pinpoint specific conditions such as TBI, post-traumatic stress disorder, or sleep deprivation. Using these tests, medical personnel on the battlefield can make quick and effective decisions for treatment triage.

Both READY and MINDSCAPE respond to a series of congressional mandates, military program requirements, and mission-driven health needs to improve brain health screening capabilities for service members.

Cognitive readiness biomarkers

The READY and MINDSCAPE platforms incorporate more than a decade of laboratory research on finding the right indicators of cognitive readiness to build into rapid testing applications. Thomas Quatieri oversaw this work and identified balance, eye movement, and speech as three reliable biomarkers. He is leading the effort at Lincoln Laboratory to develop READY.

"READY stands for Rapid Evaluation of Attention for DutY, and is built on the premise that attention is the key to being 'ready' for a mission," he says. "In one view, we can think of attention as the mental state that allows you to focus on a task."

For someone to be attentive, their brain must continuously anticipate and process incoming sensory information and then instruct the body to respond appropriately. For example, if a friend yells "catch" and then throws a ball in your direction, in order to catch that ball, your brain must process the incoming auditory and visual data, predict in advance what may happen in the next few moments, and then direct your body to respond with an action that synchronizes those sensory data. The result? You realize from hearing the word "catch" and seeing the moving ball that your friend is throwing the ball to you, and you reach out a hand to catch it just in time.

"An unhealthy or fatigued brain — caused by TBI or sleep deprivation, for example — may have challenges within a neurosensory feed-forward [prediction] or feedback [error] system, thus hampering the person's ability to attend," Quatieri says.

READY's three tests measure a person’s ability to track a moving dot with their eyes, to keep their balance, and to hold a vowel at a fixed pitch. The app then uses the data to calculate a variability or "wobble" indicator, which represents changes from the test taker's baseline or from expected results based on others with similar demographics, or the general population. The results are displayed to the user as an indication of the patient's level of attention.
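The article does not spell out how the wobble indicator is computed, but conceptually it is a deviation-from-baseline score. The following Python sketch is a hypothetical illustration of that idea only; the function name, the baseline values, and the units are assumptions, not the READY app's actual interface or algorithm.

```python
import numpy as np

def wobble_score(session: np.ndarray, baseline_mean: float, baseline_std: float) -> float:
    """Hypothetical deviation-from-baseline score for one READY-style test.

    `session` holds per-sample measurements from a single test, e.g. eye-tracking
    error in degrees, body sway in centimeters, or pitch deviation in hertz.
    The score expresses the session's variability in baseline standard deviations.
    """
    session_variability = np.std(session)
    return (session_variability - baseline_mean) / baseline_std

# Toy example: a vowel pitch-hold test scored against an assumed personal baseline.
rng = np.random.default_rng(1)
pitch_deviation_hz = rng.normal(0.0, 2.5, size=300)   # 300 samples of pitch error
score = wobble_score(pitch_deviation_hz, baseline_mean=1.8, baseline_std=0.4)
print(f"wobble indicator: {score:+.2f} baseline standard deviations")
# A score well above zero would flag reduced attention relative to baseline and,
# in the workflow described here, could prompt a follow-up MINDSCAPE assessment.
```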

If the READY screen shows an impairment, the administrator can then direct the subject to follow up with MINDSCAPE, which stands for Mobile Interface for Neurological Diagnostic Situational Cognitive Assessment and Psychological Evaluation. MINDSCAPE uses VR technology to administer additional, in-depth tests to measure cognitive functions such as reaction time and working memory. These standard neurocognitive tests are recorded with multimodal physiological sensors, such as electroencephalography (EEG), photoplethysmography, and pupillometry, to better pinpoint diagnosis.

Holistic and adaptable

A key advantage of READY and MINDSCAPE is their ability to leverage existing technologies, allowing for rapid deployment in the field. By utilizing sensors and capabilities already integrated into smartphones, tablets, and VR devices, these assessment tools can be easily adapted for use in operational settings at a significantly reduced cost.

"We can immediately apply our advanced algorithms to the data collected from these devices, without the need for costly and time-consuming hardware development," Smalt says. "By harnessing the capabilities of commercially available technologies, we can quickly provide valuable insights and improve upon traditional assessment methods."

Bringing new capabilities and AI for brain-health sensing into operational environments is a theme across several projects at the laboratory. Another example is EYEBOOM (Electrooculography and Balance Blast Overpressure Monitoring System), a wearable technology developed for the U.S. Special Forces to monitor blast exposure. EYEBOOM continuously monitors a wearer's eye and body movements as they experience blast energy, and warns of potential harm. For this program, the laboratory developed an algorithm that could identify a potential change in physiology resulting from blast exposure during operations, rather than waiting for a check-in.

All three technologies are in development to be versatile, so they can be adapted for other relevant uses. For example, a workflow could pair EYEBOOM's monitoring capabilities with the READY and MINDSCAPE tests: EYEBOOM would continuously monitor for exposure risk and then prompt the wearer to seek additional assessment.

"A lot of times, research focuses on one specific modality, whereas what we do at the laboratory is search for a holistic solution that can be applied for many different purposes," Smalt says.

MINDSCAPE is undergoing testing at the Walter Reed National Military Medical Center this year. READY will be tested with the U.S. Army Research Institute of Environmental Medicine (USARIEM) in 2026 in the context of sleep deprivation. Smalt and Quatieri also see the technologies finding use in civilian settings — on sporting event sidelines, in doctors' offices, or wherever else there is a need to assess brain readiness.

MINDSCAPE is being developed with clinical validation and support from Stefanie Kuchinsky at the Walter Reed National Military Medical Center. Quatieri and his team are developing the READY tests in collaboration with Jun Maruta and Jam Ghajar from the Brain Trauma Foundation (BTF), and Kristin Heaton from USARIEM. The tests are supported by concurrent evidence-based guidelines led by the BTF and the Military TBI Initiative at the Uniformed Services University.


Can large language models figure out the real world?

New test could help determine if AI systems that make accurate predictions in one area can understand it well enough to apply that ability to a different area.


Back in the 17th century, German astronomer Johannes Kepler figured out the laws of motion that made it possible to accurately predict where our solar system’s planets would appear in the sky as they orbit the sun. But it wasn’t until decades later, when Isaac Newton formulated the universal laws of gravitation, that the underlying principles were understood. Although they were inspired by Kepler’s laws, they went much further, and made it possible to apply the same formulas to everything from the trajectory of a cannon ball to the way the moon’s pull controls the tides on Earth — or how to launch a satellite from Earth to the surface of the moon or planets.

Today’s sophisticated artificial intelligence systems have gotten very good at making the kind of specific predictions that resemble Kepler’s orbit predictions. But do they know why these predictions work, with the kind of deep understanding that comes from basic principles like Newton’s laws? As the world grows ever-more dependent on these kinds of AI systems, researchers are struggling to try to measure just how they do what they do, and how deep their understanding of the real world actually is.

Now, researchers in MIT’s Laboratory for Information and Decision Systems (LIDS) and at Harvard University have devised a new approach to assessing how deeply these predictive systems understand their subject matter, and whether they can apply knowledge from one domain to a slightly different one. And by and large the answer at this point, in the examples they studied, is — not so much.

The findings were presented at the International Conference on Machine Learning, in Vancouver, British Columbia, last month by Harvard postdoc Keyon Vafa, MIT graduate student in electrical engineering and computer science and LIDS affiliate Peter G. Chang, MIT assistant professor and LIDS principal investigator Ashesh Rambachan, and MIT professor, LIDS principal investigator, and senior author Sendhil Mullainathan.

“Humans all the time have been able to make this transition from good predictions to world models,” says Vafa, the study’s lead author. So the question their team was addressing was, “have foundation models — has AI — been able to make that leap from predictions to world models? And we’re not asking are they capable, or can they, or will they. It’s just, have they done it so far?” he says.

“We know how to test whether an algorithm predicts well. But what we need is a way to test for whether it has understood well,” says Mullainathan, the Peter de Florez Professor with dual appointments in the MIT departments of Economics and Electrical Engineering and Computer Science and the senior author on the study. “Even defining what understanding means was a challenge.” 

In the Kepler versus Newton analogy, Vafa says, “they both had models that worked really well on one task, and that worked essentially the same way on that task. What Newton offered was ideas that were able to generalize to new tasks.” That capability, when applied to the predictions made by various AI systems, would entail having it develop a world model so it can “transcend the task that you’re working on and be able to generalize to new kinds of problems and paradigms.”

Another analogy that helps to illustrate the point is the difference between centuries of accumulated knowledge of how to selectively breed crops and animals, versus Gregor Mendel’s insight into the underlying laws of genetic inheritance.

“There is a lot of excitement in the field about using foundation models to not just perform tasks, but to learn something about the world,” for example in the natural sciences, he says. “It would need to adapt, have a world model to adapt to any possible task.”

Are AI systems anywhere near the ability to reach such generalizations? To test the question, the team looked at different examples of predictive AI systems, at different levels of complexity. On the very simplest of examples, the systems succeeded in creating a realistic model of the simulated system, but as the examples got more complex that ability faded fast.

The team developed a new metric, a way of measuring quantitatively how well a system approximates real-world conditions. They call the measurement inductive bias — that is, a tendency or bias toward responses that reflect reality, based on inferences developed from looking at vast amounts of data on specific cases.

The simplest level of examples they looked at was known as a lattice model. In a one-dimensional lattice, something can move only along a line. Vafa compares it to a frog jumping between lily pads in a row. As the frog jumps or sits, it calls out what it’s doing — right, left, or stay. If it reaches the last lily pad in the row, it can only stay or go back. If someone, or an AI system, can just hear the calls, without knowing anything about the number of lily pads, can it figure out the configuration? The answer is yes: Predictive models do well at reconstructing the “world” in such a simple case. But even with lattices, as you increase the number of dimensions, the systems no longer can make that leap.
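The lily-pad analogy is easy to turn into a toy simulation. The Python sketch below is not the authors' benchmark code; it simply generates the frog's call sequence from a hidden one-dimensional walk and shows that, in principle, a listener can recover the size of the "world" from the calls alone — the very thing the researchers test whether predictive models actually do.

```python
import random

def frog_calls(n_pads: int, n_steps: int, seed: int = 0) -> list[str]:
    """Toy 1-D lattice world: a frog on `n_pads` lily pads calls out each move."""
    rng = random.Random(seed)
    pos, calls = 0, []
    for _ in range(n_steps):
        move = rng.choice(["left", "right", "stay"])
        if move == "left" and pos == 0:
            move = "stay"                    # can't jump off the left end
        if move == "right" and pos == n_pads - 1:
            move = "stay"                    # can't jump off the right end
        pos += {"left": -1, "right": 1, "stay": 0}[move]
        calls.append(move)
    return calls

# A listener who only hears the calls can still reconstruct the lattice: replaying
# the moves gives the frog's position over time, and the spread of those positions
# reveals how many pads the row contains.
calls = frog_calls(n_pads=5, n_steps=10_000)
pos, positions = 0, [0]
for c in calls:
    pos += {"left": -1, "right": 1, "stay": 0}[c]
    positions.append(pos)
print("inferred number of pads:", max(positions) - min(positions) + 1)  # expect 5
```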

“For example, in a two-state or three-state lattice, we showed that the model does have a pretty good inductive bias toward the actual state,” says Chang. “But as we increase the number of states, then it starts to have a divergence from real-world models.”

A more complex problem is a system that can play the board game Othello, which involves players alternately placing black or white disks on a grid. The AI models can accurately predict what moves are allowable at a given point, but it turns out they do badly at inferring what the overall arrangement of pieces on the board is, including ones that are currently blocked from play.

The team then looked at five different categories of predictive models actually in use, and again, the more complex the systems involved, the more poorly the predictive models performed at matching the true underlying world model.

With this new metric of inductive bias, “our hope is to provide a kind of test bed where you can evaluate different models, different training approaches, on problems where we know what the true world model is,” Vafa says. If it performs well on these cases where we already know the underlying reality, then we can have greater faith that its predictions may be useful even in cases “where we don’t really know what the truth is,” he says.

People are already trying to use these kinds of predictive AI systems to aid in scientific discovery, including such things as properties of chemical compounds that have never actually been created, or of potential pharmaceutical compounds, or for predicting the folding behavior and properties of unknown protein molecules. “For the more realistic problems,” Vafa says, “even for something like basic mechanics, we found that there seems to be a long way to go.”

Chang says, “There’s been a lot of hype around foundation models, where people are trying to build domain-specific foundation models — biology-based foundation models, physics-based foundation models, robotics foundation models, foundation models for other types of domains where people have been collecting a ton of data” and training these models to make predictions, “and then hoping that it acquires some knowledge of the domain itself, to be used for other downstream tasks.”

This work shows there’s a long way to go, but it also helps to show a path forward. “Our paper suggests that we can apply our metrics to evaluate how much the representation is learning, so that we can come up with better ways of training foundation models, or at least evaluate the models that we’re training currently,” Chang says. “As an engineering field, once we have a metric for something, people are really, really good at optimizing that metric.”


At convocation, President Kornbluth greets the Class of 2029

“We believe in all of you,” MIT’s president said at the welcoming ceremony for new undergraduates.


In welcoming the undergraduate Class of 2029 to campus in Cambridge, Massachusetts, MIT President Sally Kornbluth began the Institute’s convocation on Sunday with a greeting that underscored MIT’s confidence in its new students.

“We believe in all of you, in the learning, making, discovering, and inventing that you all have come here to do,” Kornbluth said. “And in your boundless potential as future leaders who will help solve real problems that people face in their daily lives.”

She added: “If you’re out there feeling really lucky to be joining this incredible community, I want you to know that we feel even more lucky. We’re delighted and grateful that you chose to bring your talent, your energy, your curiosity, creativity, and drive here to MIT. And we’re thrilled to be starting this new year with all of you.”

The event, officially called the President’s Convocation for First-years and Families, was held at the Johnson Ice Rink on campus.

While recognizing that academic life can be “intense” at MIT, Kornbluth highlighted the many opportunities available to students outside the classroom, too. A biologist and cancer researcher herself, Kornbluth observed that students can participate in the Undergraduate Research Opportunities Program (UROP), which she called “an unmissable opportunity to work side by side with MIT faculty at the front lines of research.” She also noted that MIT offers abundant opportunities for entrepreneurship, as well as 450 official student organizations.

“It’s okay to be a beginner,” Kornbluth said. “Join a group you wouldn’t have had time for in high school. Explore a new skill. Volunteer in the neighborhoods around campus.”

And if the transition to college feels daunting at any point, she added, MIT provides considerable resources to students for well-being and academic help.

“Sometimes the only way to succeed in facing a big challenge or solving a tough problem is to admit there’s no way you can do it all yourself,” Kornbluth observed. “You’re surrounded by a community of caring people. So please don’t be shy about asking for guidance and help.”

The large audience heard additional remarks from two faculty members who themselves have MIT degrees, reflecting on student life at the Institute.

As a student, “The most important things I had were a willingness to take risks and put hard work into the things I cared about,” said Ankur Moitra SM ’09, PhD ’11, the Norbert Wiener Professor of Mathematics.

He emphasized to students the importance of staying grounded and being true to themselves, especially in the face of, say, social media pressures.

“These are the things that make it harder to find your own way and what you really care about,” Moitra said. “Because the rest of the world’s opinion is right there staring you in the face, and it’s impossible to avoid it. And how will you discover what’s important to you, what’s worth pouring yourself into?”

Moitra also advised students to be wary of the tech tools “that want to do the thinking for you, but take away your agency” in the process. He added: “I worry about this because it’s going to become too easy to rely on these tools, and there are going to be many times you’re going to be tempted, especially late at night, with looming p-set deadlines. As educators, we don’t always have fixes for these kinds of things, and all we can do is open the door and hope you walk through it.”

Beyond that, he suggested, “Periodically remind yourself about what’s been important to you all along, what brought you here. For your next four years, you’re going to be surrounded by creative, clever, passionate people every day, who are going to challenge you. Rise to that challenge.”

Christopher Palmer PhD ’14, an associate professor of finance in the MIT Sloan School of Management, began his remarks by revealing that his MIT undergraduate application was not accepted — although he later received his doctorate at the Institute and is now a tenured professor at MIT.

“I played the long game,” he quipped, drawing laughs.

Indeed, Palmer’s remarks focused on cultivating the resilience, focus, and concentration needed to flourish in the long run.

While being at MIT is “thrilling,” Palmer advised students to “build enough slack into your system to handle both the stress and take advantage of the opportunities” on campus. Much like a bank conducts a “stress test” to see if it can withstand changes, Palmer suggested, we can try the same with our workloads: “If you build a schedule that passes the stress test, that means time for curiosity and meaningful creativity.”

Students should also avoid the “false equivalency that your worth is determined by your achievements,” he added. “You have inherent, immutable, intrinsic, eternal value. Be discerning with your commitments. Future you will be so grateful that you have built in the capacity to sleep, to catch up, to say ‘Yes’ to cool invitations, and to attend to your mental health.”

Additionally, Palmer recommended that students pursue “deep work,” involving “the hard thinking where progress actually happens” — a concept, he noted, that has been elevated by computer scientist Cal Newport SM ’06, PhD ’09. As research shows, Palmer explained, “We can’t actually multitask. What we’re really doing is switching tasks at high frequency and incurring a small cost every single time we switch our focus.”

It might help students, he added, to try some structural changes: Put the phone away, turn off alerts, pause notifications, and cultivate sleep. A healthy blend of academic work, activities, and community fun can emerge.

Concluding her own remarks, Kornbluth also emphasized that attending MIT means being part of a community that is respectful of varying viewpoints and all people, and sustains an ethos of fair-minded understanding.

“I know you have extremely high expectations for yourselves,” Kornbluth said, adding: “We have high expectations for you, too, in all kinds of ways. But I want to emphasize one that’s more important than all the others — and that’s an expectation for how we treat each other. At MIT, the work we do is so important, and so hard, that it’s essential we treat each other with empathy, understanding and compassion. That we take care to express our own ideas with clarity and respect, and make room for sharply different points of view. And above all, that we keep engaging in conversation, even when it’s difficult, frustrating or painful.”


Transforming boating, with solar power

Solar electric vehicle pioneer James Worden ’89 brought his prototype solar electric boat to MIT to talk shop with students and share his vision for solar-powered boats.


The MIT Sailing Pavilion hosted an altogether different marine vessel recently: a prototype of a solar electric boat developed by James Worden ’89, the founder of the MIT Solar Electric Vehicle Team (SEVT). Worden visited the pavilion on a sizzling, sunny day in late July to offer students from the SEVT, the MIT Edgerton Center, MIT Sea Grant, and the broader community an inside look at the Anita, named for his late wife.

Worden’s fascination with solar power began at age 10, when he picked up a solar chip at a “hippy-like” conference in his hometown of Arlington, Massachusetts. “My eyes just lit up,” he says. He built his first solar electric vehicle in high school, fashioned out of cardboard and wood (taking first place at the 1984 Massachusetts Science Fair), and continued his journey at MIT, founding SEVT in 1986. It was through SEVT that he met his wife and lifelong business partner, Anita Rajan Worden ’90. Together, they founded two companies in the solar electric and hybrid vehicles space, and in 2022 launched a solar electric boat company.

On the Charles River, Worden took visitors for short rides on Anita, including a group of current SEVT students who peppered him with questions. The 20-foot pontoon boat, just 12 feet wide and 7 feet tall, is made of carbon fiber composites, single crystalline solar photovoltaic cells, and lithium iron phosphate battery cells. Ultimately, Worden envisions the prototype could have applications as mini-ferry boats and water taxis.

With warmth and humor, he drew parallels between the boat’s components and mechanics and those of the solar cars the students are building. “It’s fun! If you think about all the stuff you guys are doing, it’s all the same stuff,” he told them, “optimizing all the different systems and making them work.” He also explained the design considerations unique to boating applications, like refining the hull shape for efficiency and maneuverability in variable water and wind conditions, and the critical importance of protecting wiring and controls from open water and condensate.

“Seeing Anita in all its glory was super cool,” says Nicole Lin, vice captain of SEVT. “When I first saw it, I could immediately map the different parts of the solar car to its marine counterparts, which was astonishing to see how far I’ve come as an engineer with SEVT. James also explained the boat using solar car terms, as he drew on his experience with solar cars for his solar boats. It blew my mind to see the engineering we learned with SEVT in action.”

Over the years, the Wordens have been avid supporters of SEVT and the Edgerton Center, so the visit was, in part, a way to pay it forward to MIT. “There’s a lot of connections,” he says. He’s still awed by the fact that Harold “Doc” Edgerton, upon learning about his interest in building solar cars, carved out a lab space for him to use in Building 20 — as a first-year student. And a few years ago, as Worden became interested in marine vessels, he tapped Sea Grant Education Administrator Drew Bennett for a 90-minute whiteboard lecture, “MIT fire-hose style,” on hydrodynamics. “It was awesome!” he says.


Imaging tech promises deepest looks yet into living brain tissue at single-cell resolution

By combining several cutting-edge imaging technologies, a new microscope system could enable unprecedentedly deep and precise visualization of metabolic and neuronal activity, potentially even in humans.


For both research and medical purposes, researchers have spent decades pushing the limits of microscopy to produce ever deeper and sharper images of brain activity, not only in the cortex but also in regions underneath, such as the hippocampus. In a new study, a team of MIT scientists and engineers demonstrates a new microscope system capable of peering exceptionally deep into brain tissues to detect the molecular activity of individual cells by using sound.

“The major advance here is to enable us to image deeper at single-cell resolution,” says neuroscientist Mriganka Sur, a corresponding author along with mechanical engineering professor Peter So and principal research scientist Brian Anthony. Sur is the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT.

In the journal Light: Science and Applications, the team demonstrates that they could detect NAD(P)H, a molecule tightly associated with cell metabolism in general and electrical activity in neurons in particular, all the way through samples such as a 1.1-millimeter “cerebral organoid,” a 3D mini brain-like tissue generated from human stem cells, and a 0.7-millimeter-thick slice of mouse brain tissue.

In fact, says co-lead author and mechanical engineering postdoc W. David Lee, who conceived the microscope’s innovative design, the system could have peered far deeper, but the test samples weren’t big enough to demonstrate that.

“That’s when we hit the glass on the other side,” he says. “I think we’re pretty confident about going deeper.”

Still, a depth of 1.1 millimeters is more than five times deeper than other microscope technologies can resolve NAD(P)H within dense brain tissue. The new system achieved the depth and sharpness by combining several advanced technologies to precisely and efficiently excite the molecule and then to detect the resulting energy, all without having to add any external labels, either via added chemicals or genetically engineered fluorescence.

Rather than focusing the required NAD(P)H excitation energy on a neuron with near ultraviolet light at its normal peak absorption, the scope accomplishes the excitation by focusing an intense, extremely short burst of light (a quadrillionth of a second long) at three times the normal absorption wavelength. Such “three-photon” excitation penetrates deep into tissue with less scattering by brain tissue because of the longer wavelength of the light (“like fog lamps,” Sur says). Meanwhile, although the excitation produces a weak fluorescent signal of light from NAD(P)H, most of the absorbed energy produces a localized (about 10 microns) thermal expansion within the cell, which produces sound waves that travel relatively easily through tissue compared to the fluorescence emission. A sensitive ultrasound microphone in the microscope detects those waves and, with enough sound data, software turns them into high-resolution images (much like a sonogram does). Imaging created in this way is “three-photon photoacoustic imaging.”

“We merged all these techniques — three-photon, label-free, photoacoustic detection,” says co-lead author Tatsuya Osaki, a research scientist in the Picower Institute in Sur’s lab. “We integrated all these cutting-edge techniques into one process to establish this ‘Multiphoton-In and Acoustic-Out’ platform.”

Lee and Osaki combined with research scientist Xiang Zhang and postdoc Rebecca Zubajlo to lead the study, in which the team demonstrated reliable detection of the sound signal through the samples. So far, the team has produced visual images from the sound at various depths as they refine their signal processing.

In the study, the team also shows simultaneous “third-harmonic generation” imaging, which comes from the three-photon stimulation and finely renders cellular structures, alongside their photoacoustic imaging, which detects NAD(P)H. They also note that their photoacoustic method could detect other molecules, such as the genetically encoded calcium indicator GCaMP, which neuroscientists use to report neural electrical activity.

With the concept of label-free, multiphoton, photoacoustic microscopy (LF-MP-PAM) established in the paper, the team is now looking ahead to neuroscience and clinical applications.

For instance, through the company Precision Healing, Inc., which he founded and sold, Lee has already established that NAD(P)H imaging can inform wound care. In the brain, levels of the molecule are known to vary in conditions such as Alzheimer’s disease, Rett syndrome, and seizures, making it a potentially valuable biomarker. Because the new system is label-free (i.e., no added chemicals or altered genes), it could be used in humans, for instance, during brain surgeries.

The next step for the team is to demonstrate it in a living animal, rather than just in in vitro and ex vivo tissues. The technical challenge there is that the microphone can no longer be on the opposite side of the sample from the light source (as it was in the current study). It has to be on top, just like the light source.

Lee says he expects that full imaging at depths of 2 millimeters in live brains is entirely feasible, given the results in the new study.

“In principle, it should work,” he says.

Mercedes Balcells and Elazer Edelman are also authors of the paper. Funding for the research came from sources including the National Institutes of Health, the Simons Center for the Social Brain, the lab of Peter So, The Picower Institute for Learning and Memory, and the Freedom Together Foundation.


Marcus Stergio named ombudsperson

Offering confidential, impartial support, the Ombuds Office helps faculty, students, and staff resolve issues affecting their work and studies at MIT.


Marcus Stergio will join the MIT Ombuds Office on Aug. 25, bringing over a decade of experience as a mediator and conflict-management specialist. Previously an ombuds at the U.S. Department of Labor, Stergio will be part of MIT’s ombuds team, working alongside Judi Segall.

The MIT Ombuds Office provides a confidential, independent resource for all members of the MIT community to constructively manage concerns and conflicts related to their experiences at MIT.

Established in 1980, the office played a key role in the early development of the profession, helping to develop and establish standards of practice for organizational ombuds offices. The ombudspersons help MIT community members analyze concerns, clarify policies and procedures, and identify options to constructively manage conflicts.

“There’s this aura and legend around MIT’s Ombuds Office that is really exciting,” Stergio says.

Among other types of conflict resolution, the work of an ombuds is particularly appealing for its versatility, according to Stergio. “We can be creative and flexible in figuring out which types of processes work for the people seeking support, whether that’s having one-on-one, informal, confidential conversations or exploring more active and involved ways of getting their issues addressed,” he says.

Prior to coming to MIT, Stergio worked for six years at the Department of Labor, where he established a new externally facing ombuds office for the Office of Federal Contract Compliance Programs (OFCCP). There, he operated in accordance with the International Ombuds Association’s standards of practice, offering ombuds services to both external stakeholders and OFCCP employees.

He has also served as ombudsperson or in other conflict-management roles for a variety of organizations across multiple sectors. These included the Centers for Disease Control and Prevention, the United Nations Population Fund, General Motors, BMW of North America, and the U.S. Department of the Treasury, among others. From 2013 to 2019, Stergio was a mediator and the manager of commercial and corporate programs for the Boston-based dispute resolution firm MWI.

Stergio has taught conflict resolution courses and delivered mediation and negotiation workshops at multiple universities, including MIT, where he says the interest in his subject matter was palpable. “There was something about the MIT community, whether it was students or staff or faculty. People seemed really energized by the conflict management skills that I was presenting to them,” he recalls. “There was this eagerness to perfect things that was inspiring and contagious.”

“I’m honored to be joining such a prestigious institution, especially one with such a rich history in the ombuds field,” Stergio adds. “I look forward to building on that legacy and working with the MIT community to navigate challenges together.”

Stergio earned a bachelor’s degree from Northeastern University in 2008 and a master’s in conflict resolution from the University of Massachusetts at Boston in 2012. He has served on the executive committee of the Coalition of Federal Ombuds since 2022, as co-chair of the American Bar Association’s ombuds day subcommittee, and as an editor for the newsletter of the ABA’s Dispute Resolution Section. He is also a member of the International Ombuds Association.


Astronomers detect the brightest fast radio burst of all time

The dazzling “RBFLOAT” radio burst, originating in a nearby galaxy, offers the clearest view yet of the environment around these mysterious flashes.


A fast radio burst is an immense flash of radio emission that lasts for just a few milliseconds, during which it can momentarily outshine every other radio source in its galaxy. These flares can be so bright that their light can be seen from halfway across the universe, several billion light years away.

The sources of these brief and dazzling signals are unknown. But scientists now have a chance to study a fast radio burst (FRB) in unprecedented detail. An international team of scientists, including physicists at MIT, has detected a nearby, ultrabright fast radio burst some 130 million light-years from Earth, in the constellation Ursa Major. It is one of the closest FRBs detected to date. It is also the brightest — so bright that the signal has garnered the informal moniker RBFLOAT, for “radio brightest flash of all time.”

The burst’s brightness, paired with its proximity, is giving scientists the closest look yet at FRBs and the environments from which they emerge.

“Cosmically speaking, this fast radio burst is just in our neighborhood,” says Kiyoshi Masui, associate professor of physics and affiliate of MIT’s Kavli Institute for Astrophysics and Space Research. “This means we get this chance to study a pretty normal FRB in exquisite detail.”

Masui and his colleagues report their findings today in the Astrophysical Journal Letters.

Diverse bursts

The clarity of the new detection is thanks to a significant upgrade to the Canadian Hydrogen Intensity Mapping Experiment (CHIME), a large array of halfpipe-shaped antennae based in British Columbia. CHIME was originally designed to detect and map the distribution of hydrogen across the universe, but the telescope is also sensitive to ultrafast and bright radio emissions. Since it started observations in 2018, CHIME has detected about 4,000 fast radio bursts from all parts of the sky. Until now, however, the telescope had not been able to precisely pinpoint the location of each fast radio burst.

CHIME recently got a significant boost in precision, in the form of the CHIME Outriggers: three miniature versions of CHIME, each sited in a different part of North America. Together, the telescopes work as one continent-sized system that can home in on any bright flash that CHIME detects and pin down its location in the sky with extreme precision.

“Imagine we are in New York and there’s a firefly in Florida that is bright for a thousandth of a second, which is usually how quick FRBs are,” says MIT Kavli graduate student Shion Andrew. “Localizing an FRB to a specific part of its host galaxy is analogous to figuring out not just what tree the firefly came from, but which branch it’s sitting on.”

The new fast radio burst is the first detection made using the combination of CHIME and the completed CHIME Outriggers. Together, the telescope array identified the FRB and determined not only the specific galaxy, but also the region of the galaxy from where the burst originated. It appears that the burst arose from the edge of the galaxy, just outside of a star-forming region. The precise localization of the FRB is allowing scientists to study the environment around the signal for clues to what brews up such bursts.

“As we’re getting these much more precise looks at FRBs, we’re better able to see the diversity of environments they’re coming from,” says MIT physics postdoc Adam Lanman.

Lanman, Andrew, and Masui are members of the CHIME Collaboration — which includes scientists from multiple institutions around the world — and are authors of the new paper detailing the discovery of the new FRB detection.

An older edge

Each of CHIME’s Outrigger stations continuously monitors the same swath of sky as the parent CHIME array. Both CHIME and the Outriggers “listen” for radio flashes on incredibly short, millisecond timescales. Even over a few minutes, monitoring at that rate piles up an enormous volume of data, so if CHIME detects no FRB signal, the Outriggers automatically delete the last 40 seconds of data to make room for the next span of measurements.
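This triggered-capture scheme can be pictured as a rolling buffer: raw data stream in continuously, anything older than the retention window is dropped, and a detection from CHIME freezes the window for readout. The sketch below is purely illustrative; the class, the 40-second default, and the data format are assumptions made for the example, not details of the Outriggers’ actual acquisition software.

```python
from collections import deque

class RollingBuffer:
    """Illustrative triggered rolling buffer (not CHIME's real software)."""

    def __init__(self, window_s=40.0):
        self.window_s = window_s        # retention window, in seconds
        self.samples = deque()          # (timestamp, data_chunk) pairs

    def add(self, timestamp, chunk):
        """Append a new chunk and discard anything older than the window."""
        self.samples.append((timestamp, chunk))
        while self.samples and timestamp - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def dump_on_trigger(self):
        """On a candidate burst, freeze and return the buffered data."""
        return list(self.samples)
```

In this picture, data are appended continuously, and a candidate burst from CHIME simply triggers a copy of whatever is currently in the buffer; if no trigger arrives, the oldest data quietly fall off the end.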

On March 16, 2025, CHIME detected an ultrabright flash of radio emissions, which automatically triggered the CHIME Outriggers to record the data. Initially, the flash was so bright that astronomers were unsure whether it was an FRB or simply a terrestrial event caused, for instance, by a burst of cellular communications.

That notion was put to rest as the CHIME Outrigger telescopes homed in on the flash and pinned down its location to NGC 4141, a spiral galaxy in the constellation Ursa Major about 130 million light-years away, which happens to be surprisingly close to our own Milky Way. The detection is one of the closest and brightest fast radio bursts recorded to date.

Follow-up observations in the same region revealed that the burst came from the very edge of an active region of star formation. While it’s still a mystery as to what source could produce FRBs, scientists’ leading hypothesis points to magnetars — young neutron stars with extremely powerful magnetic fields that can spin out high-energy flares across the electromagnetic spectrum, including in the radio band. Physicists suspect that magnetars are found in the center of star formation regions, where the youngest, most active stars are forged. The location of the new FRB, just outside a star-forming region in its galaxy, may suggest that the source of the burst is a slightly older magnetar.

“These are mostly hints,” Masui says. “But the precise localization of this burst is letting us dive into the details of how old an FRB source could be. If it were right in the middle, it would only be thousands of years old — very young for a star. This one, being on the edge, may have had a little more time to bake.”

No repeats

In addition to pinpointing where the new FRB was in the sky, the scientists also looked back through CHIME data to see whether any similar flares occurred in the same region in the past. Since the first FRB was discovered in 2007, astronomers have detected over 4,000 radio flares. Most of these bursts are one-offs. But a few percent have been observed to repeat, flashing every so often. And an even smaller fraction of these repeaters flash in a pattern, like a rhythmic heartbeat, before flaring out. A central question surrounding fast radio bursts is whether repeaters and nonrepeaters come from different origins.

The scientists looked through CHIME’s six years of data and came up empty: This new FRB appears to be a one-off, at least in the last six years. The findings are particularly exciting, given the burst’s proximity. Because it is so close and so bright, scientists can probe the environment in and around the burst for clues to what might produce a nonrepeating FRB.

“Right now we’re in the middle of this story of whether repeating and nonrepeating FRBs are different. These observations are putting together bits and pieces of the puzzle,” Masui says.

“There’s evidence to suggest that not all FRB progenitors are the same,” Andrew adds. “We’re on track to localize hundreds of FRBs every year. The hope is that a larger sample of FRBs localized to their host environments can help reveal the full diversity of these populations.”

The construction of the CHIME Outriggers was funded by the Gordon and Betty Moore Foundation and the U.S. National Science Foundation. The construction of CHIME was funded by the Canada Foundation for Innovation and the provinces of Quebec, Ontario, and British Columbia.


Study links rising temperatures and declining moods

An analysis of social media in 157 countries finds hotter weather is associated with more negative sentiments.


Rising global temperatures affect human activity in many ways. Now, a new study illuminates an important dimension of the problem: Very hot days are associated with more negative moods, as shown by a large-scale look at social media postings.

Overall, the study examines 1.2 billion social media posts from 157 countries over the span of a year. The research finds that when the temperature rises above 95 degrees Fahrenheit, or 35 degrees Celsius, expressed sentiments become about 25 percent more negative in lower-income countries and about 8 percent more negative in better-off countries. Extreme heat affects people emotionally, not just physically.

“Our study reveals that rising temperatures don’t just threaten physical health or economic productivity — they also affect how people feel, every day, all over the world,” says Siqi Zheng, a professor in MIT’s Department of Urban Studies and Planning (DUSP) and Center for Real Estate (CRE), and co-author of a new paper detailing the results. “This work opens up a new frontier in understanding how climate stress is shaping human well-being at a planetary scale.”

The paper, “Unequal Impacts of Rising Temperatures on Global Human Sentiment,” is published today in the journal One Earth. The authors are Jianghao Wang, of the Chinese Academy of Sciences; Nicolas Guetta-Jeanrenaud SM ’22, a graduate of MIT’s Technology and Policy Program (TPP) and Institute for Data, Systems, and Society; Juan Palacios, a visiting assistant professor at MIT’s Sustainable Urbanization Lab (SUL) and an assistant professor at Maastricht University; Yichun Fan, of SUL and Duke University; Devika Kakkar, of Harvard University; Nick Obradovich, of SUL and the Laureate Institute for Brain Research in Tulsa; and Zheng, who is the STL Champion Professor of Urban and Real Estate Sustainability at CRE and DUSP. Zheng is also the faculty director of CRE and founded the Sustainable Urbanization Lab in 2019.

Social media as a window

To conduct the study, the researchers evaluated 1.2 billion posts from the social media platforms Twitter and Weibo, all of which appeared in 2019. They used a natural language processing model called Bidirectional Encoder Representations from Transformers (BERT) to analyze posts in 65 languages across the 157 countries in the study.

Each social media post was given a sentiment rating from 0.0 (for very negative posts) to 1.0 (for very positive posts). The posts were then aggregated geographically across 2,988 locations and matched with local weather data. From there, the researchers could estimate the relationship between extreme temperatures and expressed sentiment.
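As a rough illustration of this kind of pipeline, the sketch below aggregates per-post sentiment scores to location-days, joins them with daily temperatures, and compares average sentiment on very hot days against all other days. The file names and column names are hypothetical placeholders, and this is not the study’s actual code.

```python
import pandas as pd

# Hypothetical inputs (illustrative schema, not the study's):
# posts_scored.csv : post_id, location_id, date, sentiment  (0.0 to 1.0)
# weather_daily.csv: location_id, date, tmax_c              (daily max temp, C)
posts = pd.read_csv("posts_scored.csv")
weather = pd.read_csv("weather_daily.csv")

# Aggregate post-level scores to location-days, then attach temperature
daily = (posts.groupby(["location_id", "date"], as_index=False)
              .agg(mean_sentiment=("sentiment", "mean"),
                   n_posts=("sentiment", "size")))
daily = daily.merge(weather, on=["location_id", "date"])

# Crude comparison: sentiment on days at or above 35 C vs. all other days
hot = daily["tmax_c"] >= 35
print("hot days:  ", daily.loc[hot, "mean_sentiment"].mean())
print("other days:", daily.loc[~hot, "mean_sentiment"].mean())
```

The published analysis is far more involved than this; the snippet only shows the basic shape of aggregating posts and linking them to weather.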

“Social media data provides us with an unprecedented window into human emotions across cultures and continents,” Wang says. “This approach allows us to measure emotional impacts of climate change at a scale that traditional surveys simply cannot achieve, giving us real-time insights into how temperature affects human sentiment worldwide.”

To assess the effects of temperatures on sentiment in higher-income and middle-to-lower-income settings, the scholars also used a World Bank threshold of $13,845 in per capita gross national income, finding that in places with incomes below that level, the effects of heat on mood were triple those found in economically more robust settings.

“Thanks to the global coverage of our data, we find that people in low- and middle-income countries experience sentiment declines from extreme heat that are three times greater than those in high-income countries,” Fan says. “This underscores the importance of incorporating adaptation into future climate impact projections.”

In the long run

Using long-term global climate models, and expecting some adaptation to heat, the researchers also produced a long-range estimate of the effects of extreme temperatures on sentiment by the year 2100. Extending the current findings to that time frame, they project a 2.3 percent worsening of people’s emotional well-being by then based on high temperatures alone, although that remains a long-range projection.

“It's clear now, with our present study adding to findings from prior studies, that weather alters sentiment on a global scale,” Obradovich says. “And as weather and climates change, helping individuals become more resilient to shocks to their emotional states will be an important component of overall societal adaptation.”

The researchers note that there are many nuances to the subject, and room for continued research in this area. For one thing, social media users are unlikely to be a perfectly representative sample of the population, with young children and the elderly almost certainly using social media less than other people. However, as the researchers observe in the paper, the very young and the elderly are probably especially vulnerable to heat shocks, which means the true emotional response to hot weather is possibly even larger than their study can capture.

The research is part of the Global Sentiment project led by the MIT Sustainable Urbanization Lab, and the study’s dataset is publicly available. Zheng and other co-authors have previously investigated these dynamics using social media, although never before at this scale.

“We hope this resource helps researchers, policymakers, and communities better prepare for a warming world,” Zheng says.

The research was supported, in part, by Zheng’s chaired professorship research fund, and grants Wang received from the National Natural Science Foundation of China and the Chinese Academy of Sciences. 


The “Mississippi Bubble” and the complex history of Haiti

Historian Malick Ghachem’s new book illuminates the pre-revolutionary changes that set Haiti’s long-term economic structure in place.


Many things account for Haiti’s modern troubles. A good perspective on them comes from going back in time to 1715 or so — and grappling with a far-flung narrative involving the French monarchy, a financial speculator named John Law, and a stock-market crash called the “Mississippi Bubble.”

To condense: After the death of Louis XIV in 1715, France was mired in debt following decades of war. The country briefly turned over its economic policy to Law, a Scotsman who implemented a system in which, among other things, French debt was retired while private monopoly companies expanded overseas commerce.

This project did not go entirely as planned. Stock-market speculation created the “Mississippi Bubble” and crash of 1719-20. Amid the chaos, Law lost a short-lived fortune and left France.

Yet Law’s system had lasting effects. French expansionism helped spur Haiti’s “sugar revolution” of the early 1700s, in which the country’s economy first became oriented around labor-intensive sugar plantations. Using enslaved workers and deploying violence against political enemies, plantation owners helped define Haiti’s current-day geography and place within the global economy, creating an extractive system benefitting a select few.

While there has been extensive debate about how the Haitian Revolution of 1789-1804 (and the 1825 “indemnity” Haiti agreed to pay France) has influenced the country’s subsequent path, the events of the early 1700s help illuminate the whole picture.

“This is a moment of transformation for Haiti’s history that most people don’t know much about,” says MIT historian Malick Ghachem. “And it happened well before independence. It goes back to the 18th century when Haiti began to be enmeshed in the debtor-creditor relationships from which it has never really escaped. The 1720s was the period when those relationships crystallized.”

Ghachem examines the economic transformations and multi-sided power struggles of that time in a new book, “The Colony and the Company: Haiti after the Mississippi Bubble,” published this summer by Princeton University Press.

“How did Haiti come to be the way it is today? This is the question everybody asks about it,” says Ghachem. “This book is an intervention in that debate.”

Enmeshed in the crisis

Ghachem is both a professor and head of MIT’s program in history. A trained lawyer, he works across France’s global history and American legal history. His 2012 book “The Old Regime and the Haitian Revolution,” also situated in pre-revolutionary Haiti, examines the legal backdrop of the drive for emancipation.

“The Colony and the Company” draws on original archival research while arriving at two related conclusions: Haiti was a big part of the global bubble of the 1710s, and that bubble and its aftermath are a big part of Haiti’s history.

After all, until the late 1600s, Haiti, then known as Saint Domingue, was “a fragile, mostly ungoverned, and sparsely settled place of uncertain direction,” as Ghachem writes in the book. The establishment of Haiti’s economy is not just the background of later events, but a formative event on its own.

And while the “sugar revolution” may have reached Haiti sooner or later, it was amplified by France’s quest for new sources of revenue. Louis XIV’s military agenda had been a fiscal disaster for the French. Law — a convicted murderer, and evidently a persuasive salesman — proposed a restructuring scheme that concentrated revenue-raising and other fiscal powers in a monopoly overseas trading company and bank overseen by Law himself.

France’s search for economic growth beyond its borders led the company to Haiti, to tap its agricultural potential. For that matter, as Ghachem details, multiple countries were expanding their overseas activities, and France, Britain, and Spain all markedly increased their slave-trading. Within a few decades, Haiti was a center of global sugar production, based on slave labor.

“When the company is seen as the answer to France’s own woes, Haiti becomes enmeshed in the crisis,” Ghachem says. “The Mississippi Bubble of 1719-20 was really a global event. And one of the theaters where it played out most dramatically was Haiti.”

As it happens, in Haiti, the dynamics of this were complex. Local planters did not want to be answerable to Law’s company, and fended it off, but, as Ghachem writes, they “internalized and privatized the financial and economic logic of the System against which they had rebelled, making of it a script for the management of plantation society.”

That society was complex. One of the main elements of “The Colony and the Company” is the exploration of its nuances. Haiti was home to a variety of people, including Jesuit missionaries, European women who had been re-settled there, and maroons (freed or escaped slaves living apart from plantations), among others. Plantation life came with violence, civic instability, and a lack of economic alternatives.

“What’s called the ‘success’ of the colony as a French economic force is really inseparable from the conditions that make it hard for Haiti to survive as an independent nation after the revolution,” Ghachem observes.

Stories in a new light

In public discourse, questions about Haiti’s past are often considered highly relevant to its present, as a near-failed state whose capital city is now substantially controlled by gangs, with no end to violence in sight. Some people draw a through line between the present and Haiti’s revolutionary-era condition. To Ghachem, however, the revolution changed some political dynamics but not the underlying conditions of life in the country.

“One [view] is that it’s the Haitian Revolution that leads to Haiti’s immiseration and violence and political dysfunction and its economic underdevelopment,” Ghachem says. “I think that argument is wrong. It’s an older problem that goes back to Haiti’s relationship with France in the late 17th and early 18th centuries. The revolution compounds that problem, and does so significantly, because of how France responds. But the terms of Haiti’s subordination are already set.”

Other scholars have praised “The Colony and the Company.” Pernille Røge of the University of Pittsburgh has called it “a multilayered and deeply compelling history rooted in a careful analysis of both familiar and unfamiliar primary sources.”

For his part, Ghachem hopes to persuade anyone interested in Haiti’s past and present to look more expansively at the subject, and consider how the deep roots of Haiti’s economy have helped structure its society.

“I’m trying to keep up with the day job of a historian,” Ghachem says. “Which includes finding stories that aren’t well-known, or are well-known and have aspects that are underappreciated, and telling them in a new light.”


Lincoln Laboratory reports on airborne threat mitigation for the NYC subway

Researchers studied air flow characteristics, sensor performance, and mitigation strategies within this complex subway system.


A multiyear program at MIT Lincoln Laboratory to characterize how biological and chemical vapors and aerosols disperse through the New York City subway system is coming to a close. The program, part of the U.S. Department of Homeland Security (DHS) Science and Technology Directorate's Urban Area Security Initiative, builds on other efforts at Lincoln Laboratory to detect chemical and biological threats, validate air dispersion models, and improve emergency protocols in urban areas in case of an airborne attack. The results of this program will inform the New York Metropolitan Transportation Authority (MTA) on how best to install an efficient, cost-effective system for airborne threat detection and mitigation throughout the subway. On a broader scale, the study will help the national security community understand pragmatic chemical and biological defense options for mass transit, critical facilities, and special events.

Trina Vian from the laboratory's Counter–Weapons of Mass Destruction (WMD) Systems Group led this project, which she says had as much to do with air flow and sensors as it did with MTA protocols and NYC commuters. "There are real dangers associated with panic during an alarm. People can get hurt during mass evacuation, or lose trust in a system and the authorities that administer that system, if there are false alarms," she says. "A novel aspect of our project was to investigate effective low-regret response options, meaning those with little operational consequence to responding to a false alarm."

Currently, depending on the severity of the alarm, the MTA's response can include stopping service and evacuating passengers and employees.

A complex environment for testing

For the program, which started in 2019, Vian and her team collected data on how chemical and biological sensors performed in the subway, what factors affected sensor accuracy, and how different mitigation protocols fared in stopping an airborne threat from spreading and removing the threat from a contaminated location. For their tests, they released batches of a safe, custom-developed aerosol simulant within Grand Central Station that they could track with DNA barcodes. Each batch had a different barcode, which allowed the team to differentiate among them and quantitatively assess different combinations of mitigation strategies.

To control and isolate air flow, the team tested static air curtains as well as air filtration systems. They also tested a spray knockdown system, developed by Sandia National Laboratories, that is designed to reduce and isolate particulate hazards in large-volume areas. The system sprays a fine water mist into the tunnels that attaches to threat particulates and uses gravity to rain out the threat material; the droplets have a particular size and concentration, and an electrostatic field is applied to the spray. The original idea for the system was adapted from the coal mining industry, which used liquid sprayers to reduce the amount of inhalable soot.

The tests were done in a busy environment, and the team was required to complete trainings on MTA protocols such as track safety and how to interact with the public.

"We had long and sometimes very dirty days," says Jason Han of the Counter–WMD Systems Group, who collected measurements in the tunnels and analyzed the data. "We all wore bright orange contractor safety vests, which made people think we were official employees of the MTA. We would often get approached by people asking for directions!"

At times, issues such as power outages or database errors could disrupt data capture.

"We learned fairly early on that we had to capture daily data backups and keep a daily evolving master list of unique sensor identifiers and locations," says fellow team member Cassie Smith. "We developed workflows and wrote scripts to help automate the process, which ensured successful sensor data capture and attribution."

The team also worked closely with the MTA to make sure their tests and data capture ran smoothly. "The MTA was great at helping us maintain the test bed, doing as much as they could in our physical absence," Vian says.

Calling on industry

Another crucial aspect of the program was to connect with the greater chemical and biological industrial community to solicit their sensors for testing. These partnerships reduced the cost for DHS to bring new sensing technologies into the project, and, in return, participants gained a testing and data collection opportunity within the challenging NYC subway environment.

The team ultimately fielded 16 different sensors, each with varying degrees of maturity, that operated through a range of methods, such as ultraviolet laser–induced fluorescence, polymerase chain reaction, and long-wave infrared spectrometry.

"The partners appreciated the unique data they got and the opportunity to work with the MTA and experience an environment and customer base that they may not have anticipated before," Vian says.

The team finished testing in 2024 and has delivered the final report to the DHS. The MTA will use the report to help expand their PROTECT chemical detection system (originally developed by Argonne National Laboratory) from Grand Central Station into adjacent stations. They expect to complete this work in 2026.

"The value of this program cannot be overstated. This partnership with DHS and MIT Lincoln Laboratory has led to the identification of the best-suited systems for the MTA’s unique operating environment," says Michael Gemelli, director of chemical, biological, radiological, and nuclear/WMD detection and mitigation at the New York MTA.

"Other transit authorities can leverage these results to start building effective chemical and biological defense systems for their own specific spaces and threat priorities," adds Benjamin Ervin, leader of Lincoln Laboratory's Counter–WMD Systems Group. "Specific test and evaluation within the operational environment of interest, however, is always recommended to ensure defense system objectives are met."

Building these types of decision-making reports for airborne chemical and biological sensing has been a part of Lincoln Laboratory's mission since the mid-1990s. The laboratory also helped to define priorities in the field when DHS was forming in the early 2000s.

Beyond this study, Lincoln Laboratory is leading several other projects focused on forecasting the impact of novel chemical and biological threats across multiple domains, including military, space, agriculture, and health, and on prototyping rapid, autonomous, high-confidence biological identification capabilities for the homeland to provide actionable evidence of hazardous environments.


Learning from punishment

A new computational model makes sense of the cognitive processes humans use to evaluate punishment.


From toddlers’ timeouts to criminals’ prison sentences, punishment reinforces social norms, making it known that an offender has done something unacceptable. At least, that is usually the intent — but the strategy can backfire. When a punishment is perceived as too harsh, observers can be left with the impression that an authority figure is motivated by something other than justice.

It can be hard to predict what people will take away from a particular punishment, because everyone makes their own inferences not just about the acceptability of the act that led to the punishment, but also the legitimacy of the authority who imposed it. A new computational model developed by scientists at MIT’s McGovern Institute for Brain Research makes sense of these complicated cognitive processes, recreating the ways people learn from punishment and revealing how their reasoning is shaped by their prior beliefs.

Their work, reported Aug. 4 in the journal PNAS, explains how a single punishment can send different messages to different people, and even strengthen the opposing viewpoints of groups who hold different opinions about authorities or social norms.

“The key intuition in this model is the fact that you have to be evaluating simultaneously both the norm to be learned and the authority who’s punishing,” says McGovern investigator and John W. Jarve Professor of Brain and Cognitive Sciences Rebecca Saxe, who led the research. “One really important consequence of that is even where nobody disagrees about the facts — everybody knows what action happened, who punished it, and what they did to punish it — different observers of the same situation could come to different conclusions.”

For example, she says, a child who is sent to timeout after biting a sibling might interpret the event differently than the parent. One might see the punishment as proportional and important, teaching the child not to bite. But if the biting, to the toddler, seemed a reasonable tactic in the midst of a squabble, the punishment might be seen as unfair, and the lesson will be lost.

People draw on their own knowledge and opinions when they evaluate these situations — but to study how the brain interprets punishment, Saxe and graduate student Setayesh Radkani wanted to take those personal ideas out of the equation. They needed a clear understanding of the beliefs that people held when they observed a punishment, so they could learn how different kinds of information altered their perceptions. So Radkani set up scenarios in imaginary villages where authorities punished individuals for actions that had no obvious analog in the real world.

Participants observed these scenarios in a series of experiments, with different information offered in each one. In some cases, for example, participants were told that the person being punished was either an ally or a competitor of the authority, whereas in other cases, the authority’s possible bias was left ambiguous.

“That gives us a really controlled setup to vary prior beliefs,” Radkani explains. “We could ask what people learn from observing punitive decisions with different severities, in response to acts that vary in their level of wrongness, by authorities that vary in their level of different motives.”

For each scenario, participants were asked to evaluate four factors: how much the authority figure cared about justice; the selfishness of the authority; the authority’s bias for or against the individual being punished; and the wrongness of the punished act. The research team asked these questions when participants were first introduced to the hypothetical society, then tracked how their responses changed after they observed the punishment. Across the scenarios, participants’ initial beliefs about the authority and the wrongness of the act shaped the extent to which those beliefs shifted after they observed the punishment.

Radkani was able to replicate these nuanced interpretations using a cognitive model framed around an idea that Saxe’s team has long used to think about how people interpret the actions of others. That is, to make inferences about others’ intentions and beliefs, we assume that people choose actions that they expect will help them achieve their goals.

To apply that concept to the punishment scenarios, Radkani developed a model that evaluates the meaning of a punishment (an action aimed at achieving a goal of the authority) by considering the harm associated with that punishment; its costs or benefits to the authority; and its proportionality to the violation. By assessing these factors, along with prior beliefs about the authority and the punished act, the model was able to predict people’s responses to the hypothetical punishment scenarios, supporting the idea that people use a similar mental model. “You need to have them consider those things, or you can’t make sense of how people understand punishment when they observe it,” Saxe says.
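One way to picture this kind of reasoning is as a small Bayesian update over two unknowns, the wrongness of the act and the authority’s concern for justice, given an observed punishment severity. The toy model below is only a sketch under assumed numbers: the two-point grids, the proportional-punishment rule, and the Gaussian-style likelihood are illustrative choices, not the paper’s actual model.

```python
import numpy as np

# Two-point grids for the unknowns (illustrative values)
wrongness = np.array([0.2, 0.8])   # the act: fairly minor vs. seriously wrong
justice   = np.array([0.2, 0.8])   # the authority: weak vs. strong justice motive

# Observer's prior beliefs over (wrongness, justice); uniform here,
# but skewing this prior changes what the same punishment "teaches"
prior = np.ones((2, 2)) / 4.0

def likelihood(severity, w, j):
    """How expected a punishment of this severity is, assuming a
    justice-motivated authority punishes roughly in proportion to wrongness,
    while a less just authority punishes without regard to it."""
    expected = j * w + (1 - j) * 0.5
    return np.exp(-((severity - expected) ** 2) / 0.05)

def posterior(severity):
    post = np.array([[prior[i, k] * likelihood(severity, w, j)
                      for k, j in enumerate(justice)]
                     for i, w in enumerate(wrongness)])
    return post / post.sum()

# With a uniform prior, a severe punishment is best explained by a seriously
# wrong act punished by a justice-motivated authority; if the prior instead
# favors "the act was minor," the same observation shifts more belief toward
# doubting the authority's motives.
print(posterior(0.9))
```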

Even though the team designed their experiments to preclude preconceived ideas about the people and actions in their imaginary villages, not everyone drew the same conclusions from the punishments they observed. Saxe’s group found that participants’ general attitudes toward authority influenced their interpretation of events. Those with more authoritarian attitudes — assessed through a standard survey — tended to judge punished acts as more wrong and authorities as more motivated by justice than other observers.

“If we differ from other people, there’s a knee-jerk tendency to say, ‘either they have different evidence from us, or they’re crazy,’” Saxe says. Instead, she says, “It’s part of the way humans think about each other’s actions.”

“When a group of people who start out with different prior beliefs get shared evidence, they will not end up necessarily with shared beliefs. That’s true even if everybody is behaving rationally,” says Saxe.

This way of thinking also means that the same action can simultaneously strengthen opposing viewpoints. The Saxe lab’s modeling and experiments showed that when those viewpoints shape individuals’ interpretations of future punishments, the groups’ opinions will continue to diverge. For instance, a punishment that seems too harsh to a group who suspects an authority is biased can make that group even more skeptical of the authority’s future actions. Meanwhile, people who see the same punishment as fair and the authority as just will be more likely to conclude that the authority figure’s future actions are also just. 

“You will get a vicious cycle of polarization, staying and actually spreading to new things,” says Radkani.

The researchers say their findings point toward strategies for communicating social norms through punishment. “It is exactly sensible in our model to do everything you can to make your action look like it’s coming out of a place of care for the long-term outcome of this individual, and that it’s proportional to the norm violation they did,” Saxe says. “That is your best shot at getting a punishment interpreted pedagogically, rather than as evidence that you’re a bully.”

Nevertheless, she says that won’t always be enough. “If the beliefs are strong the other way, it’s very hard to punish and still sustain a belief that you were motivated by justice.”

Joining Saxe and Radkani on the paper is Joshua Tenenbaum, MIT professor of brain and cognitive sciences. The study was funded, in part, by the Patrick J. McGovern Foundation.


A boost for the precision of genome editing

Researchers develop a fast-acting, cell-permeable protein system to control CRISPR-Cas9, reducing off-target effects and advancing gene therapy.


The U.S. Food and Drug Administration’s recent approval of the first CRISPR-Cas9–based gene therapy has marked a major milestone in biomedicine, validating genome editing as a promising treatment strategy for disorders like sickle cell disease, muscular dystrophy, and certain cancers.

CRISPR-Cas9, often likened to “molecular scissors,” allows scientists to cut DNA at targeted sites to snip, repair, or replace genes. But despite its power, Cas9 poses a critical safety risk: The active enzyme can linger in cells and cause unintended DNA breaks — so-called off-target effects — which may trigger harmful mutations in healthy genes.

Now, researchers in the labs of Ronald T. Raines, MIT professor of chemistry, and Amit Choudhary, professor of medicine at Harvard Medical School, have engineered a precise way to turn Cas9 off after its job is done — significantly reducing off-target effects and improving the clinical safety of gene editing. Their findings are detailed in a new paper published in the Proceedings of the National Academy of Sciences (PNAS).

“To ‘turn off’ Cas9 after it achieves its intended genome-editing outcome, we developed the first cell-permeable anti-CRISPR protein system,” says Raines, the Roger and Georges Firmenich Professor of Natural Products Chemistry. “Our technology reduces the off-target activity of Cas9 and increases its genome-editing specificity and clinical utility.”

The new tool — called LFN-Acr/PA — uses a protein-based delivery system to ferry anti-CRISPR proteins into human cells rapidly and efficiently. While natural Type II anti-CRISPR proteins (Acrs) are known to inhibit Cas9, their use in therapy has been limited because they’re often too bulky or charged to enter cells, and conventional delivery methods are too slow or ineffective.

LFN-Acr/PA overcomes these hurdles using a component derived from anthrax toxin to introduce Acrs into cells within minutes. Even at picomolar concentrations, the system shuts down Cas9 activity with remarkable speed and precision — boosting genome-editing specificity up to 40 percent.

Bradley L. Pentelute, MIT professor of chemistry, is an expert on the anthrax delivery system, and is also an author of the paper.

The implications of this advance are wide-ranging. With patent applications filed, LFN-Acr/PA represents a faster, safer, and more controllable means of harnessing CRISPR-Cas9, opening the door to more-refined gene therapies with fewer unintended consequences.

The research was supported by the National Institutes of Health and a Gilliam Fellowship from the Howard Hughes Medical Institute awarded to lead author Axel O. Vera, a graduate student in the Department of Chemistry.


Materials Research Laboratory: Driving interdisciplinary materials research at MIT

The MRL helps bring together academia, government, and industry to accelerate innovation in sustainability, energy, and advanced materials.


Materials research thrives across MIT, spanning disciplines and departments. Recent breakthroughs include strategies for securing sustainable supplies of nickel — critical to clean-energy technologies (Department of Materials Science and Engineering); the discovery of unexpected magnetism in atomically thin quantum materials (Department of Physics); and the development of adhesive coatings that reduce scarring around medical implants (departments of Mechanical Engineering and Civil and Environmental Engineering).

At the center of these efforts is the Materials Research Laboratory (MRL), a hub that connects and supports the Institute’s materials research community. “MRL serves as a home for the entire materials research community at MIT,” says C. Cem Tasan, who became director in April 2025. “Our goal is to make it easier for our faculty to conduct their extraordinary research,” adds Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering.

A storied history

Established in 2017, the MRL brings together more than 30 researchers and builds on a 48-year legacy of innovation. It was formed through the merger of the MIT Materials Processing Center (MPC) and the Center for Materials Science and Engineering (CMSE), two institutions that helped lay the foundation for MIT’s global leadership in materials science.

Over the years, research supported by MPC and CMSE has led to transformative technologies and successful spinout companies. Notable examples include AMSC, based on advances in superconductivity; OmniGuide, which developed cutting-edge optical fiber technologies; and QD Vision, a pioneer in quantum dot technology acquired by Samsung in 2016. Another landmark achievement was the development of the first germanium laser to operate at room temperature, a breakthrough now used in optical communications.

Enabling research through partnership and support

MRL is launching targeted initiatives to connect MIT researchers with industry partners around specific technical challenges. Each initiative will be led by a junior faculty member working closely with MRL to identify a problem that aligns with their research expertise and is relevant to industry needs.

Through multi-year collaborations with participating companies, faculty can explore early-stage solutions in partnership with postdocs or graduate students. These initiatives are designed to be agile and interdisciplinary, with the potential to grow into major, long-term research programs.

Behind-the-scenes support, front-line impact

MRL provides critical infrastructure that enables faculty to focus on discovery, not logistics. “MRL works silently in the background, where every problem a principal investigator has related to the administration of materials research is solved with efficiency, good organization, and minimum effort,” says Tasan.

This quiet but powerful support spans multiple areas.

Together, these functions ensure that research at MRL runs smoothly and effectively — from initial idea to lasting innovation.

Leadership with a vision

Tasan, who also leads a research group focused on metallurgy, says he took on the directorship because “I thrive on new challenges.” He also saw the role as an opportunity to contribute more broadly to MIT. 

“I believe MRL can play an even greater role in advancing materials research across the Institute, and I’m excited to help make that happen,” he says.


New laser “comb” can enable rapid identification of chemicals with extreme precision

The ultrabroadband infrared frequency comb could be used for chemical detection in portable spectrometers or high-resolution remote sensors.


Optical frequency combs are specially designed lasers that act like rulers to accurately and rapidly measure specific frequencies of light. They can be used to detect and identify chemicals and pollutants with extremely high precision.

Frequency combs would be ideal for remote sensors or portable spectrometers because they can enable accurate, real-time monitoring of multiple chemicals without complex moving parts or external equipment.

But developing frequency combs with high enough bandwidth for these applications has been a challenge. Often, researchers must add bulky components that limit scalability and performance.

Now, a team of MIT researchers has demonstrated a compact, fully integrated device that uses a carefully crafted mirror to generate a stable frequency comb with very broad bandwidth. The mirror they developed, along with an on-chip measurement platform, offers the scalability and flexibility needed for mass-producible remote sensors and portable spectrometers. This development could enable more accurate environmental monitors that can identify multiple harmful chemicals from trace gases in the atmosphere.

“The broader the bandwidth a spectrometer has, the more powerful it is, but dispersion is in the way. Here we took the hardest problem that limits bandwidth and made it the centerpiece of our study, addressing every step to ensure robust frequency comb operation,” says Qing Hu, Distinguished Professor in Electrical Engineering and Computer Science at MIT, principal investigator in the Research Laboratory of Electronics, and senior author on an open-access paper describing the work.

He is joined on the paper by lead author Tianyi Zeng PhD ’23; as well as Yamac Dikmelik of General Dynamics Mission Systems; Feng Xie and Kevin Lascola of Thorlabs Quantum Electronics; and David Burghoff SM ’09, PhD ’14, an assistant professor at the University of Texas at Austin. The research appears today in Light: Science and Applications.

Broadband combs

An optical frequency comb produces a spectrum of equally spaced laser lines, which resemble the teeth of a comb.

Scientists can generate frequency combs using several types of lasers for different wavelengths. By using a laser that produces long wave infrared radiation, such as a quantum cascade laser, they can use frequency combs for high-resolution sensing and spectroscopy.

In dual-comb spectroscopy (DCS), the beam of one frequency comb travels straight through the system and strikes a detector at the other end. The beam of the second frequency comb passes through a chemical sample before striking the same detector. Using the results from both combs, scientists can faithfully replicate the chemical features of the sample at much lower frequencies, where signals can be easily analyzed.
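To see why this down-conversion works, consider two combs whose repetition rates differ by a small offset: the beat between corresponding comb lines lands at a radio frequency proportional to the line index, compressing the optical spectrum by the ratio of the repetition rate to the offset. The numbers in the sketch below are hypothetical and chosen only to illustrate the scaling; they are not parameters from this work.

```python
import numpy as np

# Hypothetical comb parameters (illustrative only)
f_rep1 = 9.8e9              # repetition rate of comb 1, Hz
delta = 1.0e6               # small repetition-rate offset of comb 2, Hz
f_rep2 = f_rep1 + delta
n = np.arange(3000, 3010)   # a handful of comb-line indices

# Optical comb lines (offset frequencies taken as zero for simplicity)
f1 = n * f_rep1
f2 = n * f_rep2

# Beats between corresponding lines fall in the radio-frequency range,
# compressing the optical spectrum by a factor of f_rep / delta
rf_beats = np.abs(f1 - f2)           # equals n * delta
print(rf_beats / 1e9)                # RF beat notes, in GHz
print("compression factor:", f_rep1 / delta)
```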

The frequency combs must have high bandwidth, or they will only be able to detect a small frequency range of chemical compounds, which could lead to false alarms or inaccurate results.

Dispersion is the most important factor that limits a frequency comb’s bandwidth. If there is dispersion, the laser lines are not evenly spaced, which is incompatible with the formation of frequency combs.
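In idealized, textbook form (not notation drawn from this paper), the comb lines sit on a rigid grid set by an offset frequency and the repetition rate, while dispersion adds a frequency-dependent phase that pulls the modes off that grid:

```latex
% Ideal comb grid, and a quadratic (group-velocity) dispersion term that perturbs it
f_n = f_{\mathrm{ceo}} + n\, f_{\mathrm{rep}}, \qquad
\phi(\omega) \approx \phi_0 + \tau\,(\omega - \omega_0)
               + \tfrac{1}{2}\, \beta_2 L\, (\omega - \omega_0)^2
```

Here beta_2 is the group-velocity dispersion parameter and L the propagation length; the quadratic term makes the effective spacing between neighboring modes frequency-dependent, which is what a dispersion-compensating element such as a double-chirped mirror is engineered to cancel across the band.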

“With long wave infrared radiation, the dispersion will be very high. There is no way to get around it, so we have to find a way to compensate for it or counteract it by engineering our system,” Hu says.

Many existing approaches aren’t flexible enough to be used in different scenarios or don’t enable high enough bandwidth.

Hu’s group previously solved this problem in a different type of frequency comb, one that used terahertz waves, by developing a double-chirped mirror (DCM).

A DCM is a special type of optical mirror that has multiple layers with thicknesses that change gradually from one end to the other. They found that this DCM, which has a corrugated structure, could effectively compensate for dispersion when used with a terahertz laser.

“We tried to borrow this trick and apply it to an infrared comb, but we ran into lots of challenges,” Hu says.

Because infrared waves are 10 times shorter than terahertz waves, fabricating the new mirror required an extreme level of precision. At the same time, they needed to coat the entire DCM in a thick layer of gold to draw away heat during laser operation. Plus, their dispersion measurement system, designed for terahertz waves, wouldn’t work with infrared waves, whose frequencies are about 10 times higher.

“After more than two years of trying to implement this scheme, we reached a dead end,” Hu says.

A new solution

Ready to throw in the towel, the team realized something they had missed. They had designed the mirror with corrugation to compensate for the lossy terahertz laser, but infrared radiation sources aren’t as lossy.

This meant they could use a standard DCM design to compensate for dispersion, which is compatible with infrared radiation. However, they still needed to create curved mirror layers to capture the beam of the laser, which made fabrication much more difficult than usual.

“The adjacent layers of mirror differ only by tens of nanometers. That level of precision precludes standard photolithography techniques. On top of that, we still had to etch very deeply into the notoriously stubborn material stacks. Achieving those critical dimensions and etch depths was key to unlocking broadband comb performance,” Zeng says. In addition to precisely fabricating the DCM, they integrated the mirror directly onto the laser, making the device extremely compact. The team also developed a high-resolution, on-chip dispersion measurement platform that doesn’t require bulky external equipment.

“Our approach is flexible. As long as we can use our platform to measure the dispersion, we can design and fabricate a DCM that compensates for it,” Hu adds.

Taken together, the DCM and on-chip measurement platform enabled the team to generate stable infrared laser frequency combs that had far greater bandwidth than can usually be achieved without a DCM.

In the future, the researchers want to extend their approach to other laser platforms that could generate combs with even greater bandwidth and higher power for more demanding applications.

“These researchers developed an ingenious nanophotonic dispersion compensation scheme based on an integrated air–dielectric double-chirped mirror. This approach provides unprecedented control over dispersion, enabling broadband comb formation at room temperature in the long-wave infrared. Their work opens the door to practical, chip-scale frequency combs for applications ranging from chemical sensing to free-space communications,” says Jacob B. Khurgin, a professor at the Johns Hopkins University Whiting School of Engineering, who was not involved with this paper.

This work is funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and the Gordon and Betty Moore Foundation. This work was carried out, in part, using facilities at MIT.nano.


Graduate work with an impact — in big cities and on campus

PhD student Nick Allen has helped mainstream new tax-reform concepts for policymakers, while working to enhance MIT grad-school life.


While working to boost economic development in Detroit in the late 2010s, Nick Allen found he was running up against a problem.

The city was trying to spur more investment after long-term industrial flight to suburbs and other states. Relying more heavily on property taxes for revenue, the city was negotiating individualized tax deals with prospective businesses. That’s hardly a scenario unique to Detroit, but such deals involved lengthy approval processes that slowed investment decisions and made smaller projects seem unrealistic. 

Moreover, while creating small pockets of growth, these individualized tax abatements were not changing the city’s broader fiscal structure. They also favored those with leverage and resources to work the system for a break.

“The thing you really don’t want to do with taxes is have very particular, highly procedural ways of adjusting the burdens,” says Allen, now a doctoral student in MIT’s Department of Urban Studies and Planning (DUSP). “You want a simple process that fits people’s ideas about what fairness looks like.”

So, after starting his PhD program at MIT, Allen kept studying urban fiscal policy. Along with a group of other scholars, he has produced research papers making the case for a land-value tax — a common tax rate on land that, combined with reduced property taxes, could raise more local revenue by encouraging more city-wide investment, even while lowering tax burdens on residents and businesses. As a bonus, it could also reduce foreclosures.
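As a stylized arithmetic example, with made-up parcel values and tax rates rather than Detroit’s actual figures, the sketch below compares a uniform property tax with a revenue-neutral split that taxes land more heavily and buildings more lightly: the vacant lot pays more, the improved parcel pays less, and the total collected stays the same.

```python
def tax(land, building, land_rate, building_rate):
    """Annual tax bill for a parcel under a given pair of rates."""
    return land * land_rate + building * building_rate

# Hypothetical parcels (illustrative values only)
vacant = dict(land=100_000, building=0)          # speculatively held vacant lot
improved = dict(land=100_000, building=400_000)  # same land, with a building on it

# Hypothetical rate schedules: uniform property tax vs. a split/land-value scheme
uniform = dict(land_rate=0.02, building_rate=0.02)
split = dict(land_rate=0.05, building_rate=0.005)

for name, parcel in [("vacant", vacant), ("improved", improved)]:
    print(name,
          tax(**parcel, **uniform),   # taxes land and improvements equally
          tax(**parcel, **split))     # shifts the burden toward land
# vacant:   2,000 under the uniform tax vs. 5,000 under the split
# improved: 10,000 under the uniform tax vs. 7,000 under the split
# Total revenue is 12,000 either way, but the split no longer penalizes building.
```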

In the last few years, this has become a larger topic in urban policy circles. The mayor of Detroit has endorsed the idea. The New York Times has written about the work of Allen and his colleagues. The land-value tax is now a serious policy option.

It is unusual for a graduate student to have their work become part of a prominent policy debate. But then, Allen is an unusual student. At MIT, he has not just conducted influential research in his field, but thrown himself into campus-based work with substantial impact as well. Allen has served on task forces assessing student stipend policy, expanding campus housing, and generating ideas for dining program reform.

For all these efforts, in May, Allen received the Karl Taylor Compton Prize, MIT’s highest student honor. At the ceremony, MIT Chancellor Melissa Nobles observed that Allen’s work helped Institute stakeholders “fully understand complex issues, ensuring his recommendations are not only well-informed but also practical and impactful.”

Looking to revive growth

Allen is a Minnesota native who received his BA from Yale University. In 2015, he enrolled in graduate school at MIT, receiving his master’s in city planning from DUSP in 2017. At the time, Allen worked on the Malaysia Sustainable Cities Project, headed by Professor Lawrence Susskind. At one point Allen spent a couple of months in a small Malaysian village studying the effects of coastal development on local fishing and farming.

Malaysia may be different than Michigan, but the issues that Allen encountered in Asia were similar to the ones he wanted to keep studying back in the U.S.: finding ways to finance growth.

“The core interests I have are around real estate, the physical environment, and these fiscal policy questions of how this all gets funded and what the responsibilities are of the state and private markets,” Allen says. “And that brought me to Detroit.”

Specifically, that landed him at the Detroit Economic Growth Corporation, a city-chartered development agency that works to facilitate new investment. There, Allen started grappling with the city’s revenue problems. Once heralded as the richest city in America, Detroit has seen a lot of property go vacant, and has hiked property taxes on existing structures to compensate. Those rates then discouraged further investment and building.

To be sure, the challenges Detroit has faced stem from far more than tax policy and relate to many macroscale socioeconomic factors, including suburban flight, the shift of manufacturing to states with nonunion employees, and much more. But changing tax policy can be one lever to pull in response.

“It’s difficult to figure out how to revive growth in a place that’s been cannibalized by its losses,” Allen says.

Tasked with underwriting real estate projects, Allen started cataloguing the problems arising from Detroit’s property tax reliance, and began looking at past economics work on optimal tax policy in search of alternatives.

“There’s a real nose-to-the-ground empiricism you start with, asking why we have a system nobody would choose,” Allen says. “There were two parts to that, for me. One was initially looking at the difficulty of making individual projects work, from affordable housing to big industrial plants, along with, secondly, this wave of tax foreclosures in the city.”

Engineering, but for policy

After two years in Detroit, Allen returned to MIT, this time as a doctoral student in DUSP and with a research program oriented around the issues he had worked on. In pursuing that, Allen has worked closely with John E. Anderson, an economist at the University of Nebraska at Lincoln. With a nationwide team of economists convened by the Lincoln Institute of Land Policy, they worked to address the city’s questions on property tax reform.

One paper used current data to show that a land-value tax should lower tax-connected foreclosures in the city. Two other papers study the use of the tax in certain parts of Pennsylvania, one of the few states where it has been deployed. There, the researchers concluded, the land-value tax both leads to greater business development and raises property values.

“What we found overall, looking at past tax reduction in Detroit and other cities, is that in reducing the rate at which people in deep tax distress go through foreclosure, it has a fairly large effect,” Allen says. “It has some effect on allowing business to reinvest in properties. We are seeing a lot more attraction of investment. And it’s got the virtue of being a rules-based system.”

Those empirical results, he notes, helped confirm the sense that a policy change could help growth in Detroit.

“That really validated the hunch we were following,” Allen says.

The widespread attention the policy proposal has garnered could not really have been predicted. The tax has not yet been implemented in Detroit, although it has been a prominent part of civic debates there. Allen has been asked to consult on tax policy by officials in numerous large cities, and is hopeful the concept will gain still more traction.

Meanwhile, at MIT, Allen has one more year to go in his doctoral program. On top of his academic research, he has been an active participant in Institute matters, helping reshape graduate-school policies on multiple fronts.

For instance, Allen was part of the Graduate Housing Working Group, whose efforts helped spur MIT to build Graduate Junction, a new housing complex for 675 graduate students on Vassar Street in Cambridge, Massachusetts. The name also refers to the Grand Junction rail line that runs nearby; the complex formally opened in 2024.

“Innovative places struggle to build housing fast enough,” Allen said at the time Graduate Junction opened, also noting that “new housing for students reduces price pressure on the rest of the Cambridge community.”

Commenting on it now, he adds, “Maybe to most people graduate housing policy doesn’t sound that fun, but to me these are very absorbing questions.”

And ultimately, Allen says, the intellectual problems in either domain can be similar, whether he is working on city policy issues or campus enhancements.

“The reason I think planning fits so well here at MIT is, a lot of what I do is like policy engineering,” Allen says. “It’s really important to understand system constraints, and think seriously about finding solutions that can be built to purpose. I think that’s why I’ve felt at home here at MIT, working on these outside public policy topics, and projects for the Institute. You need to take seriously what people say about the constraints in their lives.”


Professor John Joannopoulos, photonics pioneer and Institute for Soldier Nanotechnologies director, dies at 78

Over 50 years at MIT, the condensed-matter physicist led the development of photonic crystals, translating discoveries into wide-ranging applications in energy, medicine, and defense.


John “JJ” Joannopoulos, the Francis Wright Davis Professor of Physics at MIT and director of the MIT Institute for Soldier Nanotechnologies (ISN), passed away on Aug. 17. He was 78. 

Joannopoulos was a prolific researcher in the field of theoretical condensed-matter physics, and an early pioneer in the study and application of photonic crystals. Many of his discoveries in the ways materials can be made to manipulate light have led to transformative and life-saving technologies, from chip-based optical waveguides, to wireless energy transfer, to health-monitoring textiles, to precision light-based surgical tools.

His remarkable career of over 50 years was spent entirely at MIT, where he was known as much for his generous and unwavering mentorship as for his contributions to science. He made a special point to keep up rich and meaningful collaborations with many of his former students and postdocs, dozens of whom have gone on to faculty positions at major universities, and to leadership roles in the public and private sectors. In his five decades at MIT, he made lasting connections across campus, in service of both science and friendship.

“A scientific giant, inspiring leader, and a masterful communicator, John carried a generous and loving heart,” says Yoel Fink PhD ’00, an MIT professor of materials science and engineering who was Joannopoulos’ former student and a longtime collaborator. “He chose to see the good in people, keeping his mind and heart always open. Asking little for himself, he gave everything in care of others. John lived a life of deep impact and meaning — savoring the details of truth-seeking, achieving rare discoveries and mentoring generations of students to achieve excellence. With warmth, humor, and a never-ending optimism, JJ left an indelible impact on science and on all who had the privilege to know him. Above all, he was a loving husband, father, grandfather, friend, and mentor.”

“In the end, the most remarkable thing about him was his unmatched humanity, his ability to make you feel that you were the most important thing in the world that deserved his attention, no matter who you were,” says Raul Radovitzky, ISN associate director and the Jerome C. Hunsaker Professor in MIT’s Department of Aeronautics and Astronautics. “The legacy he leaves is not only in equations and innovations, but in the lives he touched, the minds he inspired, and the warmth he spread in every room he entered.”

“JJ was a very special colleague: a brilliant theorist who was also adept at identifying practical applications; a caring and inspiring mentor of younger scientists; a gifted teacher who knew every student in his class by name,” says Deepto Chakrabarty ’88, the William A. M. Burden Professor in Astrophysics and head of MIT’s Department of Physics. “He will be deeply missed.”

Layers of light

John Joannopoulos was born in 1947 in New York City to parents who had emigrated from Greece. His father was a playwright, and his mother worked as a psychologist. From an early age, Joannopoulos knew he wanted to be a physicist — mainly because the subject was the most challenging one for him in school. In a recent interview with MIT News, he enthusiastically shared: “You probably wouldn’t believe this, but it’s true: I wanted to be a physics professor since I was in high school! I loved the idea of being able to work with students, and being able to have ideas.”

He attended the University of California at Berkeley, where he received a bachelor’s degree in 1968, and a PhD in 1974, both in physics. That same year, he joined the faculty at MIT, where he would spend his 50-plus-year career — though at the time, the chances of gaining a long-term foothold at the Institute seemed slim, as Joannopoulos told MIT News.

“The chair of the physics department was the famous nuclear physicist, Herman Feshbach, who told me the probability that I would get tenure was something like 30 percent,” Joannopoulos recalled. “But when you’re young and just starting off, it was certainly better than zero, and I thought, that was fine — there was hope down the line.”

Starting out at MIT, Joannopoulos knew exactly what he wanted to do. He quickly set up a group to study theoretical condensed-matter physics, and specifically, ab initio physics, meaning physics “from first principles.” In this initial work, he sought to build theoretical models to predict the electronic behavior and structure of materials, based solely on the atomic numbers of the atoms in a material. Such foundational models could be applied to understand and design a huge range of materials and structures.

Then, in the early 1990s, Joannopoulos took a research turn, spurred by a paper by physicist Eli Yablonovitch at the University of California at Los Angeles, who did some preliminary work on materials that can affect the behavior of photons, or particles of light. Joannopoulos recognized a connection with his first-principles work with electrons. Along with his students, he applied that approach to predict the fundamental behavior of photons in different classes of materials. His group was one of the first to pioneer the field of photonic crystals, and the study of how materials can be manipulated at the nanoscale to control the behavior of light traveling through them. In 1995, Joannopoulos co-authored the first textbook on the subject.

And in 1998, he took on a more-than-century-old assumption about how light should reflect, and turned it on its head. That assumption predicted that light, shining onto a structure made of multiple refractive layers, could reflect back, but only for a limited range of angles. But in fact, Joannopoulos and his group showed that the opposite is true: If the structure’s layers followed particular design criteria, the structure as a whole could reflect light coming from any and all angles. This structure was called the “perfect mirror.”

That insight led to another: If the structure were rolled into a tube, the resulting hollow fiber could act as a perfect optical conduit. Any light traveling through the fiber would reflect and bounce around within the fiber, with none scattering away. Joannopoulos and his group applied this insight to develop the first precision “optical scalpel” — a fiber that can be safely handled, while delivering a highly focused laser, precise and powerful enough to perform delicate surgical procedures. Joannopoulos helped to commercialize the new tool with a startup, Omniguide, that has since provided the optical scalpel to assist in hundreds of thousands of medical procedures around the world.

Legendary mentor

In 2006, Joannopoulos took the helm as director of MIT’s Institute for Soldier Nanotechnologies — a post he steadfastly held for almost 20 years. During his dedicated tenure, he worked with ISN members across campus and in departments outside his own, getting to know and champion their work. He facilitated countless collaborations between MIT faculty, industry partners, and the U.S. Department of Defense. Among the many projects he raised support for were innovations in lightweight armor, hyperspectral imaging, energy-efficient batteries, and smart and responsive fabrics.

Joannopoulos helped to translate many basic science insights into practical applications. He was a cofounder of six spinoff companies based on his fundamental research, and helped to create dozens more companies, which have advanced technologies ranging from laser surgery tools to wireless electric power transmission, transparent display technologies, and optical computing. He was awarded 126 patents for his many discoveries, and authored more than 750 peer-reviewed papers.

In recognition of his wide impact and contributions, Joannopoulos was elected to the National Academy of Sciences and the American Academy of Arts and Sciences. He was also a fellow of both the American Physical Society and the American Association for the Advancement of Science. Over his 50-plus-year career, he was the recipient of many scientific awards and honors including the Max Born Award, and the Aneesur Rahman Prize in Computational Physics. Joannopoulos was also a gifted classroom teacher, and was recognized at MIT with the Buechner Teaching Prize in Physics and the Graduate Teaching Award in Science.

This year, Joannopoulos received MIT’s Killian Achievement Award, which recognizes the extraordinary lifetime contributions of a member of the MIT faculty. In addition to his many accomplishments in science, the award citation emphasized his lasting impact on the generations of students he mentored:

“Professor Joannopoulos has served as a legendary mentor to generations of students, inspiring them to achieve excellence in science while at the same time facilitating the practical benefit to society through entrepreneurship,” the citation reads. “Through all of these individuals he has impacted — not to mention their academic descendants — Professor Joannopoulos has had a vast influence on the development of science in recent decades.”

“JJ was an amazing scientist: He published hundreds of papers that have been cited close to 200,000 times. He was also a serial entrepreneur: Companies he cofounded raised hundreds of millions of dollars and employed hundreds of people,” says MIT Professor Marin Soljacic ’96, a former postdoc under Joannopoulos who cofounded a startup, Witricity, with him. “He was an amazing mentor, a close friend, and like a scientific father to me. He always had time for me, any time of the day, and as much as I needed.”

Indeed, Joannopoulos strove to meaningfully support his many students. In the classroom, he “was legendary,” says friend and colleague Patrick Lee ’66, PhD ’70, who recalls that Joannopoulos would make a point of memorizing the names and faces of more than 100 students on the first day of class, then calling each of them by their first name from the second day onward, for the rest of the term.

What’s more, Joannopoulos encouraged graduate students and postdocs to follow their ideas, even when they ran counter to his own.

“John did not produce clones,” says Lee, who is an MIT professor emeritus of physics. “He showed them the way to do science by example, by caring and by sharing his optimism. I have never seen someone so deeply loved by his students.”

Even students who stepped off the photonics path have kept in close contact with their mentor, as former student and MIT professor Josh Winn ’94, SM ’94, PhD ’01 has done.

“Even though our work together ended more than 25 years ago, and I now work in a different field, I still feel like part of the Joannopoulos academic family,” says Winn, who is now a professor of astrophysics at Princeton University. “It's a loyal group with branches all over the world. We even had our own series of conferences, organized by former students to celebrate John's 50th, 60th, and 70th birthdays. Most professors would consider themselves fortunate to have even one such ‘festschrift’ honoring their legacy.”

MIT professor of mathematics Steven Johnson ’95, PhD ’01, a former student and frequent collaborator, has experienced personally, and seen many times over, Joannopoulos’ generous and open-door mentorship.

“In every collaboration, I’ve unfailingly observed him to cast a wide net to value multiple voices, to ensure that everyone feels included and valued, and to encourage collaborations across groups and fields and institutions,” Johnson says. “Kind, generous, and brimming with infectious enthusiasm and positivity, he set an example so many of his lucky students have striven to follow.”

Joannopoulos started at MIT around the same time as Marc Kastner, who had a nearby office on the second floor of Building 13.

“I would often hear loud arguments punctuated by boisterous laughter, coming from John’s office, where he and his students were debating physics,” recalls Kastner, who is the Donner Professor of Physics Emeritus at MIT. “I am sure this style of interaction is what made him such a great mentor.”

“He exuded such enthusiasm for science and good will to others that he was just good fun to be around,” adds friend and colleague Erich Ippen, MIT professor emeritus of physics.

“John was indeed a great man — a very special one. Everyone who ever worked with him understands this,” says Stanford University physics professor Robert Laughlin PhD ’79, one of Joannopoulos’ first graduate students, who went on to win the 1998 Nobel Prize in Physics. “He sprinkled a kind of transformative magic dust on people that induced them to dedicate every waking moment to the task of making new and wonderful things. You can find traces of it in lots of places around the world that matter, all of them the better for it. There’s quite a pile of it in my office.”

Joannopoulos is survived by his wife, Kyri Dunussi-Joannopoulos; their three daughters, Maria, Lena, and Alkisti; and their families. Details for funeral and memorial services are forthcoming.


A new model predicts how molecules will dissolve in different solvents

Solubility predictions could make it easier to design and synthesize new drugs, while minimizing the use of more hazardous solvents.


Using machine learning, MIT chemical engineers have created a computational model that can predict how well any given molecule will dissolve in an organic solvent — a key step in the synthesis of nearly any pharmaceutical. This type of prediction could make it much easier to develop new ways to produce drugs and other useful molecules.

The new model, which predicts how much of a solute will dissolve in a particular solvent, should help chemists to choose the right solvent for any given reaction in their synthesis, the researchers say. Common organic solvents include ethanol and acetone, and there are hundreds of others that can also be used in chemical reactions.

“Predicting solubility really is a rate-limiting step in synthetic planning and manufacturing of chemicals, especially drugs, so there’s been a longstanding interest in being able to make better predictions of solubility,” says Lucas Attia, an MIT graduate student and one of the lead authors of the new study.

The researchers have made their model freely available, and many companies and labs have already started using it. The model could be particularly useful for identifying solvents that are less hazardous than some of the most commonly used industrial solvents, the researchers say.

“There are some solvents which are known to dissolve most things. They’re really useful, but they’re damaging to the environment, and they’re damaging to people, so many companies require that you have to minimize the amount of those solvents that you use,” says Jackson Burns, an MIT graduate student who is also a lead author of the paper. “Our model is extremely useful in being able to identify the next-best solvent, which is hopefully much less damaging to the environment.”

William Green, the Hoyt Hottel Professor of Chemical Engineering and director of the MIT Energy Initiative, is the senior author of the study, which appears today in Nature Communications. Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering, is also an author of the paper.

Solving solubility

The new model grew out of a project that Attia and Burns worked on together in an MIT course on applying machine learning to chemical engineering problems. Traditionally, chemists have predicted solubility with a tool known as the Abraham Solvation Model, which can be used to estimate a molecule’s overall solubility by adding up the contributions of chemical structures within the molecule. While these predictions are useful, their accuracy is limited.

In the past few years, researchers have begun using machine learning to try to make more accurate solubility predictions. Before Burns and Attia began working on their new model, the state of the art for predicting solubility was a model developed in Green’s lab in 2022.

That model, known as SolProp, works by predicting a set of related properties and combining them, using thermodynamics, to ultimately predict the solubility. However, the model has difficulty predicting solubility for solutes that it hasn’t seen before.

“For drug and chemical discovery pipelines where you’re developing a new molecule, you want to be able to predict ahead of time what its solubility looks like,” Attia says.

Part of the reason that existing solubility models haven’t worked well is that there wasn’t a comprehensive dataset to train them on. However, in 2023 a new dataset called BigSolDB was released, which compiled data from nearly 800 published papers, including information on solubility for about 800 molecules dissolved in more than 100 organic solvents that are commonly used in synthetic chemistry.

Attia and Burns decided to try training two different types of models on this data. Both of these models represent the chemical structures of molecules using numerical representations known as embeddings, which incorporate information such as the number of atoms in a molecule and which atoms are bound to which other atoms. Models can then use these representations to predict a variety of chemical properties.

One of the models used in this study, known as FastProp and developed by Burns and others in Green’s lab, incorporates “static embeddings.” This means that the model already knows the embedding for each molecule before it starts doing any kind of analysis.

The other model, ChemProp, learns an embedding for each molecule during the training, at the same time that it learns to associate the features of the embedding with a trait such as solubility. This model, developed across multiple MIT labs, has already been used for tasks such as antibiotic discovery, lipid nanoparticle design, and predicting chemical reaction rates.

The researchers trained both types of models on over 40,000 data points from BigSolDB, including information on the effects of temperature, which plays a significant role in solubility. Then, they tested the models on about 1,000 solutes that had been withheld from the training data. They found that the models’ predictions were two to three times more accurate than those of SolProp, the previous best model, and the new models were especially accurate at predicting variations in solubility due to temperature.

“Being able to accurately reproduce those small variations in solubility due to temperature, even when the overarching experimental noise is very large, was a really positive sign that the network had correctly learned an underlying solubility prediction function,” Burns says.

Accurate predictions

The researchers had expected that the model based on ChemProp, which is able to learn new representations as it goes along, would be able to make more accurate predictions. However, to their surprise, they found that the two models performed essentially the same. That suggests that the main limitation on their performance is the quality of the data, and that the models are performing as well as theoretically possible based on the data that they’re using, the researchers say.

“ChemProp should always outperform any static embedding when you have sufficient data,” Burns says. “We were blown away to see that the static and learned embeddings were statistically indistinguishable in performance across all the different subsets, which indicates to us that the data limitations that are present in this space dominated the model performance.”

The models could become more accurate, the researchers say, if better training and testing data were available — ideally, data obtained by one person or a group of people all trained to perform the experiments the same way.

“One of the big limitations of using these kinds of compiled datasets is that different labs use different methods and experimental conditions when they perform solubility tests. That contributes to this variability between different datasets,” Attia says.

Because the model based on FastProp makes its predictions faster and has code that is easier for other users to adapt, the researchers decided to make that one, known as FastSolv, available to the public. Multiple pharmaceutical companies have already begun using it.

“There are applications throughout the drug discovery pipeline,” Burns says. “We’re also excited to see, outside of formulation and drug discovery, where people may use this model.”

The research was funded, in part, by the U.S. Department of Energy.


Researchers glimpse the inner workings of protein language models

A new approach can reveal the features AI models use to predict proteins that might make good drug or vaccine targets.


Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.

These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, there’s no way to determine how these models make their predictions or which protein features play the most important role in those decisions.

In a new study, MIT researchers have used a novel technique to open up that “black box” and allow them to determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.

“Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”

Onkar Gujral, an MIT graduate student, is the lead author of the open-access study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student in electrical engineering and computer science, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.

Opening the black box

In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models that accelerated the development of AlphaFold, such as ESM2 and OmegaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.

Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.

In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.

However, in all of these studies, it has been impossible to know how the models were making their predictions.

“We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.

In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.

The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.

Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.

When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.

“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it's hard to interpret the neurons.”

Interpretable models

Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name), to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.

By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”

This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.

“When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.

Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.

“At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.

The research was funded by the National Institutes of Health. 


A shape-changing antenna for more versatile sensing and communication

You can adjust the frequency range of this durable, inexpensive antenna by squeezing or stretching its structure.


MIT researchers have developed a reconfigurable antenna that dynamically adjusts its frequency range by changing its physical shape, making it more versatile for communications and sensing than static antennas.

A user can stretch, bend, or compress the antenna to make reversible changes to its radiation properties, enabling a device to operate in a wider frequency range without the need for complex, moving parts. With an adjustable frequency range, a reconfigurable antenna could adapt to changing environmental conditions and reduce the need for multiple antennas.

The word “antenna” may draw to mind metal rods like the “bunny ears” on top of old television sets, but the MIT team instead worked with metamaterials — engineered materials whose mechanical properties, such as stiffness and strength, depend on the geometric arrangement of the material’s components.

The result is a simplified design for a reconfigurable antenna that could be used for applications like energy transfer in wearable devices, motion tracking and sensing for augmented reality, or wireless communication across a wide range of network protocols.

In addition, the researchers developed an editing tool so users can generate customized metamaterial antennas, which can be fabricated using a laser cutter.

“Usually, when we think of antennas, we think of static antennas — they are fabricated to have specific properties and that is it. However, by using auxetic metamaterials, which can deform into three different geometric states, we can seamlessly change the properties of the antenna by changing its geometry, without fabricating a new structure. In addition, we can use changes in the antenna’s radio frequency properties, due to changes in the metamaterial geometry, as a new method of sensing for interaction design,” says lead author Marwa AlAlawi, a mechanical engineering graduate student at MIT.

Her co-authors include Regina Zheng and Katherine Yan, both MIT undergraduate students; Ticha Sethapakdi, an MIT graduate student in electrical engineering and computer science; Soo Yeon Ahn of the Gwangju Institute of Science and Technology in Korea; and co-senior authors Junyi Zhu, assistant professor at the University of Michigan; and Stefanie Mueller, the TIBCO Career Development Associate Professor in MIT’s departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the Human-Computer Interaction Group at the Computer Science and Artificial Intelligence Lab. The research will be presented at the ACM Symposium on User Interface Software and Technology.

Making sense of antennas

While traditional antennas radiate and receive radio signals, in this work, the researchers looked at how the devices can act as sensors. The team’s goal was to develop a mechanical element that can also be used as an antenna for sensing.

To do this, they leveraged the antenna’s “resonance frequency,” which is the frequency at which the antenna is most efficient.

An antenna’s resonance frequency will shift due to changes in its shape. (Think about extending the left “bunny ear” to reduce TV static.) Researchers can capture these shifts for sensing. For instance, a reconfigurable antenna could be used in this way to detect the expansion of a person’s chest, to monitor their respiration.

To design a versatile reconfigurable antenna, the researchers used metamaterials. These engineered materials, which can be programmed to adopt different shapes, are composed of a periodic arrangement of unit cells that can be rotated, compressed, stretched, or bent.

By deforming the metamaterial structure, one can shift the antenna’s resonance frequency.

“In order to trigger changes in resonance frequency, we either need to change the antenna’s effective length or introduce slits and holes into it. Metamaterials allow us to get those different states from only one structure,” AlAlawi says.

The device, dubbed the meta-antenna, is composed of a dielectric layer of material sandwiched between two conductive layers.

To fabricate a meta-antenna, the researchers cut the dielectric layer out of a rubber sheet with a laser cutter. Then they added a patch on top of the dielectric layer using conductive spray paint, creating a resonating “patch antenna.”

But they found that even the most flexible conductive material couldn’t withstand the amount of deformation the antenna would experience.

“We did a lot of trial and error to determine that, if we coat the structure with flexible acrylic paint, it protects the hinges so they don’t break prematurely,” AlAlawi explains.

A means for makers

With the fabrication problem solved, the researchers built a tool that enables users to design and produce metamaterial antennas for specific applications.

The user can define the size of the antenna patch, choose a thickness for the dielectric layer, and set the length-to-width ratio of the metamaterial unit cells. Then the system automatically simulates the antenna’s resonance frequency range.

“The beauty of metamaterials is that, because it is an interconnected system of linkages, the geometric structure allows us to reduce the complexity of a mechanical system,” AlAlawi says.

Using the design tool, the researchers incorporated meta-antennas into several smart devices, including a curtain that dynamically adjusts household lighting and headphones that seamlessly transition between noise-cancelling and transparent modes.

For the smart headphone, for instance, when the meta-antenna expands and bends, it shifts the resonance frequency by 2.6 percent, which switches the headphone mode. The team’s experiments also showed that meta-antenna structures are durable enough to withstand more than 10,000 compressions.

Because the antenna patch can be patterned onto any surface, it could be used with more complex structures. For instance, the antenna could be incorporated into smart textiles that perform noninvasive biomedical sensing or temperature monitoring.

In the future, the researchers want to design three-dimensional meta-antennas for a wider range of applications. They also want to add more functions to the design tool, improve the durability and flexibility of the metamaterial structure, experiment with different symmetric metamaterial patterns, and streamline some manual fabrication steps.

This research was funded, in part, by the Bahrain Crown Prince International Scholarship and the Gwangju Institute of Science and Technology.


How AI could speed the development of RNA vaccines and other RNA therapies

MIT engineers used a machine-learning model to design nanoparticles that can deliver RNA to cells more efficiently.


Using artificial intelligence, MIT researchers have come up with a new way to design nanoparticles that can more efficiently deliver RNA vaccines and other types of RNA therapies.

After training a machine-learning model to analyze thousands of existing delivery particles, the researchers used it to predict new materials that would work even better. The model also enabled the researchers to identify particles that would work well in different types of cells, and to discover ways to incorporate new types of materials into the particles.

“What we did was apply machine-learning tools to help accelerate the identification of optimal ingredient mixtures in lipid nanoparticles to help target a different cell type or help incorporate different materials, much faster than previously was possible,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

This approach could dramatically speed the process of developing new RNA vaccines, as well as therapies that could be used to treat obesity, diabetes, and other metabolic disorders, the researchers say.

Alvin Chan, a former MIT postdoc who is now an assistant professor at Nanyang Technological University, and Ameya Kirtane, a former MIT postdoc who is now an assistant professor at the University of Minnesota, are the lead authors of the new open-access study, which appears today in Nature Nanotechnology.

Particle predictions

RNA vaccines, such as the vaccines for SARS-CoV-2, are usually packaged in lipid nanoparticles (LNPs) for delivery. These particles protect mRNA from being broken down in the body and help it to enter cells once injected.

Creating particles that handle these jobs more efficiently could help researchers to develop even more effective vaccines. Better delivery vehicles could also make it easier to develop mRNA therapies that encode genes for proteins that could help to treat a variety of diseases.

In 2024, Traverso’s lab launched a multiyear research program, funded by the U.S. Advanced Research Projects Agency for Health (ARPA-H), to develop new ingestible devices that could achieve oral delivery of RNA treatments and vaccines.

“Part of what we’re trying to do is develop ways of producing more protein, for example, for therapeutic applications. Maximizing the efficiency is important to be able to boost how much we can have the cells produce,” Traverso says.

A typical LNP consists of four components — a cholesterol, a helper lipid, an ionizable lipid, and a lipid that is attached to polyethylene glycol (PEG). Different variants of each of these components can be swapped in to create a huge number of possible combinations. Changing up these formulations and testing each one individually is very time-consuming, so Traverso, Chan, and their colleagues decided to turn to artificial intelligence to help speed up the process.

“Most AI models in drug discovery focus on optimizing a single compound at a time, but that approach doesn’t work for lipid nanoparticles, which are made of multiple interacting components,” Chan says. “To tackle this, we developed a new model called COMET, inspired by the same transformer architecture that powers large language models like ChatGPT. Just as those models understand how words combine to form meaning, COMET learns how different chemical components come together in a nanoparticle to influence its properties — like how well it can deliver RNA into cells.”

To generate training data for their machine-learning model, the researchers created a library of about 3,000 different LNP formulations. The team tested each of these 3,000 particles in the lab to see how efficiently they could deliver their payload to cells, then fed all of this data into a machine-learning model.

After the model was trained, the researchers asked it to predict new formulations that would work better than existing LNPs. They tested those predictions by using the new formulations to deliver mRNA encoding a fluorescent protein to mouse skin cells grown in a lab dish. They found that the LNPs predicted by the model did indeed work better than the particles in the training data, and in some cases better than LNP formulations that are used commercially.

Accelerated development

Once the researchers showed that the model could accurately predict particles that would efficiently deliver mRNA, they began asking additional questions. First, they wondered if they could train the model on nanoparticles that incorporate a fifth component: a type of polymer known as branched poly beta amino esters (PBAEs).

Research by Traverso and his colleagues has shown that these polymers can effectively deliver nucleic acids on their own, so they wanted to explore whether adding them to LNPs could improve LNP performance. The MIT team created a set of about 300 LNPs that also include these polymers, which they used to train the model. The resulting model could then predict additional formulations with PBAEs that would work better.

Next, the researchers set out to train the model to make predictions about LNPs that would work best in different types of cells, including a type of cell called Caco-2, which is derived from colorectal cancer cells. Again, the model was able to predict LNPs that would efficiently deliver mRNA to these cells.

Lastly, the researchers used the model to predict which LNPs could best withstand lyophilization — a freeze-drying process often used to extend the shelf-life of medicines.

“This is a tool that allows us to adapt it to a whole different set of questions and help accelerate development. We did a large training set that went into the model, but then you can do much more focused experiments and get outputs that are helpful on very different kinds of questions,” Traverso says.

He and his colleagues are now working on incorporating some of these particles into potential treatments for diabetes and obesity, which are two of the primary targets of the ARPA-H funded project. Therapeutics that could be delivered using this approach include GLP-1 mimics with similar effects to Ozempic.

This research was funded by the GO Nano Marble Center at the Koch Institute, the Karl van Tassel Career Development Professorship, the MIT Department of Mechanical Engineering, Brigham and Women’s Hospital, and ARPA-H.


Study sheds light on graphite’s lifespan in nuclear reactors

Scientists have discovered a link between the material’s pore size distribution and its ability to withstand radiation.


Graphite is a key structural component in some of the world’s oldest nuclear reactors and many of the next-generation designs being built today. But it also densifies and swells in response to radiation — and the mechanism behind those changes has proven difficult to study.

Now, MIT researchers and collaborators have uncovered a link between properties of graphite and how the material behaves in response to radiation. The findings could lead to more accurate, less destructive ways of predicting the lifespan of graphite materials used in reactors around the world.

“We did some basic science to understand what leads to swelling and, eventually, failure in graphite structures,” says MIT Research Scientist Boris Khaykovich, senior author of the new study. “More research will be needed to put this into practice, but the paper proposes an attractive idea for industry: that you might not need to break hundreds of irradiated samples to understand their failure point.”

Specifically, the study shows a connection between the size of the pores within graphite and the way the material swells and shrinks in volume, leading to degradation.

“The lifetime of nuclear graphite is limited by irradiation-induced swelling,” says co-author and MIT Research Scientist Lance Snead. “Porosity is a controlling factor in this swelling, and while graphite has been extensively studied for nuclear applications since the Manhattan Project, we still do not have a clear understanding of the porosity in both mechanical properties and swelling. This work addresses that.”

The open-access paper appears this week in Interdisciplinary Materials. It is co-authored by Khaykovich, Snead, MIT Research Scientist Sean Fayfar, former MIT research fellow Durgesh Rai, Stony Brook University Assistant Professor David Sprouster, Oak Ridge National Laboratory Staff Scientist Anne Campbell, and Argonne National Laboratory Physicist Jan Ilavsky.

A long-studied, complex material

Ever since 1942, when physicists and engineers built the world’s first nuclear reactor on a converted squash court at the University of Chicago, graphite has played a central role in the generation of nuclear energy. That first reactor, dubbed the Chicago Pile, was constructed from about 40,000 graphite blocks, many of which contained nuggets of uranium.

Today graphite is a vital component of many operating nuclear reactors and is expected to play a central role in next-generation reactor designs like molten-salt and high-temperature gas reactors. That’s because graphite is a good neutron moderator, slowing down the neutrons released by nuclear fission so they are more likely to create fissions themselves and sustain a chain reaction.

“The simplicity of graphite makes it valuable,” Khaykovich explains. “It’s made of carbon, and it’s relatively well-known how to make it cleanly. Graphite is a very mature technology. It’s simple, stable, and we know it works.”

But graphite also has its complexities.

“We call graphite a composite even though it’s made up of only carbon atoms,” Khaykovich says. “It includes ‘filler particles’ that are more crystalline, then there is a matrix called a ‘binder’ that is less crystalline, then there are pores that span in length from nanometers to many microns.”

Each graphite grade has its own composite structure, but they all contain fractals, or shapes that look the same at different scales.

Those complexities have made it hard to predict how graphite will respond to radiation in microscopic detail, although it’s been known for decades that when graphite is irradiated, it first densifies, reducing its volume by up to 10 percent, before swelling and cracking. The volume fluctuation is caused by changes to graphite’s porosity and lattice stress.

“Graphite deteriorates under radiation, as any material does,” Khaykovich says. “So, on the one hand we have a material that’s extremely well-known, and on the other hand, we have a material that is immensely complicated, with a behavior that’s impossible to predict through computer simulations.”

For the study, the researchers received irradiated graphite samples from Oak Ridge National Laboratory. Co-authors Campbell and Snead were involved in irradiating the samples some 20 years ago. The samples are a grade of graphite known as G347A.

The research team used an analysis technique known as X-ray scattering, which uses the scattered intensity of an X-ray beam to analyze the properties of a material. Specifically, they looked at the distribution of sizes and surface areas of the sample’s pores, or what are known as the material’s fractal dimensions.

“When you look at the scattering intensity, you see a large range of porosity,” Fayfar says. “Graphite has porosity over such large scales, and you have this fractal self-similarity: The pores in very small sizes look similar to pores spanning microns, so we used fractal models to relate different morphologies across length scales.”

Fractal models had been used on graphite samples before, but not on irradiated samples to see how the material’s pore structures changed. The researchers found that when graphite is first exposed to radiation, its pores get filled as the material degrades.

“But what was quite surprising to us is the [size distribution of the pores] turned back around,” Fayfar says. “We had this recovery process that matched our overall volume plots, which was quite odd. It seems like after graphite is irradiated for so long, it starts recovering. It’s sort of an annealing process where you create some new pores, then the pores smooth out and get slightly bigger. That was a big surprise.”

The researchers found that the size distribution of the pores closely follows the volume change caused by radiation damage.

“Finding a strong correlation between the [size distribution of pores] and the graphite’s volume changes is a new finding, and it helps connect to the failure of the material under irradiation,” Khaykovich says. “It’s important for people to know how graphite parts will fail when they are under stress and how failure probability changes under irradiation.”

From research to reactors

The researchers plan to study other graphite grades and explore further how pore sizes in irradiated graphite correlate with the probability of failure. They speculate that a statistical technique known as the Weibull Distribution could be used to predict graphite’s time until failure. The Weibull Distribution is already used to describe the probability of failure in ceramics and other porous materials like metal alloys.

Khaykovich also speculated that the findings could contribute to our understanding of why materials densify and swell under irradiation.

“There’s no quantitative model of densification that takes into account what’s happening at these tiny scales in graphite,” Khaykovich says. “Graphite irradiation densification reminds me of sand or sugar, where when you crush big pieces into smaller grains, they densify. For nuclear graphite, the crushing force is the energy that neutrons bring in, causing large pores to get filled with smaller, crushed pieces. But more energy and agitation create still more pores, and so graphite swells again. It’s not a perfect analogy, but I believe analogies bring progress for understanding these materials.”

The researchers describe the paper as an important step toward informing graphite production and use in nuclear reactors of the future.

“Graphite has been studied for a very long time, and we’ve developed a lot of strong intuitions about how it will respond in different environments, but when you’re building a nuclear reactor, details matter,” Khaykovich says. “People want numbers. They need to know how much thermal conductivity will change, how much cracking and volume change will happen. If components are changing volume, at some point you need to take that into account.”

This work was supported, in part, by the U.S. Department of Energy.


Using generative AI, researchers design compounds that can kill drug-resistant bacteria

The team used two different AI approaches to design novel antibiotics, including one that showed promise against MRSA.


With help from artificial intelligence, MIT researchers have designed novel antibiotics that can combat two hard-to-treat infections: drug-resistant Neisseria gonorrhoeae and methicillin-resistant Staphylococcus aureus (MRSA).

Using generative AI algorithms, the research team designed more than 36 million possible compounds and computationally screened them for antimicrobial properties. The top candidates they discovered are structurally distinct from any existing antibiotics, and they appear to work by novel mechanisms that disrupt bacterial cell membranes.

This approach allowed the researchers to generate and evaluate theoretical compounds that have never been seen before — a strategy that they now hope to apply to identify and design compounds with activity against other species of bacteria.

“We’re excited about the new possibilities that this project opens up for antibiotics development. Our work shows the power of AI from a drug design standpoint, and enables us to exploit much larger chemical spaces that were previously inaccessible,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering, and a member of the Broad Institute.

Collins is the senior author of the study, which appears today in Cell. The paper’s lead authors are MIT postdoc Aarti Krishnan, former postdoc Melis Anahtar ’08, and Jacqueline Valeri PhD ’23.

Exploring chemical space

Over the past 45 years, a few dozen new antibiotics have been approved by the FDA, but most of these are variants of existing antibiotics. At the same time, bacterial resistance to many of these drugs has been growing. Globally, it is estimated that drug-resistant bacterial infections cause nearly 5 million deaths per year.

In hopes of finding new antibiotics to fight this growing problem, Collins and others at MIT’s Antibiotics-AI Project have harnessed the power of AI to screen huge libraries of existing chemical compounds. This work has yielded several promising drug candidates, including halicin and abaucin.

To build on that progress, Collins and his colleagues decided to expand their search into molecules that can’t be found in any chemical libraries. By using AI to generate hypothetically possible molecules that don’t exist or haven’t been discovered, they realized that it should be possible to explore a much greater diversity of potential drug compounds.

In their new study, the researchers employed two different approaches: First, they directed generative AI algorithms to design molecules based on a specific chemical fragment that showed antimicrobial activity, and second, they let the algorithms freely generate molecules, without having to include a specific fragment.

For the fragment-based approach, the researchers sought to identify molecules that could kill N. gonorrhoeae, a Gram-negative bacterium that causes gonorrhea. They began by assembling a library of about 45 million known chemical fragments, consisting of all possible combinations of 11 atoms of carbon, nitrogen, oxygen, fluorine, chlorine, and sulfur, along with fragments from Enamine’s REadily AccessibLe (REAL) space.

Then, they screened the library using machine-learning models that Collins’ lab had previously trained to predict antibacterial activity against N. gonorrhoeae. This resulted in nearly 4 million fragments. They narrowed down that pool by removing any fragments that were predicted to be cytotoxic to human cells, displayed chemical liabilities, or were similar to existing antibiotics. This left them with about 1 million candidates.

“We wanted to get rid of anything that would look like an existing antibiotic, to help address the antimicrobial resistance crisis in a fundamentally different way. By venturing into underexplored areas of chemical space, our goal was to uncover novel mechanisms of action,” Krishnan says.

Through several rounds of additional experiments and computational analysis, the researchers identified a fragment they called F1 that appeared to have promising activity against N. gonorrhoeae. They used this fragment as the basis for generating additional compounds, using two different generative AI algorithms.

One of those algorithms, known as chemically reasonable mutations (CReM), works by starting with a particular molecule containing F1 and then generating new molecules by adding, replacing, or deleting atoms and chemical groups. The second algorithm, F-VAE (fragment-based variational autoencoder), takes a chemical fragment and builds it into a complete molecule. It does so by learning patterns of how fragments are commonly modified, based on its pretraining on more than 1 million molecules from the ChEMBL database.

Those two algorithms generated about 7 million candidates containing F1, which the researchers then computationally screened for activity against N. gonorrhoeae. This screen yielded about 1,000 compounds, and the researchers selected 80 of those to see if they could be produced by chemical synthesis vendors. Only two of these could be synthesized, and one of them, named NG1, was very effective at killing N. gonorrhoeae in a lab dish and in a mouse model of drug-resistant gonorrhea infection.

Additional experiments revealed that NG1 interacts with a protein called LptA, a novel drug target involved in the synthesis of the bacterial outer membrane. It appears that the drug works by interfering with membrane synthesis, which is fatal to cells.

Unconstrained design

In a second round of studies, the researchers explored the potential of using generative AI to freely design molecules, using the Gram-positive bacterium S. aureus as their target.

Again, the researchers used CReM and VAE to generate molecules, but this time with no constraints other than the general rules of how atoms can join to form chemically plausible molecules. Together, the models generated more than 29 million compounds. The researchers then applied the same filters that they did to the N. gonorrhoeae candidates, but focusing on S. aureus, eventually narrowing the pool down to about 90 compounds.

They were able to synthesize and test 22 of these molecules, and six of them showed strong antibacterial activity against multi-drug-resistant S. aureus grown in a lab dish. They also found that the top candidate, named DN1, was able to clear a methicillin-resistant S. aureus (MRSA) skin infection in a mouse model. These molecules also appear to interfere with bacterial cell membranes, but with broader effects not limited to interaction with one specific protein.

Phare Bio, a nonprofit that is also part of the Antibiotics-AI Project, is now working on further modifying NG1 and DN1 to make them suitable for additional testing.

“In a collaboration with Phare Bio, we are exploring analogs, as well as working on advancing the best candidates preclinically, through medicinal chemistry work,” Collins says. “We are also excited about applying the platforms that Aarti and the team have developed toward other bacterial pathogens of interest, notably Mycobacterium tuberculosis and Pseudomonas aeruginosa.”

The research was funded, in part, by the U.S. Defense Threat Reduction Agency, the National Institutes of Health, the Audacious Project, Flu Lab, the Sea Grape Foundation, Rosamund Zander and Hansjorg Wyss for the Wyss Foundation, and an anonymous donor.


A new way to test how well AI systems classify text

As large language models increasingly dominate our everyday lives, new systems for checking their reliability are more important than ever.


Is this movie review a rave or a pan? Is this news story about business or technology? Is this online chatbot conversation veering off into giving financial advice? Is this online medical information site giving out misinformation?

These kinds of automated conversations, whether they involve seeking a movie or restaurant review or getting information about your bank account or health records, are becoming increasingly prevalent. More than ever, such evaluations are being made by highly sophisticated algorithms, known as text classifiers, rather than by human beings. But how can we tell how accurate these classifications really are?

Now, a team at MIT’s Laboratory for Information and Decision Systems (LIDS) has come up with an innovative approach to not only measure how well these classifiers are doing their job, but then go one step further and show how to make them more accurate.

The new evaluation and remediation software was led and developed by Lei Xu, alongside research conducted by Sarah Alnegheimish and Kalyan Veeramachaneni, a principal research scientist at LIDS and senior author of the work, along with two others. The software package is being made freely available for download by anyone who wants to use it.

A standard method for testing these classification systems is to create what are known as synthetic examples — sentences that closely resemble ones that have already been classified. For example, researchers might take a sentence that has already been tagged by a classifier program as being a rave review, and see if changing a word or a few words while retaining the same meaning could fool the classifier into deeming it a pan. Or a sentence that was determined to be misinformation might get misclassified as accurate. Sentences that can fool the classifiers in this way are known as adversarial examples.

People have tried various ways to find the vulnerabilities in these classifiers, Veeramachaneni says, but existing methods struggle with the task and miss many adversarial examples that they should catch.

Increasingly, companies are trying to use such evaluation tools in real time, monitoring the output of chatbots used for various purposes to try to make sure they are not putting out improper responses. For example, a bank might use a chatbot to respond to routine customer queries such as checking account balances or applying for a credit card, but it wants to ensure that its responses could never be interpreted as financial advice, which could expose the company to liability. “Before showing the chatbot’s response to the end user, they want to use the text classifier to detect whether it’s giving financial advice or not,” Veeramachaneni says. But then it’s important to test that classifier to see how reliable its evaluations are.

“These chatbots, or summarization engines or whatnot are being set up across the board,” he says, to deal with external customers and within an organization as well, for example providing information about HR issues. It’s important to put these text classifiers into the loop to detect things that they are not supposed to say, and filter those out before the output gets transmitted to the user.

That’s where the use of adversarial examples comes in — those sentences that have already been classified but then produce a different response when they are slightly modified while retaining the same meaning. How can people confirm that the meaning is the same? By using another large language model (LLM) that interprets and compares meanings. So, if the LLM says the two sentences mean the same thing, but the classifier labels them differently, “that is a sentence that is adversarial — it can fool the classifier,” Veeramachaneni says. And when the researchers examined these adversarial sentences, “we found that most of the time, this was just a one-word change,” although the people using LLMs to generate these alternate sentences often didn’t realize that.

Further investigation, using LLMs to analyze many thousands of examples, showed that certain specific words had an outsized influence in changing the classifications, and therefore the testing of a classifier’s accuracy could focus on the small subset of words that seem to make the most difference. They found that one-tenth of 1 percent of all the 30,000 words in the system’s vocabulary (roughly 30 words) could account for almost half of these classification reversals in some specific applications.

Lei Xu PhD ’23, a recent graduate from LIDS who performed much of the analysis as part of his thesis work, “used a lot of interesting estimation techniques to figure out what are the most powerful words that can change the overall classification, that can fool the classifier,” Veeramachaneni says. The goal is to make it possible to do much more narrowly targeted searches, rather than combing through all possible word substitutions, thus making the computational task of generating adversarial examples much more manageable. “He’s using large language models, interestingly enough, as a way to understand the power of a single word.”

Then, also using LLMs, he searches for other words that are closely related to these powerful words, and so on, allowing for an overall ranking of words according to their influence on the outcomes. Once these adversarial sentences have been found, they can be used in turn to retrain the classifier to take them into account, increasing the robustness of the classifier against those mistakes.
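
To make the idea concrete, here is a minimal sketch of a single-word adversarial search and a word-influence tally in the spirit of the approach described above. The classify, semantically_equivalent, and candidate_substitutes callables are hypothetical stand-ins (a text classifier, an LLM-based meaning check, and a substitute-word generator) supplied by the caller; this is a sketch of the general technique, not the team's SP-Attack implementation.

```python
# Sketch of a single-word adversarial search and word-influence tally
# (illustrative only). The three callables are hypothetical stand-ins for a
# text classifier, an LLM-based meaning check, and a substitute-word generator.
from collections import Counter
from typing import Callable, Iterable, List, Tuple

def find_adversarial_swaps(sentence: str,
                           classify: Callable[[str], str],
                           semantically_equivalent: Callable[[str, str], bool],
                           candidate_substitutes: Callable[[str], Iterable[str]]
                           ) -> List[Tuple[int, str]]:
    """Return (position, replacement) pairs that flip the label while keeping the meaning."""
    words = sentence.split()
    original_label = classify(sentence)
    swaps = []
    for i, word in enumerate(words):
        for substitute in candidate_substitutes(word):
            candidate = " ".join(words[:i] + [substitute] + words[i + 1:])
            if classify(candidate) != original_label and semantically_equivalent(sentence, candidate):
                swaps.append((i, substitute))
    return swaps

def rank_influential_words(sentences: List[str],
                           classify, semantically_equivalent, candidate_substitutes
                           ) -> List[Tuple[str, int]]:
    """Tally which original words are most often involved in label flips."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        for i, _ in find_adversarial_swaps(sentence, classify,
                                           semantically_equivalent, candidate_substitutes):
            counts[words[i].lower()] += 1
    return counts.most_common()
```

In practice, as described above, the search is narrowed to the small set of high-influence words rather than trying every possible substitution, which is what keeps the computation manageable.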

Making classifiers more accurate may not sound like a big deal if it’s just a matter of classifying news articles into categories, or deciding whether reviews of anything from movies to restaurants are positive or negative. But increasingly, classifiers are being used in settings where the outcomes really do matter, whether preventing the inadvertent release of sensitive medical, financial, or security information, or helping to guide important research, such as into properties of chemical compounds or the folding of proteins for biomedical applications, or in identifying and blocking hate speech or known misinformation.

As a result of this research, the team introduced a new metric, which they call p, which provides a measure of how robust a given classifier is against single-word attacks. And because of the importance of such misclassifications, the research team has made its products available as open access for anyone to use. The package consists of two components: SP-Attack, which generates adversarial sentences to test classifiers in any particular application, and SP-Defense, which aims to improve the robustness of the classifier by generating and using adversarial sentences to retrain the model.

In some tests, where competing methods of testing classifier outputs allowed a 66 percent success rate by adversarial attacks, this team’s system cut that attack success rate almost in half, to 33.7 percent. In other applications, the improvement was as little as a 2 percent difference, but even that can be quite important, Veeramachaneni says, since these systems are being used for so many billions of interactions that even a small percentage can affect millions of transactions.
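
One way to read those figures is as complementary rates over a test set: a robustness score counts the sentences that no meaning-preserving single-word swap manages to flip, and the attack success rate is its complement. The sketch below assumes a caller-supplied breaks_under_attack predicate (for example, built from the search routine sketched earlier); it is a rough illustration, not necessarily how the paper defines its p metric or how SP-Attack and SP-Defense report results.

```python
# Illustrative robustness and attack-success tallies over a test set.
# `breaks_under_attack` is a hypothetical predicate: True if some meaning-
# preserving single-word swap flips the classifier's label for that sentence.
from typing import Callable, List

def single_word_robustness(sentences: List[str],
                           breaks_under_attack: Callable[[str], bool]) -> float:
    """Fraction of sentences that survive every attempted single-word attack."""
    survived = sum(1 for s in sentences if not breaks_under_attack(s))
    return survived / len(sentences)

def attack_success_rate(sentences: List[str],
                        breaks_under_attack: Callable[[str], bool]) -> float:
    """Complement of the robustness score: fraction of sentences that get flipped."""
    return 1.0 - single_word_robustness(sentences, breaks_under_attack)
```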

The team’s results were published on July 7 in the journal Expert Systems in a paper by Xu, Veeramachaneni, and Alnegheimish of LIDS, along with Laure Berti-Equille at IRD in Marseille, France, and Alfredo Cuesta-Infante at the Universidad Rey Juan Carlos, in Spain. 


MIT gears up to transform manufacturing

The Initiative for New Manufacturing is convening experts across the Institute to drive a transformation of production across the U.S. and the world.


“Manufacturing is the engine of society, and it is the backbone of robust, resilient economies,” says John Hart, head of MIT’s Department of Mechanical Engineering (MechE) and faculty co-director of the MIT Initiative for New Manufacturing (INM). “With manufacturing a lively topic in today’s news, there’s a renewed appreciation and understanding of the importance of manufacturing to innovation, to economic and national security, and to daily lives.”

Launched this May, INM will “help create a transformation of manufacturing through new technology, through development of talent, and through an understanding of how to scale manufacturing in a way that enables higher productivity and resilience, drives adoption of new technologies, and creates good jobs,” Hart says.

INM is one of MIT’s strategic initiatives and builds on the successful three-year-old Manufacturing@MIT program. “It’s a recognition by MIT that manufacturing is an Institute-wide theme and an Institute-wide priority, and that manufacturing connects faculty and students across campus,” says Hart. Alongside Hart, INM’s faculty co-directors are Institute Professor Suzanne Berger and Chris Love, professor of chemical engineering.

The initiative is pursuing four main themes: reimagining manufacturing technologies and systems, elevating the productivity and human experience of manufacturing, scaling up new manufacturing, and transforming the manufacturing base.

Breaking manufacturing barriers for corporations

Amgen, Autodesk, Flex, GE Vernova, PTC, Sanofi, and Siemens are founding members of INM’s industry consortium. These industry partners will work closely with MIT faculty, researchers, and students across many aspects of manufacturing-related research, both in broad-scale initiatives and in particular areas of shared interests. Membership requires a minimum three-year commitment of $500,000 a year to manufacturing-related activities at MIT, including the INM membership fee of $275,000 per year, which supports several core activities that engage the industry members.

One major thrust for INM industry collaboration is the deployment and adoption of AI and automation in manufacturing. This effort will include seed research projects at MIT, collaborative case studies, and shared strategy development.

INM also offers companies participation in the MIT-wide New Manufacturing Research effort, which is studying the trajectories of specific manufacturing industries and examining cross-cutting themes such as technology and financing.

Additionally, INM will concentrate on education for all professions in manufacturing, with alliances bringing together corporations, community colleges, government agencies, and other partners. “We'll scale our curriculum to broader audiences, from aspiring manufacturing workers and aspiring production line supervisors all the way up to engineers and executives,” says Hart.

In workforce training, INM will collaborate with companies broadly to help understand the challenges and frame its overall workforce agenda, and with individual firms on specific challenges, such as acquiring suitably prepared employees for a new factory.

Importantly, industry partners will also engage directly with students. Founding member Flex, for instance, hosted MIT researchers and students at the Flex Institute of Technology in Sorocaba, Brazil, developing new solutions for electronics manufacturing.

“History shows that you need to innovate in manufacturing alongside the innovation in products,” Hart comments. “At MIT, as more students take classes in manufacturing, they’ll think more about key manufacturing issues as they decide what research problems they want to solve, or what choices they make as they prototype their devices. The same is true for industry — companies that operate at the frontier of manufacturing, whether through internal capabilities or their supply chains, are positioned to be on the frontier of product innovation and overall growth.”

“We’ll have an opportunity to bring manufacturing upstream to the early stage of research, designing new processes and new devices with scalability in mind,” he says.

Additionally, MIT expects to open new manufacturing-related labs and to further broaden cooperation with industry at existing shared facilities, such as MIT.nano. Hart says that facilities will also invite tighter collaborations with corporations — not just providing advanced equipment, but working jointly on, say, new technologies for weaving textiles, or speeding up battery manufacturing.

Homing in on the United States

INM is a global project that brings a particular focus on the United States, which remains the world’s second-largest manufacturing economy, but has suffered a significant decline in manufacturing employment and innovation.

One key to reversing this trend and reinvigorating the U.S. manufacturing base is advocacy for manufacturing’s critical role in society and the career opportunities it offers.

“No one really disputes the importance of manufacturing,” Hart says. “But we need to elevate interest in manufacturing as a rewarding career, from the production workers to manufacturing engineers and leaders, through advocacy, education programs, and buy-in from industry, government, and academia.”

MIT is in a unique position to convene industry, academic, and government stakeholders in manufacturing to work together on this vital issue, he points out.

Moreover, in times of radical and rapid changes in manufacturing, “we need to focus on deploying new technologies into factories and supply chains,” Hart says. “Technology is not all of the solution, but for the U.S. to expand our manufacturing base, we need to do it with technology as a key enabler, embracing companies of all sizes, including small and medium enterprises.”

“As AI becomes more capable, and automation becomes more flexible and more available, these are key building blocks upon which you can address manufacturing challenges,” he says. “AI and automation offer new accelerated ways to develop, deploy, and monitor production processes, which present a huge opportunity and, in some cases, a necessity.”

“While manufacturing is always a combination of old technology, new technology, established practice, and new ways of thinking, digital technology gives manufacturers an opportunity to leapfrog competitors,” Hart says. “That’s very, very powerful for the U.S. and any company, or country, that aims to create differentiated capabilities.”

Fortunately, in recent years, investors have increasingly bought into new manufacturing in the United States. “They see the opportunity to re-industrialize, to build the factories and production systems of the future,” Hart says.

“That said, building new manufacturing is capital-intensive, and takes time,” he adds. “So that’s another area where it’s important to convene stakeholders and to think about how startups and growth-stage companies build their capital portfolios, how large industry can support an ecosystem of small businesses and young companies, and how to develop talent to support those growing companies.”

All these concerns and opportunities in the manufacturing ecosystem play to MIT’s strengths. “MIT’s DNA of cross-disciplinary collaboration and working with industry can let us create a lot of impact,” Hart emphasizes. “We can understand the practical challenges. We can also explore breakthrough ideas in research and cultivate successful outcomes, all the way to new companies and partnerships. Sometimes those are seen as disparate approaches, but we like to bring them together.”


Would you like that coffee with iron?

New microparticles containing iron or iodine could be used to fortify food and beverages, to help fight malnutrition.


Around the world, about 2 billion people suffer from iron deficiency, which can lead to anemia, impaired brain development in children, and increased infant mortality.

To combat that problem, MIT researchers have come up with a new way to fortify foods and beverages with iron, using small crystalline particles. These particles, known as metal-organic frameworks, could be sprinkled on food, added to staple foods such as bread, or incorporated into drinks like coffee and tea.

“We’re creating a solution that can be seamlessly added to staple foods across different regions,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research. “What’s considered a staple in Senegal isn’t the same as in India or the U.S., so our goal was to develop something that doesn’t react with the food itself. That way, we don’t have to reformulate for every context — it can be incorporated into a wide range of foods and beverages without compromise.”

The particles designed in this study can also carry iodine, another critical nutrient, and could be adapted to deliver other important minerals such as zinc, calcium, or magnesium.

“We are very excited about this new approach and what we believe is a novel application of metal-organic frameworks to potentially advance nutrition, particularly in the developing world,” says Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute.

Jaklenec and Langer are the senior authors of the study, which appears today in the journal Matter. MIT postdoc Xin Yang and Linzixuan (Rhoda) Zhang PhD ’24 are the lead authors of the paper.

Iron stabilization

Food fortification can be a successful way to combat nutrient deficiencies, but this approach is often challenging because many nutrients are fragile and break down during storage or cooking. When iron is added to foods, it can react with other molecules in the food, giving the food a metallic taste.

In previous work, Jaklenec’s lab has shown that encapsulating nutrients in polymers can protect them from breaking down or reacting with other molecules. In a small clinical trial, the researchers found that women who ate bread fortified with encapsulated iron were able to absorb the iron from the food.

However, one drawback to this approach is that the polymer adds a lot of bulk to the material, limiting the amount of iron or other nutrients that end up in the food.

“Encapsulating iron in polymers significantly improves its stability and reactivity, making it easier to add to food,” Jaklenec says. “But to be effective, it requires a substantial amount of polymer. That limits how much iron you can deliver in a typical serving, making it difficult to meet daily nutritional targets through fortified foods alone.”

To overcome that challenge, Yang came up with a new idea: Instead of encapsulating iron in a polymer, they could use iron itself as a building block for a crystalline particle known as a metal-organic framework, or MOF (pronounced “moff”).

MOFs consist of metal atoms joined by organic molecules called ligands to create a rigid, cage-like structure. Depending on the combination of metals and ligands chosen, they can be used for a wide variety of applications.

“We thought maybe we could synthesize a metal-organic framework with food-grade ligands and food-grade micronutrients,” Yang says. “Metal-organic frameworks have very high porosity, so they can load a lot of cargo. That’s why we thought we could leverage this platform to make a new metal-organic framework that could be used in the food industry.”

In this case, the researchers designed a MOF consisting of iron bound to a ligand called fumaric acid, which is often used as a food additive to enhance flavor or help preserve food.

This structure prevents iron from reacting with polyphenols — compounds commonly found in foods such as whole grains and nuts, as well as coffee and tea. When iron does react with those compounds, it forms a metal polyphenol complex that cannot be absorbed by the body.

The MOFs’ structure also allows them to remain stable until they reach an acidic environment, such as the stomach, where they break down and release their iron payload.

Double-fortified salts

The researchers also decided to include iodine in their MOF particle, which they call NuMOF. Iodized salt has been very successful at preventing iodine deficiency, and many efforts are now underway to create “double-fortified salts” that would also contain iron.

Delivering these nutrients together has proven difficult because iron and iodine can react with each other, making each one less likely to be absorbed by the body. In this study, the MIT team showed that once they formed their iron-containing MOF particles, they could load them with iodine, in a way that the iron and iodine do not react with each other.

In tests of the particles’ stability, the researchers found that the NuMOFs could withstand long-term storage, high heat and humidity, and boiling water.

Throughout these tests, the particles maintained their structure. When the researchers then fed the particles to mice, they found that both iron and iodine became available in the bloodstream within several hours of the NuMOF consumption.

The researchers are now working on launching a company that is developing coffee and other beverages fortified with iron and iodine. They also hope to continue working toward a double-fortified salt that could be consumed on its own or incorporated into staple food products.

The research was partially supported by J-WAFS Fellowships for Water and Food Solutions. 

Other authors of the paper include Fangzheng Chen, Wenhao Gao, Zhiling Zheng, Tian Wang, Erika Yan Wang, Behnaz Eshaghi, and Sydney MacDonald.

This research was conducted, in part, using MIT.nano’s facilities.


Jessika Trancik named director of the Sociotechnical Systems Research Center

Trancik will lead the multidisciplinary research center focused on the high-impact, complex sociotechnical systems that shape our world.


Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society, has been named the new director of the Sociotechnical Systems Research Center (SSRC), effective July 1. The SSRC convenes and supports researchers focused on problems and solutions at the intersection of technology and its societal impacts.

Trancik conducts research on technology innovation and energy systems. At the Trancik Lab, she and her team develop methods drawing on engineering knowledge, data science, and policy analysis. Their work examines the pace and drivers of technological change, helping identify where innovation is occurring most rapidly, how emerging technologies stack up against existing systems, and which performance thresholds matter most for real-world impact. Her models have been used to inform government innovation policy and have been applied across a wide range of industries.

“Professor Trancik’s deep expertise in the societal implications of technology, and her commitment to developing impactful solutions across industries, make her an excellent fit to lead SSRC,” says Maria C. Yang, interim dean of engineering and William E. Leonhard (1940) Professor of Mechanical Engineering.

Much of Trancik’s research focuses on the domain of energy systems and on establishing methods for energy technology evaluation, including their costs, performance, and environmental impacts. She covers a wide range of energy services — including electricity, transportation, heating, and industrial processes. Her research has applications in solar and wind energy, energy storage, low-carbon fuels, electric vehicles, and nuclear fission. Trancik is also known for her research on extreme events in renewable energy availability.

A prolific researcher, Trancik has helped measure progress and inform the development of solar photovoltaics, batteries, electric vehicle charging infrastructure, and other low-carbon technologies — and anticipate future trends. One of her widely cited contributions includes quantifying learning rates and identifying where targeted investments can most effectively accelerate innovation. These tools have been used by U.S. federal agencies, international organizations, and the private sector to shape energy R&D portfolios, climate policy, and infrastructure planning.

Trancik is committed to engaging and informing the public on energy consumption. She and her team developed the app carboncounter.com, which helps users choose cars with low costs and low environmental impacts.

As an educator, Trancik teaches courses for students across MIT’s five schools and the MIT Schwarzman College of Computing.

“The question guiding my teaching and research is how do we solve big societal challenges with technology, and how can we be more deliberate in developing and supporting technologies to get us there?” Trancik said in an article about course IDS.521/IDS.065 (Energy Systems for Climate Change Mitigation).

Trancik received her undergraduate degree in materials science and engineering from Cornell University. As a Rhodes Scholar, she completed her PhD in materials science at the University of Oxford. She subsequently worked for the United Nations in Geneva, Switzerland, and the Earth Institute at Columbia University. After serving as an Omidyar Research Fellow at the Santa Fe Institute, she joined MIT in 2010 as a faculty member.

Trancik succeeds Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science and director of IDSS, who previously served as director of SSRC.


Harvey Kent Bowen, ceramics scholar and MIT Leaders for Global Operations co-founder, dies at 83

Bowen’s innovative work helped transform ceramics and manufacturing education at MIT and beyond.


Harvey Kent Bowen PhD ’71, a longtime MIT professor celebrated for his pioneering work in manufacturing education, innovative ceramics research, and generous mentorship, died July 17 in Belmont, Massachusetts. He was 83.

At MIT, he was the founding engineering faculty leader of Leaders for Manufacturing (LFM) — now Leaders for Global Operations (LGO) — a program that continues to shape engineering and management education nearly four decades later.

Bowen spent 22 years on the MIT faculty, returning to his alma mater after earning a PhD in materials science and ceramics processing at the Institute. He held the Ford Professorship of Engineering, with appointments in the departments of Materials Science and Engineering (DMSE) and Electrical Engineering and Computer Science, before transitioning to Harvard Business School, where he bridged the worlds of engineering, manufacturing, and management. 

Bowen’s prodigious research output spans 190 articles, 45 Harvard case studies, and two books. In addition to his scholarly contributions, those who knew him best say his visionary understanding of the connection between management and engineering, coupled with his intellect and warm leadership style, set him apart at a time of rapid growth at MIT.  

A pioneering physical ceramics researcher

Bowen was born on Nov. 21, 1941, in Salt Lake City, Utah. As an MIT graduate student in the 1970s, he helped to redefine the study of ceramics — transforming it into the scientific field now known as physical ceramics, which focuses on the structure, properties, and behavior of ceramic materials.

“Prior to that, it was the art of ceramic composition,” says Michael Cima, the David H. Koch Professor of Engineering in DMSE. “What Kent and a small group of more-senior DMSE faculty were doing was trying to turn that art into science.”

Bowen advanced the field by applying scientific rigor to how ceramic materials were processed. He applied concepts from the developing field of colloid science — the study of particles evenly distributed in another material — to the manufacturing of ceramics, forever changing how such objects were made.

“That sparked a whole new generation of people taking a different look at how ceramic objects are manufactured,” Cima recalls. “It was an opportunity to make a big change. Despite the fact that physical ceramics — composition, crystal structure and so forth — had turned into a science, there still was this big gap: how do you make these things? Kent thought this was the opportunity for science to have an impact on the field of ceramics.”

One of his greatest scholarly accomplishments was “Introduction to Ceramics, 2nd edition,” with David Kingery and Donald Uhlmann, a foundational textbook he helped write early in his career. The book, published in 1976, helped maintain DMSE’s leading position in ceramics research and education.

“Every PhD student in ceramics studied that book, all 1,000 pages, from beginning to end, to prepare for the PhD qualifying exams,” says Yet-Ming Chiang, Kyocera Professor of Ceramics in DMSE. “It covered almost every aspect of the science and engineering of ceramics known at that time. That was why it was both an outstanding teaching text as well as a reference textbook for data.”

In ceramics processing, Bowen was also known for his control of particle size, shape, and size distribution, and how those factors influence sintering, the process of forming solid materials from powders.

Over time, Bowen’s interest in ceramics processing broadened into a larger focus on manufacturing. As such, Bowen was also deeply connected to industry and traveled frequently, especially to Japan, a leader in ceramics manufacturing.

“One time, he came back from Japan and told all of us graduate students that the students there worked so hard they were sleeping in the labs at night — as a way to prod us,” Chiang recalls.

While Bowen’s work in manufacturing began in ceramics, he also became a consultant to major companies, including automakers, and he worked with Lee Iacocca, the Ford executive behind the Mustang. Those experiences also helped spark LFM, which evolved into LGO. Bowen co-founded LFM with former MIT dean of engineering Tom Magnanti.

“I’m still in awe of Kent’s audacity and vision in starting the LFM program. The scale and scope of the program were, even for MIT standards, highly ambitious. Thirty-seven successful years later, we all owe a great sense of gratitude to Kent,” says LGO Executive Director Thomas Roemer, a senior lecturer at the MIT Sloan School of Management.

Bowen as mentor, teacher

Bowen’s scientific leadership was matched by his personal influence. Colleagues recall him as a patient, thoughtful mentor who valued creativity and experimentation.

“He had a lot of patience, and I think students benefited from that patience. He let them go in the directions they wanted to — and then helped them out of the hole when their experiments didn’t work. He was good at that,” Cima says.

His discipline was another hallmark of his character. Chiang, who was an undergraduate and graduate student while Bowen was a faculty member, fondly recalls Bowen’s tendency to get up early, a source of amusement for his 3.01 (Kinetics of Materials) class.

“One time, some students played a joke on him. They got to class before him, set up an electric griddle, and cooked breakfast in the classroom before he arrived,” says Chiang. “When we all arrived, it smelled like breakfast.”

Bowen took a personal interest in Chiang’s career trajectory, arranging for him to spend a summer in Bowen’s lab through the Undergraduate Research Opportunities Program. Funded by the Department of Energy, the project explored magnetohydrodynamics: shooting a high-temperature plasma made from coal fly ash into a magnetic field between ceramic electrodes to generate electricity.

“My job was just to sift the fly ash, but it opened my eyes to energy research,” Chiang recalls.

Later, when Chiang was an assistant professor at MIT, Bowen served on his career development committee. He was both encouraging and pragmatic.

“He pushed me to get things done — to submit and publish papers at a time when I really needed the push,” Chiang says. “After all the happy talk, he would say, ‘OK, by what date are you going to submit these papers?’ And that was what I needed.”

After leaving MIT, Bowen joined Harvard Business School (HBS), where he wrote numerous detailed case studies, including one on A123 Systems, a battery company Chiang co-founded in 2001. 

“He was very supportive of our work to commercialize battery technology, and starting new companies in energy and materials,” Chiang says.

Bowen was also a devoted mentor for LFM/LGO students, even while at HBS. Greg Dibb MBA ’04, SM ’04 recalls that Bowen agreed to oversee his work on the management philosophy known as the Toyota Production System (TPS) — a manufacturing system developed by the Japanese automaker — responding kindly to the young student’s outreach and inspiring him with methodical, real-world advice.

“By some miracle, he agreed and made the time to guide me on my thesis work. In the process, he became a mentor and a lifelong friend,” Dibb says. “He inspired me in his way of working and collaborating. He was a master thinker and listener, and he taught me by example through his Socratic style, asking me simple but difficult questions that required rigor of thought.

“I remember he asked me about my plan to learn about manufacturing and TPS. I came to him enthusiastically with a list of books I planned to read. He responded, ‘Do you think a world expert would read those books?’”   

In trying to answer that question, Dibb realized the best way to learn was to go to the factory floor.

“He had a passion for the continuous improvement of manufacturing and operations, and he taught me how to do it by being an observer and a listener just like him — all the time being inspired by his optimism, faith, and charity toward others.”

Faith was a cornerstone of Bowen’s life outside of academia. He served a mission for The Church of Jesus Christ of Latter-day Saints in the Central Germany Mission and held several leadership roles, including bishop of the Cambridge, Massachusetts Ward, stake president of the Cambridge Stake, mission president of the Tacoma, Washington Mission, and temple president of the Boston, Massachusetts Temple. 

An enthusiastic role model who inspired excellence

During early-morning conversations, Cima learned about Bowen’s growing interest in manufacturing, which would spur what is now LGO. Bowen eventually became recognized as an expert in the Toyota Production System, the company’s operational culture and practice, which was a major influence on the LGO program’s curriculum design.

“I got to hear it from him — I was exposed to his early insights,” Cima says. “The fact that he would take the time every morning to talk to me — it was a huge influence.”

Bowen was a natural leader and set an example for others, Cima says.

“What is a leader? A leader is somebody who has the kind of infectious enthusiasm to convince others to work with them. Kent was really good at that,” Cima says. “What’s the way you learn leadership? Well, you’d look at how leaders behave. And really good leaders behave like Kent Bowen.”

MIT Sloan School of Management professor of the practice Zeynep Ton praises Bowen’s people skills and work ethic: “When you combine his belief in people with his ability to think big, something magical happens through the people Kent mentored. He always pushed us to do more,” Ton recalls. “Whenever I shared with Kent my research making an impact on a company, or my teaching making an impact on a student, his response was never just ‘good job.’ His next question was: ‘How can you make a bigger impact? Do you have the resources at MIT to do it? Who else can help you?’” 

A legacy of encouragement and drive

With this drive to do more, Bowen embodied MIT’s ethos, colleagues say.

“Kent Bowen embodies the MIT 'mens et manus' ['mind and hand'] motto professionally and personally as an inveterate experimenter in the lab, in the classroom, as an advisor, and in larger society,” says MIT Sloan senior lecturer Steve Spear. “Kent’s consistency was in creating opportunities to help people become their fullest selves, not only finding expression for their humanity greater than they could have achieved on their own, but greater than they might have even imagined on their own. An extraordinary number of people are directly in his debt because of this personal ethos — and even more have benefited from the ripple effect.”

Gregory Dibb, now a leader in the autonomous vehicle industry, is just one of them.

“Upon hearing of his passing, I immediately felt that I now have even more responsibility to step up and try to fill his shoes in sacrificing and helping others as he did — even if that means helping an unprepared and overwhelmed LGO grad student like me,” Dibb says.

Bowen is survived by his wife, Kathy Jones; his children, Natalie, Jennifer Patraiko, Melissa, Kirsten, and Jonathan; his sister, Kathlene Bowen; and six grandchildren. 


Planets without water could still produce certain liquids, a new study finds

Lab experiments show “ionic liquids” can form through common planetary processes and might be capable of supporting life even on waterless planets.


Water is essential for life on Earth, so the liquid has long been assumed to be a requirement for life on other worlds as well. For decades, scientists’ definition of habitability on other planets has rested on this assumption.

But what makes some planets habitable might have very little to do with water. In fact, an entirely different type of liquid could conceivably support life in worlds where water can barely exist. That’s a possibility that MIT scientists raise in a study appearing this week in the Proceedings of the National Academy of Sciences.

From lab experiments, the researchers found that a type of fluid known as an ionic liquid can readily form from chemical ingredients that are also expected to be found on the surface of some rocky planets and moons. Ionic liquids are salts that exist in liquid form below about 100 degrees Celsius. The team’s experiments showed that a mixture of sulfuric acid and certain nitrogen-containing organic compounds produced such a liquid. On rocky planets, sulfuric acid may be a byproduct of volcanic activity, while nitrogen-containing compounds have been detected on several asteroids and planets in our solar system, suggesting the compounds may be present in other planetary systems.

The scientists propose that, even on planets that are too warm or whose atmospheres are too low-pressure to support liquid water, there could still be pockets of ionic liquid. And where there is liquid, there may be potential for life, though likely not anything that resembles Earth’s water-based beings.

Ionic liquids have extremely low vapor pressure and do not evaporate; they can form and persist at higher temperatures and lower pressures than what liquid water can tolerate. The researchers note that ionic liquid can be a hospitable environment for some biomolecules, such as certain proteins that can remain stable in the fluid.

“We consider water to be required for life because that is what’s needed for Earth life. But if we look at a more general definition, we see that what we need is a liquid in which metabolism for life can take place,” says Rachana Agrawal, who led the study as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Now if we include ionic liquid as a possibility, this can dramatically increase the habitability zone for all rocky worlds.”

The study’s MIT co-authors are Sara Seager, the Class of 1941 Professor of Planetary Sciences in the Department of Earth, Atmospheric and Planetary Sciences and a professor in the departments of Physics and of Aeronautics and Astronautics, along with Iaroslav Iakubivskyi, Weston Buchanan, Ana Glidden, and Jingcheng Huang. Co-authors also include Maxwell Seager of Worcester Polytechnic Institute, William Bains of Cardiff University, and Janusz Petkowski of Wroclaw University of Science and Technology, in Poland.

A liquid leap

The team’s work with ionic liquid grew out of an effort to search for signs of life on Venus, where clouds of sulfuric acid envelop the planet in a noxious haze. Despite its toxicity, Venus’ clouds may contain signs of life — a notion that scientists plan to test with upcoming missions to the planet’s atmosphere.

Agrawal and Seager, who is leading the Morning Star Missions to Venus, were investigating ways to collect and evaporate sulfuric acid. If a mission collects samples from Venus’ clouds, sulfuric acid would have to be evaporated away in order to reveal any residual organic compounds that could then be analyzed for signs of life.

The researchers were using a custom low-pressure system, designed to evaporate away excess sulfuric acid, to test evaporation of a solution of the acid and an organic compound, glycine. They found that in every case, while most of the liquid sulfuric acid evaporated, a stubborn layer of liquid always remained. They soon realized that the sulfuric acid was chemically reacting with the glycine, transferring hydrogen atoms from the acid to the organic compound. The result was a fluid mixture of salts, or ions, known as an ionic liquid, which persists as a liquid across a wide range of temperatures and pressures.

This accidental finding kickstarted an idea: Could ionic liquid form on planets that are too warm and host atmospheres too thin for water to exist?

“From there, we took the leap of imagination of what this could mean,” Agrawal says. “Sulfuric acid is found on Earth from volcanoes, and organic compounds have been found on asteroids and other planetary bodies. So, this led us to wonder if ionic liquids could potentially form and exist naturally on exoplanets.”

Rocky oases

On Earth, ionic liquids are mainly synthesized for industrial purposes. They do not occur naturally, except in one specific case, in which the liquid is generated from the mixing of venoms produced by two rival species of ants.

The team set out to investigate what conditions ionic liquid could be naturally produced in, and over what range of temperatures and pressures. In the lab, they mixed sulfuric acid with various nitrogen-containing organic compounds. In previous work, Seager’s team had found that the compounds, some of which can be considered ingredients associated with life, are surprisingly stable in sulfuric acid.

“In high school, you learn that an acid wants to donate a proton,” Seager says. “And oddly enough, we knew from our past work with sulfuric acid (the main component of Venus’ clouds) and nitrogen-containing compounds, that a nitrogen wants to receive a hydrogen. It’s like one person’s trash is another person’s treasure.”

The reaction could produce a bit of ionic liquid if the sulfuric acid and nitrogen-containing organics were in a one-to-one ratio — a ratio that was not a focus of the prior work. For their new study, Seager and Agrawal mixed sulfuric acid with over 30 different nitrogen-containing organic compounds, across a range of temperatures and pressures, then observed whether ionic liquid formed when they evaporated away the sulfuric acid in various vials. They also mixed the ingredients onto basalt rocks, which are known to exist on the surface of many rocky planets.

Three chunks of rock

The team found that the reactions produced ionic liquid at temperatures up to 180 degrees Celsius and at extremely low pressures — much lower than that of the Earth’s atmosphere. Their results suggest that ionic liquid could naturally form on other planets where liquid water cannot exist, under the right conditions.

“We were just astonished that the ionic liquid forms under so many different conditions,” Seager says. “If you put the sulfuric acid and the organic on a rock, the excess sulfuric acid seeps into the rock pores, but you’re still left with a drop of ionic liquid on the rock. Whatever we tried, ionic liquid still formed.”

“We’re envisioning a planet warmer than Earth, that doesn’t have water, and at some point in its past or currently, it has to have had sulfuric acid, formed from volcanic outgassing,” Seager says. “This sulfuric acid has to flow over a little pocket of organics. And organic deposits are extremely common in the solar system.”

Then, she says, the resulting pockets of liquid could stay on the planet’s surface, potentially for years or millennia, where they could theoretically serve as small oases for simple forms of ionic-liquid-based life. Going forward, Seager’s team plans to investigate further, to see what biomolecules and ingredients for life might survive, and even thrive, in ionic liquid.

“We just opened up a Pandora’s box of new research,” Seager says. “It’s been a real journey.”

This research was supported, in part, by the Sloan Foundation and the Volkswagen Foundation.


Surprisingly diverse innovations led to dramatically cheaper solar panels

New research can identify opportunities to drive down the cost of renewable energy systems, batteries, and many other technologies.


The cost of solar panels has dropped by more than 99 percent since the 1970s, enabling widespread adoption of photovoltaic systems that convert sunlight into electricity.

A new MIT study drills down on specific innovations that enabled such dramatic cost reductions, revealing that technical advances across a web of diverse research efforts and industries played a pivotal role.

The findings could help renewable energy companies make more effective R&D investment decisions and aid policymakers in identifying areas to prioritize to spur growth in manufacturing and deployment.

The researchers’ modeling approach shows that key innovations often originated outside the solar sector, including advances in semiconductor fabrication, metallurgy, glass manufacturing, oil and gas drilling, construction processes, and even legal domains.

“Our results show just how intricate the process of cost improvement is, and how much scientific and engineering advances, often at a very basic level, are at the heart of these cost reductions. A lot of knowledge was drawn from different domains and industries, and this network of knowledge is what makes these technologies improve,” says study senior author Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society.

Trancik is joined on the paper by co-lead authors Goksin Kavlak, a former IDSS graduate student and postdoc who is now a senior energy associate at the Brattle Group; Magdalena Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at Johns Hopkins University; former MIT postdoc Ajinkya Kamat; as well as Brittany Smith and Robert Margolis of the National Renewable Energy Laboratory. The research appears today in PLOS ONE.

Identifying innovations

This work builds on mathematical models that the researchers previously developed to tease out the effects of engineering technologies on the cost of photovoltaic (PV) modules and systems.

In this study, the researchers aimed to dig even deeper into the scientific advances that drove those cost declines.

They combined their quantitative cost model with a detailed, qualitative analysis of innovations that affected the costs of PV system materials, manufacturing steps, and deployment processes.

“Our quantitative cost model guided the qualitative analysis, allowing us to look closely at innovations in areas that are hard to measure due to a lack of quantitative data,” Kavlak says.

Building on earlier work identifying key cost drivers — such as the number of solar cells per module, wiring efficiency, and silicon wafer area — the researchers conducted a structured scan of the literature for innovations likely to affect these drivers. Next, they grouped these innovations to identify patterns, revealing clusters that reduced costs by improving materials or prefabricating components to streamline manufacturing and installation. Finally, the team tracked industry origins and timing for each innovation, and consulted domain experts to zero in on the most significant innovations.

All told, they identified 81 unique innovations that affected PV system costs since 1970, from improvements in antireflective coated glass to the implementation of fully online permitting interfaces.

“With innovations, you can always go to a deeper level, down to things like raw materials processing techniques, so it was challenging to know when to stop. Having that quantitative model to ground our qualitative analysis really helped,” Trancik says.

They chose to separate PV module costs from so-called balance-of-system (BOS) costs, which cover things like mounting systems, inverters, and wiring.

PV modules, which are wired together to form solar panels, are mass-produced and can be exported, while many BOS components are designed, built, and sold at the local level.

“By examining innovations both at the BOS level and within the modules, we identify the different types of innovations that have emerged in these two parts of PV technology,” Kavlak says.

BOS costs depend more on soft technologies, nonphysical elements such as permitting procedures, which have contributed significantly less to PV’s past cost improvement compared to hardware innovations.

“Often, it comes down to delays. Time is money, and if you have delays on construction sites and unpredictable processes, that affects these balance-of-system costs,” Trancik says.

Innovations such as automated permitting software, which flags code-compliant systems for fast-track approval, show promise. Though not yet quantified in this study, the team’s framework could support future analysis of their economic impact and similar innovations that streamline deployment processes.

Interconnected industries

The researchers found that innovations from the semiconductor, electronics, metallurgy, and petroleum industries played a major role in reducing both PV and BOS costs, but BOS costs were also impacted by innovations in software engineering and electric utilities.

Noninnovation factors, like efficiency gains from bulk purchasing and the accumulation of knowledge in the solar power industry, also reduced some cost variables.

In addition, while most PV panel innovations originated in research organizations or industry, many BOS innovations were developed by city governments, U.S. states, or professional associations.

“I knew there was a lot going on with this technology, but the diversity of all these fields and how closely linked they are, and the fact that we can clearly see that network through this analysis, was interesting,” Trancik says.

“PV was very well-positioned to absorb innovations from other industries — thanks to the right timing, physical compatibility, and supportive policies to adapt innovations for PV applications,” Klemun adds.

The analysis also reveals the role greater computing power could play in reducing BOS costs through advances like automated engineering review systems and remote site assessment software.

“In terms of knowledge spillovers, what we've seen so far in PV may really just be the beginning,” Klemun says, pointing to the expanding role of robotics and AI-driven digital tools in driving future cost reductions and quality improvements.

In addition to their qualitative analysis, the researchers demonstrated how this methodology could be used to estimate the quantitative impact of a particular innovation if one has the numerical data to plug into the cost equation.

For instance, using information about material prices and manufacturing procedures, they estimate that wire sawing, a technique which was introduced in the 1980s, led to an overall PV system cost decrease of $5 per watt by reducing silicon losses and increasing throughput during fabrication.
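
As a rough illustration of that kind of estimate (not the study’s actual cost equation), the sketch below shows how an innovation’s impact can be gauged by changing only the cost-model inputs it touches, here sawing losses and wafering throughput, and differencing the resulting cost per watt. All input values are placeholders for illustration, not data from the paper.

```python
# Toy cost-per-watt decomposition (illustrative only, not the study's model).
# An innovation's impact is estimated by changing the inputs it affects and
# differencing the before/after cost per watt.

def module_cost_per_watt(silicon_price_per_kg: float,
                         silicon_kg_per_watt: float,
                         sawing_loss_fraction: float,
                         wafering_cost_per_hour: float,
                         watts_per_hour: float) -> float:
    """Simplified module cost: silicon material cost, inflated by sawing losses,
    plus wafering processing cost, both expressed per watt of capacity."""
    silicon_cost = silicon_price_per_kg * silicon_kg_per_watt / (1.0 - sawing_loss_fraction)
    wafering_cost = wafering_cost_per_hour / watts_per_hour
    return silicon_cost + wafering_cost

# Placeholder inputs (NOT the paper's data): the new sawing technique is modeled
# as cutting sawing losses and raising throughput relative to the older process.
before = module_cost_per_watt(60.0, 0.02, 0.50, 100.0, 50.0)
after = module_cost_per_watt(60.0, 0.02, 0.30, 100.0, 120.0)
print(f"Estimated cost change from the innovation: {before - after:.2f} $/W")
```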

“Through this retrospective analysis, you learn something valuable for future strategy because you can see what worked and what didn’t work, and the models can also be applied prospectively. It is also useful to know what adjacent sectors may help support improvement in a particular technology,” Trancik says.

Moving forward, the researchers plan to apply this methodology to a wide range of technologies, including other renewable energy systems. They also want to further study soft technology to identify innovations or processes that could accelerate cost reductions.

“Although the process of technological innovation may seem like a black box, we’ve shown that you can study it just like any other phenomena,” Trancik says.

This research is funded, in part, by the U.S. Department of Energy Solar Energy Technologies Office.


Building a lifeline for family caregivers across the US

Ianacare, co-founded by Steven Lee ’97, MEng ’98, equips caregivers with the resources, networks, and tools they need to support loved ones.


There are 63 million people caring for family members with an illness or disability in the U.S. That translates to one in four adults devoting their time to helping loved ones with things like transportation, meals, prescriptions, and medical appointments.

Caregiving exacts a huge toll on the people responsible, and ianacare is seeking to lessen the burden. The company, founded by Steven Lee ’97, MEng ’98 and Jessica Kim, has built a platform that helps caregivers navigate available tools and local resources, build a network of friends and family to assist with everyday tasks, and coordinate meals, rides, and care shifts.

The name ianacare is short for “I am not alone care.” The company’s mission is to equip and empower the millions of people who perform a difficult and underappreciated role in our society.

“Family caregivers are the invisible backbone of the health care system,” Lee says. “Without them, the health care system would literally collapse, but they are still largely unrecognized. Ianacare acts as the front door for family caregivers. These caregivers are often thrust into this role untrained and unguided. But the moment they start, they have to become experts. Ianacare fills that gap.”

The company has partnered with employers and health care providers to serve more than 50,000 caregivers to date. And thanks to partnerships with organizations like Elevance Health, the American Association of Retired Persons (AARP), and Medicare providers, its coordination and support tools are available to family caregivers across the country.

“Ultimately we want to make the biggest impact possible,” Lee says. “From a business standpoint, the 50,000 caregivers we’ve served is a huge number. But from the overall universe of caregivers that could use our help, it’s relatively small. We’re on a mission to help all 63 million caregivers.”

From ad tech to ianacare

As an electrical engineering and computer science student at MIT in the 1990s, Lee conducted research on early speech-recognition technology as part of the Spoken Language Systems group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Following graduation, Lee started a company with Waikit Lau ’97 that optimized video advertising placement within streams. The company has gone through several mergers and acquisitions, but is now part of the public company Magnite, which places the ads on platforms like Netflix, Hulu, and Disney+.

Lee left the company in 2016 and began advising startups through programs including MIT’s Venture Mentoring Service as he looked to work on something he would find more meaningful.

“Over the years, the MIT network has been invaluable for connecting with customers, recruiting top talent, and engaging investors,” Lee says. “So much innovation flows out of MIT, and I’ve loved giving back, especially working alongside [VMS Venture Mentor] Paul Bosco ’95 and the rest of the VMS team. It’s deeply rewarding to share the best practices I’ve learned with the next generation of innovators.”

In 2017, Lee met Kim, who was caregiving for her mother with pancreatic cancer. Hearing about her experience brought him back to his own family’s challenges caring for his grandfather with Parkinson’s disease when Lee was a child.

“We realized the gaps that existed in caregiving support three decades ago still exist,” Lee says. “Nothing has changed.”

Officially launched in 2018, ianacare may seem far removed from speech recognition or ad technologies, but Lee sees the work as an extension of his previous experiences.

“In my mind, AI got its start in speech recognition, and the intelligence we use to surface recommendations and create care plans for family caregivers uses a lot of the same statistical modeling techniques I used in speech recognition and ad placement,” Lee says. “It all goes back to the foundation I got at MIT.”

The founders first launched a free solution that allowed caregivers to connect with friends and family members to coordinate caregiving tasks.

“In our app, you can coordinate with anyone who’s interested in helping,” Lee says. “When you share a struggle with a friend or co-worker, they always say, ‘How can I help?’ But caregivers rarely go back to them and actually ask. In our platform, you can add those people to your informal care team and ask the team for help with something instead of having to text someone directly, which you’re less likely to do.”

Next, the founders built an enterprise solution so businesses could help employee caregivers, adding features like resource directories and ways to find and select various caregiving tools.

“An immense amount of local resources are available, but nobody knows about them,” Lee says. “For instance, every county in the country has an Area Agency on Aging, but these agencies aren’t marketing experts, and caregivers don’t know where to get guidance.”

Last year, ianacare began working with AARP and health care providers participating in the nationwide GUIDE model (for “Guiding an Improved Dementia Experience”) to improve the quality of life for dementia patients and their caregivers. Through the voluntary program, participants can use ianacare’s platform to coordinate care, access educational resources, and receive up to $2,500 in free respite care each year.

Lee says the CMS partnership gives ianacare a pathway to reach millions of people caring for dementia patients across the country.

“This is already a crisis, and it will get worse because we have an aging population and a capacity constraint in our health care system,” Lee says. “The population above 65 is set to double between 2000 and 2040. We aren’t going to have three times the hospitals or three times the doctors or nurse practitioners. So, we can either make clinicians more efficient or move more health care into the home. That’s why we have to empower family caregivers.”

Aging with dignity

Lee recalls one family who used ianacare after their son was born with a severe disease. The child only lived eight months, but for those eight months, the parents had meals delivered to them in the hospital by friends and family.

“It was not something they had to worry about the entire time their son was alive,” Lee says. “It’s been rewarding to help these people in so much need.”

Other ianacare users say the platform has helped them keep their parents out of the hospital and lessen their depression and anxiety around caregiving.

“Nobody wants to die in a hospital, so we’ve worked hard to honor the wishes of loved ones who want to age in the home,” Lee says. “We have a lot of examples of folks who, if our support was not there, their loved one would have had to enter a nursing home or institution. Ianacare is there to ensure the home is safe and that the caregiver can manage the care burden. It’s a win-win for everybody because it’s also less costly for the health care system.”


MIT School of Engineering faculty receive awards in spring 2025

Faculty members were honored in recognition of their scholarship, service, and overall excellence.


Each year, faculty and researchers across the MIT School of Engineering are recognized with prestigious awards for their contributions to research, technology, society, and education. To celebrate these achievements, the school periodically highlights select honors received by members of its departments, labs, and centers. The following individuals were recognized in spring 2025:

Markus Buehler, the Jerry McAfee (1940) Professor in Engineering in the Department of Civil and Environmental Engineering, received the Washington Award. The award honors engineers whose professional attainments have preeminently advanced the welfare of humankind.

Sili Deng, an associate professor in the Department of Mechanical Engineering, received the 2025 Hiroshi Tsuji Early Career Researcher Award. The award recognizes excellence in fundamental or applied combustion science research. Deng was honored for her work on energy conversion and storage, including combustion fundamentals, data-driven modeling of reacting flows, carbon-neutral energetic materials, and flame synthesis of materials for catalysis and energy storage.

Jonathan How, the Richard Cockburn Maclaurin Professor in Aeronautics and Astronautics, received the IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award. The award recognizes the best paper published annually in the IEEE Transactions on Robotics for technical merit, originality, potential impact, clarity, and practical significance.

Richard Linares, the Rockwell International Career Development Professor in the Department of Aeronautics and Astronautics, received the 2024 American Astronautical Society Emerging Astrodynamicist Award. The award honors junior researchers making significant contributions to the field of astrodynamics.

Youssef Marzouk, the Breene M. Kerr (1951) Professor in the Department of Aeronautics and Astronautics, was named a fellow of the Society for Industrial and Applied Mathematics. He was honored for influential contributions to multiple aspects of uncertainty quantification, particularly Bayesian computation and measure transport.

Dava Newman, the director of the MIT Media Lab and the Apollo Program Professor in the Department of Aeronautics and Astronautics, received the Carolyn “Bo” Aldigé Visionary Award. The award was presented in recognition of the MIT Media Lab's women’s health program, WHx, for groundbreaking research in advancing women’s health.

Martin Rinard, a professor in the Department of Electrical Engineering and Computer Science, received the 2025 SIGSOFT Outstanding Research Award. The award recognizes his fundamental contributions in pioneering the new fields of program repair and approximate computing.

Franz-Josef Ulm, the Class of 1922 Professor in the Department of Civil and Environmental Engineering, was named an ASCE Distinguished Member. He was recognized for contributions to the nano- and micromechanics of heterogeneous materials, including cement, concrete, rock, and bone, with applications in sustainable infrastructure, underground energy harvesting, and human health.


Eco-driving measures could significantly reduce vehicle emissions

New research shows automatically controlling vehicle speeds to mitigate traffic at intersections can cut carbon emissions between 11 and 22 percent.


Any motorist who has ever waited through multiple cycles for a traffic light to turn green knows how annoying signalized intersections can be. But sitting at intersections isn’t just a drag on drivers’ patience — unproductive vehicle idling could contribute as much as 15 percent of the carbon dioxide emissions from U.S. land transportation.

A large-scale modeling study led by MIT researchers reveals that eco-driving measures, which can involve dynamically adjusting vehicle speeds to reduce stopping and excessive acceleration, could significantly reduce those CO2 emissions.

Using a powerful artificial intelligence method called deep reinforcement learning, the researchers conducted an in-depth impact assessment of the factors affecting vehicle emissions in three major U.S. cities.

Their analysis indicates that fully adopting eco-driving measures could cut annual city-wide intersection carbon emissions by 11 to 22 percent, without slowing traffic throughput or affecting vehicle and traffic safety.

Even if only 10 percent of vehicles on the road employ eco-driving, it would result in 25 to 50 percent of the total reduction in CO2 emissions, the researchers found.

In addition, dynamically optimizing speed limits at about 20 percent of intersections provides 70 percent of the total emission benefits. This indicates that eco-driving measures could be implemented gradually while still having measurable, positive impacts on mitigating climate change and improving public health.

Image: Simulated traffic at two intersections; with 100 percent eco-driving adoption, fewer cars queue at the intersection.

“Vehicle-based control strategies like eco-driving can move the needle on climate change reduction. We’ve shown here that modern machine-learning tools, like deep reinforcement learning, can accelerate the kinds of analysis that support sociotechnical decision making. This is just the tip of the iceberg,” says senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of the Laboratory for Information and Decision Systems (LIDS).

She is joined on the paper by lead author Vindula Jayawardana, an MIT graduate student; as well as MIT graduate students Ao Qu, Cameron Hickert, and Edgar Sanchez; MIT undergraduate Catherine Tang; Baptiste Freydt, a graduate student at ETH Zurich; and Mark Taylor and Blaine Leonard of the Utah Department of Transportation. The research appears in Transportation Research Part C: Emerging Technologies.

A multi-part modeling study

Traffic control measures typically call to mind fixed infrastructure, like stop signs and traffic signals. But as vehicles become more technologically advanced, they present an opportunity for eco-driving, a catch-all term for vehicle-based traffic control measures like the use of dynamic speeds to reduce energy consumption.

In the near term, eco-driving could involve speed guidance in the form of vehicle dashboards or smartphone apps. In the longer term, eco-driving could involve intelligent speed commands that directly control the acceleration of semi-autonomous and fully autonomous vehicles through vehicle-to-infrastructure communication systems.

“Most prior work has focused on how to implement eco-driving. We shifted the frame to consider the question of should we implement eco-driving. If we were to deploy this technology at scale, would it make a difference?” Wu says.

To answer that question, the researchers embarked on a multifaceted modeling study that would take the better part of four years to complete.

They began by identifying 33 factors that influence vehicle emissions, including temperature, road grade, intersection topology, age of the vehicle, traffic demand, vehicle types, driver behavior, traffic signal timing, road geometry, etc.

“One of the biggest challenges was making sure we were diligent and didn’t leave out any major factors,” Wu says.

Then they used data from OpenStreetMap, U.S. geological surveys, and other sources to create digital replicas of more than 6,000 signalized intersections in three cities — Atlanta, San Francisco, and Los Angeles — and simulated more than a million traffic scenarios.

The researchers used deep reinforcement learning to optimize each scenario for eco-driving to achieve the maximum emissions benefits.

Reinforcement learning optimizes the vehicles’ driving behavior through trial-and-error interactions with a high-fidelity traffic simulator, rewarding vehicle behaviors that are more energy-efficient while penalizing those that are not.

The researchers cast the problem as a decentralized cooperative multi-agent control problem, where the vehicles cooperate to achieve overall energy efficiency, even among non-participating vehicles, and they act in a decentralized manner, avoiding the need for costly communication between vehicles.
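
As a rough illustration of that training setup, the minimal Python sketch below shows the kind of per-step reward an eco-driving agent might receive, scoring smooth, low-fuel driving above idling and hard acceleration. The fuel surrogate and weights are invented for illustration and are not the study’s actual formulation.

```python
# Illustrative sketch only: a simplified per-step reward for an eco-driving
# agent, loosely following the idea of rewarding energy-efficient behavior
# and penalizing idling. The weights and toy fuel model are hypothetical.

def ecodriving_reward(speed_mps: float, accel_mps2: float, stopped: bool,
                      w_fuel: float = 1.0, w_idle: float = 0.5) -> float:
    """Return a (negative) reward: lower fuel use and less idling score higher."""
    # Toy surrogate for instantaneous fuel/CO2 use: idling burns a base amount,
    # and hard accelerations at speed burn more.
    fuel = 0.1 + max(accel_mps2, 0.0) * speed_mps * 0.05
    idle_penalty = w_idle if stopped else 0.0
    return -(w_fuel * fuel + idle_penalty)


if __name__ == "__main__":
    # A smooth approach (moderate speed, gentle deceleration) scores better
    # than sitting stopped at the light.
    print(ecodriving_reward(speed_mps=10.0, accel_mps2=-0.5, stopped=False))
    print(ecodriving_reward(speed_mps=0.0, accel_mps2=0.0, stopped=True))
```

In the study itself, rewards come from a high-fidelity traffic simulator rather than a toy fuel model.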

However, training vehicle behaviors that generalize across diverse intersection traffic scenarios was a major challenge. The researchers observed that some scenarios are more similar to one another than others, such as scenarios with the same number of lanes or the same number of traffic signal phases.

As such, the researchers trained separate reinforcement learning models for different clusters of traffic scenarios, yielding better emission benefits overall.
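
The clustering step can be pictured with a short, hypothetical sketch in which scenarios are grouped by shared structure, such as lane count and number of signal phases, and one policy is kept per group; the scenario fields and the placeholder training step are illustrative only.

```python
# Illustrative sketch only: grouping intersection scenarios by shared structure
# (lane count, number of signal phases) and keeping one policy per cluster.
# The scenario fields and the train_policy placeholder are hypothetical.
from collections import defaultdict

scenarios = [
    {"id": "atlanta_001", "lanes": 2, "phases": 2},
    {"id": "san_francisco_042", "lanes": 2, "phases": 2},
    {"id": "los_angeles_107", "lanes": 4, "phases": 4},
]

clusters = defaultdict(list)
for scenario in scenarios:
    clusters[(scenario["lanes"], scenario["phases"])].append(scenario)


def train_policy(group):
    # Placeholder: in the study, a reinforcement learning model is trained per cluster.
    return f"policy trained on {len(group)} scenarios"


policies = {key: train_policy(group) for key, group in clusters.items()}
print(policies)
```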

But even with the help of AI, analyzing citywide traffic at the network level would be so computationally intensive it could take another decade to unravel, Wu says.

Instead, they broke the problem down and solved each eco-driving scenario at the individual intersection level.

“We carefully constrained the impact of eco-driving control at each intersection on neighboring intersections. In this way, we dramatically simplified the problem, which enabled us to perform this analysis at scale, without introducing unknown network effects,” she says.

Significant emissions benefits

When they analyzed the results, the researchers found that full adoption of eco-driving could result in intersection emissions reductions of between 11 and 22 percent.

These benefits differ depending on the layout of a city’s streets. A denser city like San Francisco has less room to implement eco-driving between intersections, offering a possible explanation for reduced emission savings, while Atlanta could see greater benefits given its higher speed limits.

Even if only 10 percent of vehicles employ eco-driving, a city could still realize 25 to 50 percent of the total emissions benefit because of car-following dynamics: Non-eco-driving vehicles would follow controlled eco-driving vehicles as they optimize speed to pass smoothly through intersections, reducing their carbon emissions as well.

In some cases, eco-driving could also increase vehicle throughput while minimizing emissions. However, Wu cautions that increasing throughput could result in more drivers taking to the roads, reducing emissions benefits.

And while their analysis of widely used safety metrics known as surrogate safety measures, such as time to collision, suggests that eco-driving is as safe as human driving, it could cause unexpected behavior in human drivers. More research is needed to fully understand potential safety impacts, Wu says.

Their results also show that eco-driving could provide even greater benefits when combined with alternative transportation decarbonization solutions. For instance, 20 percent eco-driving adoption in San Francisco would cut emission levels by 7 percent, but when combined with the projected adoption of hybrid and electric vehicles, it would cut emissions by 17 percent.

“This is a first attempt to systematically quantify network-wide environmental benefits of eco-driving. This is a great research effort that will serve as a key reference for others to build on in the assessment of eco-driving systems,” says Hesham Rakha, the Samuel L. Pritchard Professor of Engineering at Virginia Tech, who was not involved with this research.

And while the researchers focus on carbon emissions, the benefits are highly correlated with improvements in fuel consumption, energy use, and air quality.

“This is almost a free intervention. We already have smartphones in our cars, and we are rapidly adopting cars with more advanced automation features. For something to scale quickly in practice, it must be relatively simple to implement and shovel-ready. Eco-driving fits that bill,” Wu says.

This work is funded, in part, by Amazon and the Utah Department of Transportation.


School of Architecture and Planning welcomes new faculty for 2025

Four new professors join the Department of Architecture and MIT Media Lab.


Four new faculty members join the School of Architecture and Planning (SA+P) this fall, offering the MIT community creativity, knowledge, and scholarship in multidisciplinary roles.

“These individuals add considerable strength and depth to our faculty,” says Hashim Sarkis, dean of the School of Architecture and Planning. “We are excited for the academic vigor they bring to research and teaching.”

Karrie G. Karahalios ’94, MEng ’95, SM ’97, PhD ’04 joins the MIT Media Lab as a full professor of media arts and sciences. Karahalios is a pioneer in the exploration of social media and of how people communicate in environments that are increasingly mediated by algorithms that, as she has written, “shape the world around us.” Her work combines computing, systems, artificial intelligence, anthropology, sociology, psychology, game theory, design, and infrastructure studies. Karahalios’ work has received numerous honors including the National Science Foundation CAREER Award, Alfred P. Sloan Research Fellowship, SIGMOD Best Paper Award, and recognition as an ACM Distinguished Member.

Pat Pataranutaporn SM ’20, PhD ’24 joins the MIT Media Lab as an assistant professor of media arts and sciences. A visionary technologist, scientist, and designer, Pataranutaporn explores the frontier of human-AI interaction, inventing and investigating AI systems that support human thriving. His research focuses on how personalized AI systems can amplify human cognition, from learning and decision-making to self-development, reflection, and well-being. Pataranutaporn will co-direct the Advancing Humans with AI Program.

Mariana Popescu joins the Department of Architecture as an assistant professor with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. Popescu is a computational architect and structural designer with a strong interest and experience in innovative ways of approaching the fabrication process and use of materials in construction. Her area of expertise is computational and parametric design, with a focus on digital fabrication and sustainable design. Her extensive involvement in projects related to promoting sustainability has led to a multilateral development of skills, which combine the fields of architecture, engineering, computational design, and digital fabrication. Popescu earned her doctorate at ETH Zurich. She was named a “Pioneer” on the MIT Technology Review global list of “35 innovators under 35” in 2019.

Holly Samuelson joins the Department of Architecture as an associate professor in the Building Technology Program at MIT, teaching architectural technology courses. Her teaching and research focus on issues of building design that impact human and environmental health. Her current projects harness advanced building simulation to investigate issues of greenhouse gas emissions, heat vulnerability, and indoor environmental quality while considering the future of buildings in a changing electricity grid. Samuelson has co-authored over 40 peer-reviewed papers, winning a best paper award from the journal Energy and Buildings. As a recognized expert in architectural technology, she has been featured in news outlets including The Washington Post, The Boston Globe, the BBC, and The Wall Street Journal. Samuelson earned her doctor of design from Harvard University Graduate School of Design.


Professor Emeritus Peter Temin, influential and prolific economic historian, dies at 87

The longtime MIT scholar and former department head used the tools of economics to shed new light on historical events and their profound implications for today’s society.


Peter Temin PhD ’64, the MIT Elisha Gray II Professor of Economics, emeritus, passed away on Aug. 4. He was 87. 

Temin was a preeminent economic historian whose work spanned a remarkable range of topics, from the British Industrial Revolution and Roman economic history to the causes of the Great Depression and, later in his career, the decline of the American middle class. He also made important contributions to modernizing the field of economic history through his systematic use of economic theory and data analysis.

“Peter was a dedicated teacher and a wonderful colleague, who could bring economic history to life like few before or since,” says Jonathan Gruber, Ford Professor and chair of the Department of Economics. “As an undergraduate at MIT, I knew Peter as an engaging teacher and UROP [Undergraduate Research Opportunities Program] supervisor. Later, as a faculty member, I knew him as a steady and supportive colleague. A great person to talk to about everything, from research to politics to life at the Cape. Peter was the full package: a great scholar, a great teacher, and a dedicated public goods provider.”

When Temin began his career, the field of economic history was undergoing a reorientation within the profession. Led by giants like Paul Samuelson and Robert Solow, economics had become a more quantitative, mathematically rigorous discipline, and economic historians responded by embracing the new tools of economic theory and data collection. This “new economic history” (today also known as “cliometrics”) revolutionized the field by introducing statistical analysis and mathematical modeling to the study of the past. Temin was a pioneer of this new approach, using econometrics to reexamine key historical events and demonstrate how data analysis could lead to the overturning of long-held assumptions.

A prolific scholar who authored 17 books and edited six, Temin made important contributions to an incredibly diverse set of topics. “As kindly as he was brilliant, Peter was a unique type of academic,” says Harvard University Professor Claudia Goldin, a fellow economic historian and winner of the 2023 Nobel Prize in economic sciences. “He was a macroeconomist and an economic historian who later worked on today’s social problems. In between, he studied antitrust, health care, and the Roman economy.”

Temin’s earliest work focused on American industrial development during the 19th century and honed the signature approach that quickly made him a leading economic historian — combining rigorous economic theory with a deep understanding of historical context to reexamine the past. Temin was known for his extensive analysis of the Great Depression, which often challenged prevailing wisdom. By arguing that factors beyond monetary policy — including the gold standard and a decline in consumer spending — were critical drivers of the crisis, Temin helped recast how economists think about the catastrophe and the role of monetary policy in economic downturns.

As his career progressed, Temin’s work increasingly expanded to include the economic history of other regions and periods. His later work on the Great Depression placed a greater emphasis on the international context of the crisis, and he made significant contributions to our understanding of the drivers of the British Industrial Revolution and the nature of the Roman economy.

“Peter Temin was a giant in the field of economic history, with work touching every aspect of the field and original ideas backed by careful research,” says Daron Acemoglu, Institute Professor and recipient of the 2024 Nobel Prize in economics. “He challenged the modern view of the Industrial Revolution that emphasized technological changes in a few industries, pointing instead to a broader transformation of the British economy. He took on the famous historian of the ancient world, Moses Finley, arguing that slavery notwithstanding, markets in the Roman economy — especially land markets — worked. Peter’s influence and contributions have been long-lasting and will continue to be so.”

Temin was born in Philadelphia in 1937. His parents were activists who emphasized social responsibility, and his older brother, Howard, became a geneticist and virologist who shared the 1975 Nobel Prize in medicine. Temin received his BA from Swarthmore College in 1959 and went on to earn his PhD in Economics from MIT in 1964. He was a junior fellow of Harvard University’s Society of Fellows from 1962 to 1965.

Temin started his career as an assistant professor of industrial history at the MIT Sloan School of Management before being hired by the Department of Economics in 1967. He served as department chair from 1990 to 1993 and held the Elisha Gray II professorship from 1993 to 2009. Temin won a Guggenheim Fellowship in 2001, and served as president of the Economic History Association (1995-96) and the Eastern Economic Association (2001-02).

At MIT, Temin’s scholarly achievements were matched by a deep commitment to engaging students as a teacher and advisor. “As a researcher, Peter was able to zero in on the key questions around a topic and find answers where others had been flailing,” says Christina Romer, chair of the Council of Economic Advisers under President Obama and a former student and advisee. “As a teacher, he managed to draw sleepy students into a rousing discussion that made us think we had figured out the material on our own, when, in fact, he had been masterfully guiding us. And as a mentor, he was unfailingly supportive and generous with both his time and his vast knowledge of economic history. I feel blessed to have been one of his students.”

When he became the economics department head in 1990, Temin prioritized hiring newly minted PhDs and other junior faculty. This foresight continues to pay dividends — his junior hires included Daron Acemoglu and Abhijit Banerjee, and he launched the recruiting of Bengt Holmström for a senior faculty position. All three went on to win Nobel Prizes and have been pillars of economics research and education at MIT.

Temin remained an active researcher and author after his retirement in 2009. Much of his later work turned toward the contemporary American economy and its deep-seated divisions. In his influential 2017 book, “The Vanishing Middle Class: Prejudice and Power in a Dual Economy,” he argued that the United States had become a “dual economy,” with a prosperous finance, technology, and electronics sector on one hand and, on the other, a low-wage sector characterized by stagnant opportunity.

“There are echoes of Temin’s later writings in current department initiatives, such as the Stone Center on Inequality and Shaping the Future of Work,” notes Gruber. “Temin was in many ways ahead of the curve in treating inequality as an issue of central importance for our discipline.”

In “The Vanishing Middle Class,” Temin also explored the role that historical events, particularly the legacy of slavery and its aftermath, played in creating and perpetuating economic divides. He further explored these themes in his last book, “Never Together: The Economic History of a Segregated America,” published in 2022. While Temin was perhaps best known for his work applying modern economic tools to the past, this later work showed that he was no less adept at the inverse: using historical analysis to shed light on modern economic problems.

Temin was active with MIT Hillel throughout his career, and outside the Institute, he enjoyed staying active. He could often be seen walking or biking to MIT, and taking a walk around Jamaica Pond was a favorite activity in his last few months of life. Peter and his late wife Charlotte were also avid travelers and art collectors. He was a wonderful husband, father, and grandfather, who was deeply devoted to his family.

Temin is lovingly remembered by his daughter Elizabeth “Liz” Temin and three grandsons, Colin and Zachary Gibbons and Elijah Mendez. He was preceded in death by his wife, Charlotte Temin, a psychologist and educator, and his daughter, Melanie Temin Mendez.


Helping data storage keep up with the AI revolution

Storage systems from Cloudian, co-founded by an MIT alumnus, are helping businesses feed data-hungry AI models and agents at scale.


Artificial intelligence is changing the way businesses store and access their data. That’s because traditional data storage systems were designed to handle simple commands from a handful of users at once, whereas today, AI systems with millions of agents need to continuously access and process large amounts of data in parallel. Traditional data storage systems now have layers of complexity, which slows AI systems down because data must pass through multiple tiers before reaching the graphics processing units (GPUs) that are the brain cells of AI.

Cloudian, co-founded by Michael Tso ’93, SM ’93 and Hiroshi Ohta, is helping storage keep up with the AI revolution. The company has developed a scalable storage system for businesses that helps data flow seamlessly between storage and AI models. The system reduces complexity by applying parallel computing to data storage, consolidating AI functions and data onto a single parallel-processing platform that stores, retrieves, and processes scalable datasets, with direct, high-speed transfers between storage and GPUs and CPUs.

Cloudian’s integrated storage-computing platform simplifies the process of building commercial-scale AI tools and gives businesses a storage foundation that can keep up with the rise of AI.

“One of the things people miss about AI is that it’s all about the data,” Tso says. “You can’t get a 10 percent improvement in AI performance with 10 percent more data or even 10 times more data — you need 1,000 times more data. Being able to store that data in a way that’s easy to manage, and in such a way that you can embed computations into it so you can run operations while the data is coming in without moving the data — that’s where this industry is going.”

From MIT to industry

As an undergraduate at MIT in the 1990s, Tso was introduced by Professor William Dally to parallel computing — a type of computation in which many calculations occur simultaneously. Tso also worked on parallel computing with Associate Professor Greg Papadopoulos.

“It was an incredible time because most schools had one supercomputing project going on — MIT had four,” Tso recalls.

As a graduate student, Tso worked with MIT senior research scientist David Clark, a computing pioneer who contributed to the internet’s early architecture, particularly the transmission control protocol (TCP) that delivers data between systems.

“As a graduate student at MIT, I worked on disconnected and intermittent networking operations for large scale distributed systems,” Tso says. “It’s funny — 30 years on, that’s what I’m still doing today.”

Following his graduation, Tso worked at Intel’s Architecture Lab, where he invented data synchronization algorithms used by Blackberry. He also created specifications for Nokia that ignited the ringtone download industry. He then joined Inktomi, a startup co-founded by Eric Brewer SM ’92, PhD ’94 that pioneered search and web content distribution technologies.

In 2001, Tso started Gemini Mobile Technologies with Joseph Norton ’93, SM ’93 and others. The company went on to build the world’s largest mobile messaging systems to handle the massive data growth from camera phones. Then, in the late 2000s, cloud computing became a powerful way for businesses to rent virtual servers as they grew their operations. Tso noticed the amount of data being collected was growing far faster than the speed of networking, so he decided to pivot the company.

“Data is being created in a lot of different places, and that data has its own gravity: It’s going to cost you money and time to move it,” Tso explains. “That means the end state is a distributed cloud that reaches out to edge devices and servers. You have to bring the cloud to the data, not the data to the cloud.”

Tso officially launched Cloudian out of Gemini Mobile Technologies in 2012, with a new emphasis on helping customers with scalable, distributed, cloud-compatible data storage.

“What we didn’t see when we first started the company was that AI was going to be the ultimate use case for data on the edge,” Tso says.

Although Tso’s research at MIT began more than two decades ago, he sees strong connections between what he worked on and the industry today.

“It’s like my whole life is playing back because David Clark and I were dealing with disconnected and intermittently connected networks, which are part of every edge use case today, and Professor Dally was working on very fast, scalable interconnects,” Tso says, noting that Dally is now the senior vice president and chief scientist at the leading AI company NVIDIA. “Now, when you look at the modern NVIDIA chip architecture and the way they do interchip communication, it’s got Dally’s work all over it. With Professor Papadopoulos, I worked on accelerating application software with parallel computing hardware without having to rewrite the applications, and that’s exactly the problem we are trying to solve with NVIDIA. Coincidentally, all the stuff I was doing at MIT is playing out.”

Today Cloudian’s platform uses an object storage architecture in which all kinds of data — documents, videos, sensor data — are stored as unique objects with metadata. Object storage can manage massive datasets in a flat file structure, making it ideal for unstructured data and AI systems, but it traditionally hasn’t been able to send data directly to AI models without the data first being copied into a computer’s memory system, creating latency and energy bottlenecks for businesses.

In July, Cloudian announced that it has extended its object storage system with a vector database that stores data in a form that is immediately usable by AI models. As data are ingested, Cloudian computes their vector form in real time to power AI tools like recommender engines, search, and AI assistants. Cloudian also announced a partnership with NVIDIA that allows its storage system to work directly with the AI company’s GPUs. Cloudian says the new system enables even faster AI operations and reduces computing costs.
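
To make that ingest-time vectorization concrete, the minimal Python sketch below stores an embedding alongside each object’s metadata as the object is written, so a nearest-vector lookup can run immediately. The ObjectStore class and embed function are invented stand-ins, not Cloudian’s API.

```python
# Illustrative sketch only: compute a vector as an object is ingested and
# store it next to the object's metadata. Not Cloudian's actual interface.
import hashlib
from typing import Dict, List


def embed(payload: bytes, dim: int = 8) -> List[float]:
    """Toy stand-in for a real embedding model: derive a fixed-length vector."""
    digest = hashlib.sha256(payload).digest()
    return [b / 255.0 for b in digest[:dim]]


class ObjectStore:
    def __init__(self) -> None:
        self._objects: Dict[str, dict] = {}

    def put(self, key: str, payload: bytes, metadata: dict) -> None:
        # Vectorize at ingest so the data is immediately usable by AI tools.
        self._objects[key] = {
            "payload": payload,
            "metadata": metadata,
            "vector": embed(payload),
        }

    def nearest(self, query: bytes) -> str:
        """Return the key of the stored object whose vector is closest to the query."""
        q = embed(query)

        def dist(v: List[float]) -> float:
            return sum((a - b) ** 2 for a, b in zip(q, v))

        return min(self._objects, key=lambda k: dist(self._objects[k]["vector"]))


store = ObjectStore()
store.put("doc-1", b"maintenance log for robot arm 7", {"type": "text"})
store.put("vid-1", b"camera feed, assembly line 3", {"type": "video"})
print(store.nearest(b"robot arm service history"))
```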

“NVIDIA contacted us about a year and a half ago because GPUs are useful only with data that keeps them busy,” Tso says. “Now people are realizing it’s easier to move the AI to the data than it is to move huge datasets. Our storage systems embed a lot of AI functions, so we’re able to pre- and post-process data for AI near where we collect and store the data.”

AI-first storage

Cloudian is helping about 1,000 companies around the world get more value out of their data, including large manufacturers, financial service providers, health care organizations, and government agencies.

Cloudian’s storage platform is helping one large automaker, for instance, use AI to determine when each of its manufacturing robots needs to be serviced. Cloudian is also working with the National Library of Medicine to store research articles and patents, and the National Cancer Database to store DNA sequences of tumors — rich datasets that AI models could process to help researchers develop new treatments or gain new insights.

“GPUs have been an incredible enabler,” Tso says. “Moore’s Law doubles the amount of compute every two years, but GPUs are able to parallelize operations on chips, so you can network GPUs together and shatter Moore’s Law. That scale is pushing AI to new levels of intelligence, but the only way to make GPUs work hard is to feed them data at the same speed that they compute — and the only way to do that is to get rid of all the layers between them and your data.”


AI helps chemists develop tougher plastics

Researchers created polymers that are more resistant to tearing by incorporating stress-responsive molecules identified by a machine-learning model.


A new strategy for strengthening polymer materials could lead to more durable plastics and cut down on plastic waste, according to researchers at MIT and Duke University.

Using machine learning, the researchers identified crosslinker molecules that can be added to polymer materials, allowing them to withstand more force before tearing. These crosslinkers belong to a class of molecules known as mechanophores, which change their shape or other properties in response to mechanical force.

“These molecules can be useful for making polymers that would be stronger in response to force. You apply some stress to them, and rather than cracking or breaking, you instead see something that has higher resilience,” says Heather Kulik, the Lammot du Pont Professor of Chemical Engineering at MIT, who is also a professor of chemistry and the senior author of the study.

The crosslinkers that the researchers identified in this study are iron-containing compounds known as ferrocenes, which until now had not been broadly explored for their potential as mechanophores. Experimentally evaluating a single mechanophore can take weeks, but the researchers showed that they could use a machine-learning model to dramatically speed up this process.

MIT postdoc Ilia Kevlishvili is the lead author of the open-access paper, which appeared Friday in ACS Central Science. Other authors include Jafer Vakil, a Duke graduate student; David Kastner and Xiao Huang, both MIT graduate students; and Stephen Craig, a professor of chemistry at Duke.

The weakest link

Mechanophores are molecules that respond to force in unique ways, typically by changing their color, structure, or other properties. In the new study, the MIT and Duke team wanted to investigate whether they could be used to help make polymers more resilient to damage.

The new work builds on a 2023 study from Craig and Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry at MIT, and their colleagues. In that work, the researchers found that, surprisingly, incorporating weak crosslinkers into a polymer network can make the overall material stronger. When materials with these weak crosslinkers are stretched to the breaking point, any cracks propagating through the material try to avoid the stronger bonds and go through the weaker bonds instead. This means the crack has to break more bonds than it would if all of the bonds were the same strength.

To find new ways to exploit that phenomenon, Craig and Kulik joined forces to try to identify mechanophores that could be used as weak crosslinkers.

“We had this new mechanistic insight and opportunity, but it came with a big challenge: Of all possible compositions of matter, how do we zero in on the ones with the greatest potential?” Craig says. “Full credit to Heather and Ilia for both identifying this challenge and devising an approach to meet it.”

Discovering and characterizing mechanophores is a difficult task that requires either time-consuming experiments or computationally intense simulations of molecular interactions. Most of the known mechanophores are organic compounds, such as cyclobutane, which was used as a crosslinker in the 2023 study.

In the new study, the researchers wanted to focus on molecules known as ferrocenes, which are believed to hold potential as mechanophores. Ferrocenes are organometallic compounds that have an iron atom sandwiched between two carbon-containing rings. Those rings can have different chemical groups added to them, which alter their chemical and mechanical properties.

Many ferrocenes are used as pharmaceuticals or catalysts, and a handful are known to be good mechanophores, but most have not been evaluated for that use. Experimental tests on a single potential mechanophore can take several weeks, and computational simulations, while faster, still take a couple of days. Evaluating thousands of candidates using these strategies is a daunting task.

Realizing that a machine-learning approach could dramatically speed up the characterization of these molecules, the MIT and Duke team decided to use a neural network to identify ferrocenes that could be promising mechanophores.

They began with information from a database known as the Cambridge Structural Database, which contains the structures of 5,000 different ferrocenes that have already been synthesized.

“We knew that we didn’t have to worry about the question of synthesizability, at least from the perspective of the mechanophore itself. This allowed us to pick a really large space to explore with a lot of chemical diversity, that also would be synthetically realizable,” Kevlishvili says.

First, the researchers performed computational simulations for about 400 of these compounds, allowing them to calculate how much force is necessary to pull atoms apart within each molecule. For this application, they were looking for molecules that would break apart quickly, as these weak links could make polymer materials more resistant to tearing.

Then they used this data, along with information on the structure of each compound, to train a machine-learning model. This model was able to predict the force needed to activate the mechanophore, which in turn influences resistance to tearing, for the remaining 4,500 compounds in the database, plus an additional 7,000 compounds that are similar to those in the database but have some atoms rearranged.
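
The overall screening workflow, in which a small set of compounds is labeled with expensive simulations and a learned model then ranks a much larger library, can be sketched as follows. The molecular descriptors here are random placeholders, and a plain least-squares regressor stands in for the neural network the team actually trained.

```python
# Illustrative sketch only: surrogate modeling for mechanophore screening.
# Simulated labels for a small subset, a cheap learned model for the rest.
import numpy as np

rng = np.random.default_rng(0)

# Pretend descriptors for 400 "simulated" ferrocenes (e.g., substituent bulk,
# ring-substituent interaction terms), plus their simulated activation forces.
X_sim = rng.normal(size=(400, 5))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
y_sim = X_sim @ true_w + rng.normal(scale=0.1, size=400)  # activation force, arbitrary units

# Fit the surrogate on the simulated subset.
w, *_ = np.linalg.lstsq(X_sim, y_sim, rcond=None)

# Screen a much larger library cheaply with the fitted model.
X_library = rng.normal(size=(11_500, 5))
predicted_force = X_library @ w

# Rank candidates: weak links (low activation force) are the interesting ones here.
top_candidates = np.argsort(predicted_force)[:100]
print(top_candidates[:10])
```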

The researchers discovered two main features that seemed likely to increase tear resistance. One was interactions between the chemical groups that are attached to the ferrocene rings. Additionally, the presence of large, bulky molecules attached to both rings of the ferrocene made the molecule more likely to break apart in response to applied forces.

While the first of these features was not surprising, the second trait was not something a chemist would have predicted beforehand, and could not have been detected without AI, the researchers say. “This was something truly surprising,” Kulik says.

Tougher plastics

Once the researchers identified about 100 promising candidates, Craig’s lab at Duke synthesized a polymer material incorporating one of them, known as m-TMS-Fc. Within the material, m-TMS-Fc acts as a crosslinker, connecting the polymer strands that make up polyacrylate, a type of plastic.

By applying force to each polymer until it tore, the researchers found that the weak m-TMS-Fc linker produced a strong, tear-resistant polymer. This polymer turned out to be about four times tougher than polymers made with standard ferrocene as the crosslinker.

“That really has big implications because if we think of all the plastics that we use and all the plastic waste accumulation, if you make materials tougher, that means their lifetime will be longer. They will be usable for a longer period of time, which could reduce plastic production in the long term,” Kevlishvili says.

The researchers now hope to use their machine-learning approach to identify mechanophores with other desirable properties, such as the ability to change color or become catalytically active in response to force. Such materials could be used as stress sensors or switchable catalysts, and they could also be useful for biomedical applications such as drug delivery.

In those studies, the researchers plan to focus on ferrocenes and other metal-containing mechanophores that have already been synthesized but whose properties are not fully understood.

“Transition metal mechanophores are relatively underexplored, and they’re probably a little bit more challenging to make,” Kulik says. “This computational workflow can be broadly used to enlarge the space of mechanophores that people have studied.”

The research was funded by the National Science Foundation Center for the Chemistry of Molecularly Optimized Networks (MONET).


Youssef Marzouk appointed associate dean of MIT Schwarzman College of Computing

AeroAstro professor and outgoing co-director of the Center for Computational Science and Engineering will play a vital role in fostering community for bilingual computing faculty.


Youssef Marzouk ’97, SM ’99, PhD ’04, the Breene M. Kerr (1951) Professor in the Department of Aeronautics and Astronautics (AeroAstro) at MIT, has been appointed associate dean of the MIT Schwarzman College of Computing, effective July 1.

Marzouk, who has served as co-director of the Center for Computational Science and Engineering (CCSE) since 2018, will work in his new role to foster a stronger community among bilingual computing faculty across MIT. A key aspect of this work will be providing additional structure and support for faculty members who have been hired into shared positions in departments and the college.

Shared faculty at MIT represent a new generation of scholars whose research and teaching integrate the forefront of computing and another discipline (positions that were initially envisioned as “bridge faculty” in the 2019 Provost’s Task Force reports). Since 2021, the MIT Schwarzman College of Computing has been steadily growing this cohort. In collaboration with 24 departments across the Institute, 20 faculty have been hired in shared positions: three in the School of Architecture and Planning; four in the School of Engineering; seven in the School of Humanities, Arts, and Social Sciences; four in the School of Science; and two in the MIT Sloan School of Management.

“Youssef’s experience leading cross-cutting efforts in research and education in CCSE is of direct relevance to the broader goal of bringing MIT’s computing bilinguals together in meaningful ways. His insights and collaborative spirit position him to make a lasting impact in this role. We are delighted to welcome him to this new leadership position in the college,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

“I’m excited that Youssef has agreed to take on this important role in the college. His thoughtful approach and nuanced understanding of MIT’s academic landscape make him ideally suited to support our shared faculty community. I look forward to working closely with him,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing, head of the Department of Electrical Engineering and Computer Science (EECS), and the MathWorks Professor of EECS.

Marzouk’s research interests lie at the intersection of computational mathematics, statistical inference, and physical modeling. He and his students develop and analyze new methodologies for uncertainty quantification, Bayesian computation, and machine learning in complex physical systems. His recent work has centered on algorithms for data assimilation and inverse problems; high-dimensional learning and surrogate modeling; optimal experimental design; and transportation of measure as a tool for statistical inference and generative modeling. He is strongly motivated by the interplay between theory, methods, and diverse applications, and has collaborated with other researchers at MIT on topics ranging from materials science to fusion energy to the geosciences.

In 2018, he was appointed co-director of CCSE with Nicolas Hadjiconstantinou, the Quentin Berg Professor of Mechanical Engineering. An interdisciplinary research and education center dedicated to advancing innovative computational methods and applications, CCSE became one of the academic units of the MIT Schwarzman College of Computing when it formally launched in 2020.

CCSE has grown significantly under Marzouk and Hadjiconstantinou’s leadership. Most recently, they spearheaded the design and launch of the center’s new standalone PhD program in computational science and engineering, which will welcome its second cohort in September. Collectively, CCSE’s standalone and interdisciplinary PhD programs currently enroll more than 70 graduate students.

Marzouk is also a principal investigator in the MIT Laboratory for Information and Decision Systems, and a core member of MIT’s Statistics and Data Science Center.

Among his many honors and awards, he was named a fellow of the Society for Industrial and Applied Mathematics (SIAM) in 2025. He was elected associate fellow of the American Institute of Aeronautics and Astronautics (AIAA) in 2018 and received the National Academy of Engineering Frontiers of Engineering Award in 2012, the MIT Junior Bose Award for Teaching Excellence in 2012, and the DOE Early Career Research Award in 2010. His recent external engagement includes service on multiple journal editorial boards; co-chairing major SIAM conferences and elected service on various SIAM committees; leadership of scientific advisory boards, including that of the Institute for Computational and Experimental Research in Mathematics (ICERM); and organizing many other international programs and workshops.

At MIT, in addition to co-directing CCSE, Marzouk has served as both graduate and undergraduate officer of the Department of AeroAstro. He also leads the MIT Center for the Exascale Simulation of Materials in Extreme Environments, an interdisciplinary computing effort sponsored by the U.S. Department of Energy’s Predictive Science Academic Alliance program.

Marzouk received his bachelor’s, master’s, and doctoral degrees from MIT. He spent four years at Sandia National Laboratories, as a Truman Fellow and a member of the technical staff, before joining the MIT faculty in 2009.


Ushering in a new era of suture-free tissue reconstruction for better healing

MIT spinout Tissium recently secured FDA marketing authorization of a biopolymer platform for nerve repair.


When surgeons repair tissues, they’re currently limited to mechanical solutions like sutures and staples, which can cause their own damage, or meshes and glues that may not adequately bond with tissues and can be rejected by the body.

Now, Tissium is offering surgeons a new solution based on a biopolymer technology first developed at MIT. The company’s flexible, biocompatible polymers conform to surrounding tissues and, once activated with blue light, bond to them to repair torn tissue.

“Our goal is to make this technology the new standard in fixation,” says Tissium co-founder Maria Pereira, who began working with polymers as a PhD student through the MIT Portugal Program. “Surgeons have been using sutures, staples, or tacks for decades or centuries, and they’re quite penetrating. We’re trying to help surgeons repair tissues in a less traumatic way.”

In June, Tissium reached a major milestone when it received marketing authorization from the Food and Drug Administration for its non-traumatic, sutureless solution to repair peripheral nerves. The FDA’s De Novo marketing authorization acknowledges the novelty of the company’s platform and enables commercialization of the MIT spinout’s first product. It came after studies showing the platform helped patients regain full flexion and extension of their injured fingers or toes without pain.

Tissium’s polymers can work with a range of tissue types, from nerves to cardiovascular tissue to the abdominal wall, and the company is eager to apply its programmable platform to other areas.

“We really think this approval is just the beginning,” Tissium CEO Christophe Bancel says. “It was a critical step, and it wasn’t easy, but we knew if we could get the first one, it would begin a new phase for the company. Now it’s our responsibility to show this works with other applications and can benefit more patients.”

From lab to patients

Years before he co-founded Tissium, Jeff Karp was a postdoc in the lab of MIT Institute Professor Robert Langer, where he worked to develop elastic materials that were biodegradable and photocurable for a range of clinical applications. After graduation, Karp became an affiliate faculty member in the Harvard-MIT Program in Health Sciences and Technology. He is also a faculty member at Harvard Medical School and Brigham and Women’s Hospital. In 2008, Pereira joined Karp’s lab as a visiting PhD student through funding from the MIT Portugal Program, tuning the polymers’ thickness and ability to repel water to optimize the material’s ability to attach to wet tissue.

“Maria took this polymer platform and turned it into a fixation platform that could be used in many areas in medicine,” Karp recalls. “[The cardiac surgeon] Pedro del Nido at Boston Children’s Hospital had alerted us to this major problem of a birth defect that causes holes in the heart of newborns. There were no suitable solutions, so that was one of the applications we began working on that Maria led.”

Pereira and her collaborators went on to demonstrate they could use the biopolymers to seal holes in the hearts of rats and pigs without bleeding or complications. Bancel, a pharmaceutical industry veteran, was introduced to the technology when he met with Karp, Pereira, and Langer during a visit to Cambridge in 2012, and he spent the next few months speaking with surgeons.

“I spoke with about 15 surgeons from a range of fields about their challenges,” Bancel says. “I realized if the technology could work in these settings, it would address a big set of challenges. All of the surgeons were excited about how the material could impact their practice.”

Bancel worked with MIT’s Technology Licensing Office to take the biopolymer technology out of the lab, including patents from Karp’s original work in Langer’s lab. Pereira moved to Paris upon completing her PhD, and Tissium was officially founded in 2013 by Pereira, Bancel, Karp, Langer, and others.

“The MIT and Harvard ecosystems are at the core of our success,” Pereira says. “From the get-go, we tried to solve problems that would be meaningful for patients. We weren’t just doing research for the sake of doing research. We started in the cardiovascular space, but we quickly realized we wanted to create new standards for tissue repair and tissue fixation.”

After licensing the technology, Tissium had a lot of work to do to make it scalable commercially. The founders partnered with companies that specialize in synthesizing polymers and created a method to 3D print a casing for polymer-wrapped nerves.

“We quickly realized the product is a combination of the polymer and the accessories,” Bancel says. “It was about how surgeons used the product. We had to design the right accessories for the right procedures.”

The new system is sorely needed. A recent meta-analysis of nerve repairs using sutures found that only 54 percent of patients achieved highly meaningful recovery following surgery. By not using sutures, Tissium’s flexible polymer technology offers an atraumatic way to reconnect nerves. In a recent trial of 12 patients, all who completed follow-up regained full flexion and extension of their injured digits and reported no pain 12 months after surgery.

“The current standard of care is suboptimal,” Pereira says. “There are variabilities in the outcome, sutures can create trauma, tension, misalignment, and all that can impact patient outcomes, from sensation to motor function and overall quality of life.”

Trauma-free tissue repair

Today Tissium has six products in development, including one ongoing clinical trial in the hernia space and another set to begin soon for a cardiovascular application.

“Early on, we had the intuition that if this were to work in one application, it would be surprising if it didn’t work in many other applications,” Bancel says.

The company also believes its 3D-printed production process will make it easier to expand.

“Not only can this be used for tissue fixation broadly across medicine, but we can leverage the 3D printing method to make all kinds of implantable medical devices from the same polymeric platform,” Karp explains. “Our polymers are programmable, so we can program the degradation, the mechanical properties, and this could open up the door to other exciting breakthroughs in medical devices with new capabilities.”

Now Tissium’s team is encouraging people in the medical field to reach out if they think their platform could improve on the standard of care — and they’re mindful that the first approval is a milestone worth celebrating unto itself.

“It’s the best possible outcome for your research to generate not just a paper, but a treatment with potential to improve the standard of care along with patients’ lives,” Karp says. “It’s the dream, and it’s an incredible feeling to be able to celebrate this with all the collaborators that have been involved along the way.”

Langer adds, “I agree with Jeff. It’s wonderful to see the research we started at MIT reach the point of FDA approval and change people’s lives.”


How the brain distinguishes oozing fluids from solid objects

A new study finds parts of the brain’s visual cortex are specialized to analyze either solid objects or flowing materials like water or sand.


Imagine a ball bouncing down a flight of stairs. Now think about a cascade of water flowing down those same stairs. The ball and the water behave very differently, and it turns out that your brain has different regions for processing visual information about each type of physical matter.

In a new study, MIT neuroscientists have identified parts of the brain’s visual cortex that respond preferentially when you look at “things” — that is, rigid or deformable objects like a bouncing ball. Other brain regions are more activated when looking at “stuff” — liquids or granular substances such as sand.

This distinction, which has never been seen in the brain before, may help the brain plan how to interact with different kinds of physical materials, the researchers say.

“When you’re looking at some fluid or gooey stuff, you engage with it in a different way than you do with a rigid object. With a rigid object, you might pick it up or grasp it, whereas with fluid or gooey stuff, you probably are going to have to use a tool to deal with it,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience; a member of the McGovern Institute for Brain Research and MIT’s Center for Brains, Minds, and Machines; and the senior author of the study.

MIT postdoc Vivian Paulun, who is joining the faculty of the University of Wisconsin at Madison this fall, is the lead author of the paper, which appears today in the journal Current Biology. RT Pramod, an MIT postdoc, and Josh Tenenbaum, an MIT professor of brain and cognitive sciences, are also authors of the study.

Stuff vs. things

Decades of brain imaging studies, including early work by Kanwisher, have revealed regions in the brain’s ventral visual pathway that are involved in recognizing the shapes of 3D objects, including an area called the lateral occipital complex (LOC). A region in the brain’s dorsal visual pathway, known as the frontoparietal physics network (FPN), analyzes the physical properties of materials, such as mass or stability.

Although scientists have learned a great deal about how these pathways respond to different features of objects, the vast majority of these studies have been done with solid objects, or “things.”

“Nobody has asked how we perceive what we call ‘stuff’ — that is, liquids or sand, honey, water, all sorts of gooey things. And so we decided to study that,” Paulun says.

These gooey materials behave very differently from solids. They flow rather than bounce, and interacting with them usually requires containers and tools such as spoons. The researchers wondered if these physical features might require the brain to devote specialized regions to interpreting them.

To explore how the brain processes these materials, Paulun used a software program designed for visual effects artists to create more than 100 video clips showing different types of things or stuff interacting with the physical environment. In these videos, the materials could be seen sloshing or tumbling inside a transparent box, being dropped onto another object, or bouncing or flowing down a set of stairs.

The researchers used functional magnetic resonance imaging (fMRI) to scan the visual cortex of people as they watched the videos. They found that both the LOC and the FPN respond to “things” and “stuff,” but that each pathway has distinctive subregions that respond more strongly to one or the other.
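
Conceptually, the comparison boils down to a contrast between each region’s average response to “stuff” videos and its average response to “things” videos, as in the toy Python sketch below; the numbers and region labels are invented and do not reflect the study’s actual analysis pipeline.

```python
# Illustrative sketch only: a toy "stuff minus things" contrast per region.
# Hypothetical response values in arbitrary units; not the study's pipeline.
import statistics

responses = {
    "LOC_subregion_A": {"things": [1.2, 1.1, 1.3], "stuff": [0.6, 0.7, 0.5]},
    "LOC_subregion_B": {"things": [0.5, 0.6, 0.4], "stuff": [1.0, 1.1, 0.9]},
}

for region, data in responses.items():
    contrast = statistics.mean(data["stuff"]) - statistics.mean(data["things"])
    preference = "stuff" if contrast > 0 else "things"
    print(f"{region}: stuff-minus-things contrast = {contrast:+.2f} (prefers {preference})")
```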

“Both the ventral and the dorsal visual pathway seem to have this subdivision, with one part responding more strongly to ‘things,’ and the other responding more strongly to ‘stuff,’” Paulun says. “We haven’t seen this before because nobody has asked that before.”

Roland Fleming, a professor of experimental psychology at Justus Liebig University of Giessen, described the findings as a “major breakthrough in the scientific understanding of how our brains represent the physical properties of our surrounding world.”

“We’ve known the distinction exists for a long time psychologically, but this is the first time that it’s been really mapped onto separate cortical structures in the brain. Now we can investigate the different computations that the distinct brain regions use to process and represent objects and materials,” says Fleming, who was not involved in the study.

Physical interactions

The findings suggest that the brain may have different ways of representing these two categories of material, similar to the artificial physics engines that are used to create video game graphics. These engines usually represent a 3D object as a mesh, while fluids are represented as sets of particles that can be rearranged.
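
By way of analogy only, the sketch below shows how a game-style engine might represent the two categories with different data structures: a fixed mesh for a rigid “thing” and a rearrangeable set of particles for “stuff.” The classes are toy examples, not any particular engine’s API.

```python
# Illustrative analogy only: "things" as a fixed mesh, "stuff" as particles.
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class RigidBodyMesh:
    """A 'thing': fixed vertices connected by faces; the shape moves as a unit."""
    vertices: List[Vec3]
    faces: List[Tuple[int, int, int]]  # indices into vertices


@dataclass
class FluidParticles:
    """'Stuff': a bag of particles whose positions rearrange freely as it flows."""
    positions: List[Vec3] = field(default_factory=list)
    velocities: List[Vec3] = field(default_factory=list)


ball = RigidBodyMesh(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
    faces=[(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)],
)
water = FluidParticles(positions=[(0.1, 0.2, 0.0), (0.3, 0.1, 0.0)],
                       velocities=[(0.0, -1.0, 0.0), (0.0, -0.9, 0.0)])
print(len(ball.faces), "faces;", len(water.positions), "particles")
```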

“The interesting hypothesis that we can draw from this is that maybe the brain, similar to artificial game engines, has separate computations for representing and simulating ‘stuff’ and ‘things.’ And that would be something to test in the future,” Paulun says.

The researchers also hypothesize that these regions may have developed to help the brain understand important distinctions that allow it to plan how to interact with the physical world. To further explore this possibility, the researchers plan to study whether the areas involved in processing rigid objects are also active when a brain circuit involved in planning to grasp objects is active.

They also hope to look at whether any of the areas within the FPN correlate with the processing of more specific features of materials, such as the viscosity of liquids or the bounciness of objects. And in the LOC, they plan to study how the brain represents changes in the shape of fluids and deformable substances.

The research was funded by the German Research Foundation, the U.S. National Institutes of Health, and a U.S. National Science Foundation grant to the Center for Brains, Minds, and Machines.