General news from MIT (Massachusetts Institute of Technology)

Here you will find the recent daily general news from MIT (Massachusetts Institute of Technology).

MIT News
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
New tool evaluates progress in reinforcement learning

“IntersectionZoo,” a benchmarking tool, uses a real-world traffic problem to test progress in deep reinforcement learning algorithms.


If there’s one thing that characterizes driving in any major city, it’s the constant stop-and-go as traffic lights change and as cars and trucks merge and separate and turn and park. This constant stopping and starting is extremely inefficient, increasing the pollution, including greenhouse gases, emitted per mile driven. 

One approach to counter this is known as eco-driving, which can be installed as a control system in autonomous vehicles to improve their efficiency.

How much of a difference could that make? Would the impact of such systems in reducing emissions be worth the investment in the technology? Addressing such questions is one of a broad category of optimization problems that have been difficult for researchers to address, and it has been difficult to test the solutions they come up with. These are problems that involve many different agents, such as the many different kinds of vehicles in a city, and different factors that influence their emissions, including speed, weather, road conditions, and traffic light timing.

“We got interested a few years ago in the question: Is there something that automated vehicles could do here in terms of mitigating emissions?” says Cathy Wu, the Thomas D. and Virginia W. Cabot Career Development Associate Professor in the Department of Civil and Environmental Engineering and the Institute for Data, Systems, and Society (IDSS) at MIT, and a principal investigator in the Laboratory for Information and Decision Systems. “Is it a drop in the bucket, or is it something to think about?” she wondered.

To address such a question involving so many components, the first requirement is to gather all available data about the system, from many sources. One is the layout of the network’s topology, Wu says, in this case a map of all the intersections in each city. Then there are U.S. Geological Survey data showing the elevations, to determine the grade of the roads. There are also data on temperature and humidity, data on the mix of vehicle types and ages, and on the mix of fuel types.

Eco-driving involves making small adjustments to minimize unnecessary fuel consumption. For example, as cars approach a traffic light that has turned red, “there’s no point in me driving as fast as possible to the red light,” she says. By just coasting, “I am not burning gas or electricity in the meantime.” If one car, such as an automated vehicle, slows down at the approach to an intersection, then the conventional, non-automated cars behind it will also be forced to slow down, so the impact of such efficient driving can extend far beyond just the car that is doing it.
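The coasting intuition above can be made concrete with a toy kinematics comparison. This is an illustrative sketch only; the masses, speeds, and distances are assumptions, not figures from the IntersectionZoo study.

```python
# Toy comparison of "race to the red light" vs. coasting, to illustrate
# why eco-driving saves energy. All numbers are illustrative assumptions.

def kinetic_energy(mass_kg, speed_ms):
    return 0.5 * mass_kg * speed_ms ** 2

mass = 1500.0          # typical passenger car, kg (assumption)
distance = 200.0       # meters to the red light (assumption)
time_to_green = 20.0   # seconds until the light turns green (assumption)

# Strategy 1: drive fast (15 m/s), brake to a stop, then reaccelerate.
# The car's kinetic energy is dissipated as heat in the brakes.
v_fast = 15.0
wasted = kinetic_energy(mass, v_fast)

# Strategy 2: coast at the speed that arrives just as the light turns green.
# No braking is needed, so no kinetic energy is thrown away at the stop line.
v_coast = distance / time_to_green   # 10 m/s

print(f"coasting speed: {v_coast:.1f} m/s")
print(f"energy dissipated by braking in strategy 1: {wasted/1000:.2f} kJ")
```

The difference compounds across every intersection and every following vehicle, which is why the aggregate effect is worth quantifying.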

That’s the basic idea behind eco-driving, Wu says. But to figure out the impact of such measures, “these are challenging optimization problems” involving many different factors and parameters, “so there is a wave of interest right now in how to solve hard control problems using AI.” 

The new benchmark system that Wu and her collaborators developed based on urban eco-driving, which they call “IntersectionZoo,” is intended to help address part of that need. The benchmark was described in detail in a paper presented at the 2025 International Conference on Learning Representations (ICLR) in Singapore.

Looking at approaches that have been used to address such complex problems, Wu says an important category of methods is multi-agent deep reinforcement learning (DRL), but a lack of adequate standard benchmarks to evaluate the results of such methods has hampered progress in the field.

The new benchmark is intended to address an important issue that Wu and her team identified two years ago, which is that with most existing deep reinforcement learning algorithms, when trained for one specific situation (e.g., one particular intersection), the result does not remain relevant when even small modifications are made, such as adding a bike lane or changing the timing of a traffic light, even when they are allowed to train for the modified scenario.
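The robustness evaluation described here can be sketched as scoring one fixed policy across many perturbed scenario variants, rather than only on the scenario it was trained for. The environment and policy below are toy stand-ins, not IntersectionZoo's actual API.

```python
# Minimal sketch of a generalizability evaluation: a policy "trained" for
# one light cycle is scored on perturbed cycles. Toy model, not the
# benchmark's real interface.
import random

def run_episode(policy, light_cycle_s, seed):
    """Toy traffic episode: return is higher when the policy's approach
    speed matches the (varied) light timing."""
    rng = random.Random(seed)
    target = 200.0 / light_cycle_s            # ideal approach speed
    noise = rng.gauss(0.0, 0.5)               # episode-to-episode variation
    return -abs(policy(light_cycle_s) - target) - abs(noise)

def trained_policy(light_cycle_s):
    # Pretend this policy was trained only on a 20-second light cycle,
    # so it always drives the speed that was optimal there.
    return 200.0 / 20.0

variants = [15.0, 20.0, 25.0, 30.0]           # perturbed light timings
for cycle in variants:
    returns = [run_episode(trained_policy, cycle, s) for s in range(100)]
    print(f"light cycle {cycle:4.0f} s -> mean return {sum(returns)/100:.2f}")
```

The policy scores well only on the 20-second cycle it was tuned for and degrades on the others, mirroring the non-generalizability problem the benchmark is designed to expose.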

In fact, Wu points out, this problem of non-generalizability “is not unique to traffic,” she says. “It goes back down all the way to canonical tasks that the community uses to evaluate progress in algorithm design.” But because most such canonical tasks do not involve making modifications, “it’s hard to know if your algorithm is making progress on this kind of robustness issue, if we don’t evaluate for that.”

While there are many benchmarks that are currently used to evaluate algorithmic progress in DRL, she says, “this eco-driving problem features a rich set of characteristics that are important in solving real-world problems, especially from the generalizability point of view, and that no other benchmark satisfies.” This is why the 1 million data-driven traffic scenarios in IntersectionZoo uniquely position it to advance progress in DRL generalizability. As a result, “this benchmark adds to the richness of ways to evaluate deep RL algorithms and progress.”

And as for the initial question about city traffic, one focus of ongoing work will be applying this newly developed benchmarking tool to address the particular case of how much impact on emissions would come from implementing eco-driving in automated vehicles in a city, depending on what percentage of such vehicles are actually deployed.

But Wu adds that “rather than making something that can deploy eco-driving at a city scale, the main goal of this study is to support the development of general-purpose deep reinforcement learning algorithms, that can be applied to this application, but also to all these other applications — autonomous driving, video games, security problems, robotics problems, warehousing, classical control problems.”

Wu adds that “the project’s goal is to provide this as a tool for researchers, that’s openly available.” IntersectionZoo, and the documentation on how to use it, are freely available at GitHub.

Wu is joined on the paper by lead authors Vindula Jayawardana, a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS); Baptiste Freydt, a graduate student from ETH Zurich; and co-authors Ao Qu, a graduate student in transportation; Cameron Hickert, an IDSS graduate student; and Zhongxia Yan PhD ’24. 


New molecular label could lead to simpler, faster tuberculosis tests

MIT chemists found a way to identify a complex sugar molecule in the cell walls of Mycobacterium tuberculosis, the world’s deadliest pathogen.


Tuberculosis, the world’s deadliest infectious disease, is estimated to infect around 10 million people each year and to kill more than 1 million annually. Once established in the lungs, the bacterium’s thick cell wall helps it fight off the host immune system.

Much of that cell wall is made from complex sugar molecules known as glycans, but it’s not well understood how those glycans help defend the bacteria. One reason is that there hasn’t been an easy way to label them inside cells.

MIT chemists have now overcome that obstacle, demonstrating that they can label a glycan called ManLAM using an organic molecule that reacts with specific sulfur-containing sugars. These sugars are found in only three bacterial species, the most notorious and prevalent of which is Mycobacterium tuberculosis, the microbe that causes TB.

After labeling the glycan, the researchers were able to visualize where it is located within the bacterial cell wall, and to study what happens to it throughout the first few days of tuberculosis infection of host immune cells.

The researchers now hope to use this approach to develop a diagnostic that could detect TB-associated glycans, either in culture or in a urine sample, which could offer a cheaper and faster alternative to existing diagnostics. Chest X-rays and molecular diagnostics are very accurate but are not always available in developing nations where TB rates are high. In those countries, TB is often diagnosed by culturing microbes from a sputum sample, but that test has a high false negative rate, and it can be difficult for some patients, especially children, to provide a sputum sample. This test also requires many weeks for the bacteria to grow, delaying diagnosis.

“There aren’t a lot of good diagnostic options, and there are some patient populations, including children, who have a hard time giving samples that can be analyzed. There’s a lot of impetus to develop very simple, fast tests,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and the senior author of the study.

MIT graduate student Stephanie Smelyansky is the lead author of the paper, which appears this week in the Proceedings of the National Academy of Sciences. Other authors include Chi-Wang Ma, an MIT postdoc; Victoria Marando PhD ’23; Gregory Babunovic, a postdoc at the Harvard T.H. Chan School of Public Health; So Young Lee, an MIT graduate student; and Bryan Bryson, an associate professor of biological engineering at MIT.

Labeling glycans

Glycans are found on the surfaces of most cells, where they perform critical functions such as mediating communication between cells. In bacteria, glycans help the microbes to enter host cells, and they also appear to communicate with the host immune system, in some cases blocking the immune response.

“Mycobacterium tuberculosis has a really elaborate cell envelope compared to other bacteria, and it’s a rich structure that’s composed of a lot of different glycans,” Smelyansky says. “Something that’s often underappreciated is the fact that these glycans can also interact with our host cells. When our immune cells recognize these glycans, instead of sending out a danger signal, it can send the opposite message, that there’s no danger.”

Glycans are notoriously difficult to tag with any kind of probe, because unlike proteins or DNA, they don’t have distinctive sequences or chemical reactivities that can be targeted. And unlike proteins, they are not genetically encoded, so cells can’t be genetically engineered to produce sugars labeled with fluorescent tags such as green fluorescent protein.

One of the key glycans in M. tuberculosis, known as ManLAM, contains a rare sugar known as MTX, which is unusual in that it has a thioether — a sulfur atom sandwiched between two carbon atoms. This chemical group presented an opportunity to use a small-molecule tag that had been previously developed for labeling methionine, an amino acid that contains a similar group.

The researchers showed that they could use this tag, known as an oxaziridine, to label ManLAM in M. tuberculosis. The researchers linked the oxaziridine to a fluorescent probe and showed that in M. tuberculosis, this tag showed up in the outer layer of the cell wall. When the researchers exposed the label to Mycobacterium smegmatis, a related bacterium that does not cause disease and does not have the sugar MTX, they saw no fluorescent signal.

“This is the first approach that really selectively allows us to visualize one glycan in particular,” Smelyansky says.

Better diagnostics

The researchers also showed that after labeling ManLAM in M. tuberculosis cells, they could track the cells as they infected immune cells called macrophages. Some tuberculosis researchers had hypothesized that the bacterial cells shed ManLAM once inside a host cell, and that those free glycans then interact with the host immune system. However, the MIT team found that the glycan appears to remain in the bacterial cell walls for at least the first few days of infection.

“The bacteria still have their cell walls attached to them. So it may be that some glycan is being released, but the majority of it is retained on the bacterial cell surface, which has never been shown before,” Smelyansky says.

The researchers now plan to use this approach to study what happens to the bacteria following treatment with different antibiotics, or immune stimulation of the macrophages. It could also be used to study in more detail how the bacterial cell wall is assembled, and how ManLAM helps bacteria get into macrophages and other cells.

“Having a handle to follow the bacteria is really valuable, and it will allow you to visualize processes, both in cells and in animal models, that were previously invisible,” Kiessling says.

She also hopes to use this approach to create new diagnostics for tuberculosis. There is currently a diagnostic in development that uses antibodies to detect ManLAM in a urine sample. However, this test only works well in patients with very active cases of TB, especially people who are immunosuppressed because of HIV or other conditions.

Using their small-molecule sensor instead of antibodies, the MIT team hopes to develop a more sensitive test that could detect ManLAM in the urine even when only small quantities are present.

“This is a beautifully elegant approach to selectively label the surface of mycobacteria, enabling real-time monitoring of cell wall dynamics in this important bacterial family. Such investigations will inform the development of novel strategies to diagnose, prevent, and treat mycobacterial disease, most notably tuberculosis, which remains a global health challenge,” says Todd Lowary, a distinguished research fellow at the Institute of Biological Chemistry, Academia Sinica, Taipei, Taiwan, who was not involved in the research.

The research was funded by the National Institute of Allergy and Infectious Disease, the National Institutes of Health, the National Science Foundation, and the Croucher Fellowship.


MIT physicists snap the first images of “free-range” atoms

The results will help scientists visualize never-before-seen quantum phenomena in real space.


MIT physicists have captured the first images of individual atoms freely interacting in space. The pictures reveal correlations among the “free-range” particles that until now were predicted but never directly observed. Their findings, appearing today in the journal Physical Review Letters, will help scientists visualize never-before-seen quantum phenomena in real space.

The images were taken using a technique developed by the team that first allows a cloud of atoms to move and interact freely. The researchers then turn on a lattice of light that briefly freezes the atoms in their tracks, and apply finely tuned lasers to quickly illuminate the suspended atoms, creating a picture of their positions before the atoms naturally dissipate.

The physicists applied the technique to visualize clouds of different types of atoms, and snapped a number of imaging firsts. The researchers directly observed atoms known as “bosons,” which bunched up in a quantum phenomenon to form a wave. They also captured atoms known as “fermions” in the act of pairing up in free space — a key mechanism that enables superconductivity.

“We are able to see single atoms in these interesting clouds of atoms and what they are doing in relation to each other, which is beautiful,” says Martin Zwierlein, the Thomas A. Frank Professor of Physics at MIT.

In the same journal issue, two other groups report using similar imaging techniques, including a team led by Nobel laureate Wolfgang Ketterle, the John D. MacArthur Professor of Physics at MIT. Ketterle’s group visualized enhanced pair correlations among bosons, while the other group, from École Normale Supérieure in Paris, led by Tarik Yefsah, a former postdoc in Zwierlein’s lab, imaged a cloud of noninteracting fermions.

The study by Zwierlein and his colleagues is co-authored by MIT graduate students Ruixiao Yao, Sungjae Chi, and Mingxuan Wang, and MIT assistant professor of physics Richard Fletcher.

Inside the cloud

A single atom is about one-tenth of a nanometer in diameter, which is one-millionth of the thickness of a strand of human hair. Unlike hair, atoms behave and interact according to the rules of quantum mechanics; it is their quantum nature that makes atoms difficult to understand. For example, we cannot simultaneously know precisely where an atom is and how fast it is moving.

Scientists can apply various methods to image individual atoms, including absorption imaging, where laser light shines onto the atom cloud and casts its shadow onto a camera screen.

“These techniques allow you to see the overall shape and structure of a cloud of atoms, but not the individual atoms themselves,” Zwierlein notes. “It’s like seeing a cloud in the sky, but not the individual water molecules that make up the cloud.”

He and his colleagues took a very different approach in order to directly image atoms interacting in free space. Their technique, called “atom-resolved microscopy,” involves first corralling a cloud of atoms in a loose trap formed by a laser beam. This trap contains the atoms in one place where they can freely interact. The researchers then flash on a lattice of light, which freezes the atoms in their positions. Then, a second laser illuminates the suspended atoms, whose fluorescence reveals their individual positions.

“The hardest part was to gather the light from the atoms without boiling them out of the optical lattice,” Zwierlein says. “You can imagine if you took a flamethrower to these atoms, they would not like that. So, we’ve learned some tricks through the years on how to do this. And it’s the first time we do it in-situ, where we can suddenly freeze the motion of the atoms when they’re strongly interacting, and see them, one after the other. That’s what makes this technique more powerful than what was done before.”

Bunches and pairs

The team applied the imaging technique to directly observe interactions among both bosons and fermions. Photons are an example of a boson, while electrons are a type of fermion. Atoms can be bosons or fermions, depending on their total spin, which is determined by whether the total number of their protons, neutrons, and electrons is even or odd. In general, bosons attract, whereas fermions repel.
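The parity rule stated above can be checked directly: an atom whose total count of protons, neutrons, and electrons is even is a boson; an odd total makes it a fermion. The isotopes below match the atoms mentioned in this article (bosonic sodium, fermionic lithium).

```python
# Quick check of the parity rule: even total particle count -> boson,
# odd -> fermion.
def particle_class(protons, neutrons, electrons):
    total = protons + neutrons + electrons
    return "boson" if total % 2 == 0 else "fermion"

# Sodium-23 (the species used for the Bose-Einstein condensate work):
# 11 protons + 12 neutrons + 11 electrons = 34, an even total.
print(particle_class(11, 12, 11))   # boson

# Lithium-6: 3 + 3 + 3 = 9, an odd total.
print(particle_class(3, 3, 3))      # fermion
```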

Zwierlein and his colleagues first imaged a cloud of bosons made up of sodium atoms. At low temperatures, a cloud of bosons forms what’s known as a Bose-Einstein condensate — a state of matter where all bosons share one and the same quantum state. MIT’s Ketterle was one of the first to produce a Bose-Einstein condensate, of sodium atoms, for which he shared the 2001 Nobel Prize in Physics.

Zwierlein’s group now is able to image the individual sodium atoms within the cloud, to observe their quantum interactions. It has long been predicted that bosons should “bunch” together, having an increased probability to be near each other. This bunching is a direct consequence of their ability to share one and the same quantum mechanical wave. This wave-like character was first predicted by physicist Louis de Broglie. It is the “de Broglie wave” hypothesis that in part sparked the beginning of modern quantum mechanics.

“We understand so much more about the world from this wave-like nature,” Zwierlein says. “But it’s really tough to observe these quantum, wave-like effects. However, in our new microscope, we can visualize this wave directly.”

In their imaging experiments, the MIT team were able to see, for the first time in situ, bosons bunch together as they shared one quantum, correlated de Broglie wave. The team also imaged a cloud of two types of lithium atoms. Each type of atom is a fermion that naturally repels its own kind but can strongly interact with other particular fermion types. As they imaged the cloud, the researchers observed that indeed, the opposite fermion types did interact and formed fermion pairs — a coupling that they could directly see for the first time.

“This kind of pairing is the basis of a mathematical construction people came up with to explain experiments. But when you see pictures like these, it’s showing in a photograph, an object that was discovered in the mathematical world,” says study co-author Richard Fletcher. “So it’s a very nice reminder that physics is about physical things. It’s real.”

Going forward, the team will apply their imaging technique to visualize more exotic and less understood phenomena, such as “quantum Hall physics” — situations when interacting electrons display novel correlated behaviors in the presence of a magnetic field.

“That’s where theory gets really hairy — where people start drawing pictures instead of being able to write down a full-fledged theory because they can’t fully solve it,” Zwierlein says. “Now we can verify whether these cartoons of quantum Hall states are actually real. Because they are pretty bizarre states.”

This work was supported, in part, by the National Science Foundation through the MIT-Harvard Center for Ultracold Atoms, as well as by the Air Force Office of Scientific Research, the Army Research Office, the Department of Energy, the Defense Advanced Research Projects Agency, a Vannevar Bush Faculty Fellowship, and the David and Lucile Packard Foundation.


The age-old problem of long-term care

Informal help is a huge share of elder care in U.S., a burden that is only set to expand. A new book explores different countries’ solutions.


Caring well for the elderly is a familiar challenge. Some elderly people need close medical attention in facilities; others struggle with reduced capabilities while not wanting to leave their homes. For families, finding good care is hard and expensive, and already-burdened family members often pick up the slack.

The problem is expanding as birthrates drop while some segments of the population live longer, meaning that a growing portion of the population is elderly. In the U.S., there are currently three states where at least 20 percent of the population is 65 and older. (Yes, Florida is one.) But by 2050, demographic trends suggest, there will be 43 states with that profile.

In age terms, “America is becoming Florida,” quips MIT economist Jonathan Gruber. “And it’s not just America. The whole world is aging rapidly. The share of the population over 65 is growing rapidly everywhere, and within that, the share of the elderly that are over 85 is growing rapidly.”

In a new edited volume, Gruber and several other scholars explore the subject from a global perspective. The book, “Long-Term Care around the World,” is published this month by the University of Chicago Press. The co-editors are Gruber, the Ford Professor of Economics and chair of the Department of Economics at MIT; and Kathleen McGarry, a professor of economics at Stony Brook University.

The book looks at 10 relatively wealthy countries and how they approach the problem of long-term care. In their chapter about the U.S., Gruber and McGarry emphasize a remarkable fact: About one-third of long-term care for the elderly in the U.S. is informal, provided by family and friends, despite limited time and resources. Overall, long-term care is 2 percent of U.S. GDP.

“We have two fundamental long-term care problems in the U.S.,” Gruber says. “Too much informal care at home, and, relatedly, not enough options for elders to live with effective care in ‘congregate housing’ [or elder communities], even if they’re not sick enough for a nursing facility.”

The nature of the problem

The needs of the elderly sit in plain sight. In the U.S., about 30 percent of people 65 and over, and 60 percent of people 85 and over, report limitations in basic activities. Getting dressed and taking baths are among the most common daily problems; shopping for groceries and managing money are also widely reported issues. Additionally, these limitations have mental health implications. About 10 percent of the elderly report depression, rising to 30 percent among those who struggle with three or more types of basic daily tasks.

Even so, the U.S. is not actually heavily dotted with nursing homes. In a country of about 330 million people, with 62 million being 65 and over, it’s unusual for an elderly person to be in one.

“We all think of nursing homes as where you go when you’re old, but there are only about 1.2 million people in nursing homes in America,” Gruber observes. “Which is a lot, but tiny compared to the share of people who are elderly in the U.S. and who have needs. Most people who have needs get them met at home.”

And while nursing homes can be costly, home care is too. Given an average U.S. salary of $23 per hour for a home health care aide, annual costs can reach six figures even with half-time care. As a result, many families simply help their elderly relatives as best they can.
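The six-figure claim is easy to verify with back-of-envelope arithmetic, assuming "half-time" care means roughly 12 hours a day (an interpretation, not a figure from the book).

```python
# Back-of-envelope check of the home-care cost figure: at $23/hour,
# 12 hours a day of care crosses six figures within a year.
hourly_wage = 23.0     # average U.S. home health aide wage, per the article
hours_per_day = 12     # assumed "half-time" coverage
annual_cost = hourly_wage * hours_per_day * 365
print(f"${annual_cost:,.0f}")
```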

Therefore, Gruber has found, we must account for the informal costs of elder care, too. Ultimately, Gruber says, informal help represents “an inefficient system of people taking care of their elderly parents at home, which is a stress on the family, and the elders don’t get enough care.”

To be sure, some people buy private long-term care insurance to defray these costs. But this is a tricky market, where insurers are concerned about “adverse selection”: people buying policies precisely because they have a distinct need for them (beyond what insurers can detect). Premiums can therefore seem high for limited, conditional benefits. Research by MIT economist Amy Finkelstein has shown that only 18 percent of long-term care insurance policies are used.

“Private long-term care insurance is a market that just hasn’t worked well,” Gruber says. “It’s basically a fixed amount of money, should you meet certain conditions. And people are surprised by that, and it doesn’t meet their needs, and it’s expensive. We need a public solution.”

Congregate housing, a possible solution

Looking at long-term care internationally helps identify what those solutions might be. The U.S. does not neglect elder care, but could clearly broaden its affordable options.

“On the one hand, what jumped out at me is how normal the U.S. is,” Gruber says. “We’re in the middle of the pack in terms of the share of GDP we spend on long-term care.” However, some European countries that spend a similar share and also rely heavily on informal elder care, including Italy and Spain, have notably lower levels of GDP per capita.

Some other European countries with income levels closer to the U.S., including Germany and the Netherlands, do spend more on long-term elder care. The Netherlands tops the list by devoting about 4 percent of its GDP to this area.

In the U.S., however, the issue is not so much how much is spent on long-term elder care as how it is spent. The Dutch have a relatively extensive system of elder communities — the “congregate housing” for the elderly who are not desperately unwell but simply find self-reliance increasingly hard.

“That’s the huge missing hole in the U.S. long-term care system, what do we do with people who aren’t sick enough for a nursing home, but probably shouldn’t be at home,” Gruber says. “Right now they stay at home, they’re lonely, they’re not getting services, their kids are super-stressed out, and they’re pulling millions of people out of the labor force, especially women. Everyone is unhappy about it, and they’re not growing GDP, so it’s hurting our economy and our well-being.”

Overall, then, Gruber thinks further investment in elder-care communities would be an example of effective government spending that can address the brewing crisis in long-term care — although it would require new federal legislation in a highly polarized political environment.

Could that happen? Could the U.S. invest more now and realize long-term financial benefits, while allowing working-age employees to spend more time at their jobs rather than acting as home caregivers? Making people more aware of the issue, Gruber thinks, is a necessary starting point.

“If anything might be bipartisan, it could be long-term care,” Gruber says. “Everybody has parents. A solution has to be bipartisan. Long-term care may be one of those areas where it’s possible.”

Support for the research was provided, in part, by the National Institute on Aging.


Radar and communications system extends signal range at millimeter-wave frequencies

The system will support US Army missions.


A team from MIT Lincoln Laboratory has built and demonstrated the wideband selective propagation radar (WiSPR), a system capable of seeing out various distances at millimeter-wave (mmWave or MMW) frequencies. Typically, these high frequencies, which range from 30 to 300 gigahertz (GHz), are employed for only short-range operations. Using transmit-and-receive electronically scanned arrays of many antenna elements each, WiSPR produces narrow beams capable of quickly scanning around an area to detect objects of interest. The narrow beams can also be manipulated into broader beams for communications.

"Building a system with sufficient sensitivity to operate over long distances at these frequencies for radar and communications functions is challenging," says Greg Lyons, a senior staff member in the Airborne Radar Systems and Techniques Group, part of Lincoln Laboratory's ISR Systems and Technology R&D area. "We have many radar experts in our group, and we all debated whether such a system was even feasible. Much innovation is happening in the commercial sector, and we leveraged those advances to develop this multifunctional system."

The high signal bandwidth available at mmWave makes these frequencies appealing. Available licensed frequencies are quickly becoming overloaded, and harnessing mmWave frequencies frees up considerable bandwidth and reduces interference between systems. A high signal bandwidth is useful in a communications system to transmit more information, and in a radar system to improve range resolution (i.e., ability of radar to distinguish between objects in the same angular direction but at different distances from the radar).
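The link between bandwidth and range resolution described above follows a standard radar relation, delta_R = c / (2B). The bandwidths below are illustrative only; the article does not give WiSPR's actual figures.

```python
# Range resolution improves (shrinks) as signal bandwidth grows:
#   delta_R = c / (2 * B)
# Example bandwidths are assumptions, not WiSPR specifications.
C = 3.0e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    return C / (2.0 * bandwidth_hz)

for bw in (100e6, 1e9, 4e9):
    print(f"{bw/1e9:4.1f} GHz bandwidth -> "
          f"{range_resolution_m(bw)*100:.1f} cm resolution")
```

A 1 GHz bandwidth resolves objects 15 cm apart in range, which is why the wide spectrum available at mmWave is so attractive.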

The phases for success

In 2019, the laboratory team set out to assess the feasibility of their mmWave radar concept. Using commercial off-the-shelf radio-frequency integrated circuits (RFICs), which are chips that send and receive radio waves, they built a fixed-beam system (only capable of staring in one direction, not scanning) with horn antennas. During a demonstration on a foggy day at Joint Base Cape Cod, the proof-of-concept system successfully detected calibration objects at unprecedented ranges.  

"How do you build a prototype for what will eventually be a very complicated system?" asks program manager Christopher Serino, an assistant leader of the Airborne Radar Systems and Techniques Group. "From this feasibility testing, we showed that such a system could actually work, and identified the technology challenges. We knew those challenges would require innovative solutions, so that's where we focused our initial efforts."

WiSPR is based on multiple-element antenna arrays. Whether serving a radar or communications function, the arrays are phased, which means the phase between each antenna element is adjusted. This adjustment ensures all phases add together to steer the narrow beams in the desired direction. With this configuration of multiple elements phased up, the antenna becomes more directive in sending and receiving energy toward one location. (Such phased arrays are becoming ubiquitous in technologies like 5G smartphones, base stations, and satellites.)
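The per-element phase adjustment described above can be written down for a uniform linear array: to steer toward angle theta off boresight, element n is shifted by phi_n = 2*pi*n*(d/lambda)*sin(theta). The element count and spacing below are generic textbook values, not WiSPR's design parameters.

```python
# Phase shifts that steer a uniform linear array toward a chosen angle.
# Illustrative parameters only; WiSPR's array geometry is not public.
import math

def steering_phases(n_elements, spacing_over_lambda, theta_deg):
    """Return the phase (radians) applied to each element so the
    per-element contributions add coherently toward theta_deg."""
    theta = math.radians(theta_deg)
    return [2 * math.pi * n * spacing_over_lambda * math.sin(theta)
            for n in range(n_elements)]

# 8 elements at half-wavelength spacing, steered 30 degrees off boresight:
phases = steering_phases(8, 0.5, 30.0)
for n, phi in enumerate(phases):
    print(f"element {n}: {math.degrees(phi) % 360:6.1f} deg")
```

Because the phases are set electronically, the beam can be repointed with effectively zero mechanical latency, which is what lets the system scan, track, and communicate without moving antennas.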

To enable the tiny beams to continuously scan for objects, the team custom-built RFICs using state-of-the-art semiconductor technology and added digital capabilities to the chips. By controlling the behavior of these chips with custom firmware and software, the system can search for an object and, after the object is found, keep it in "track" while the search for additional objects continues — all without physically moving antennas or relying on an operator to tell the system what to do next.

"Phasing up elements in an array to get gain in a particular direction is standard practice," explains Deputy Program Manager David Conway, a senior staff member in the Integrated RF and Photonics Group. "What isn't standard is having this many elements with the RF at millimeter wavelengths still working together, still summing up their energy in transmit and receive, and capable of quickly scanning over very wide angles."

Line 'em up and cool 'em down

For the communications function, the team devised a novel beam alignment procedure.

"To be able to combine many antenna elements to have a radar reach out beyond typical MMW operating ranges — that's new," Serino says. "To be able to electronically scan the beams around as a radar with effectively zero latency between beams at these frequencies — that's new. Broadening some of those beams so you're not constantly reacquiring and repointing during communications — that's also new."

Another innovation key to WiSPR's development is a cooling arrangement that removes the large amount of heat dissipated in a small area behind the transmit elements, each of which has its own power amplifier.

Last year, the team demonstrated their prototype WiSPR system at the U.S. Army Aberdeen Proving Ground in Maryland, in collaboration with the U.S. Army Rapid Capabilities and Critical Technologies Office and the U.S. Army Test and Evaluation Command. WiSPR technology has since been transitioned to a vendor for production. By adopting WiSPR, Army units will be able to conduct their missions more effectively.

"We're anticipating that this system will be used in the not-too-distant future," Lyons says. "Our work has pushed the state of the art in MMW radars and communication systems for both military and commercial applications."

"This is exactly the kind of work Lincoln Laboratory is proud of: keeping an eye on the commercial sector and leveraging billions of dollars in commercial investment to build new technology, rather than starting from scratch," says Lincoln Laboratory assistant director Marc Viera.

This effort supported the U.S. Army Rapid Capabilities and Critical Technologies Office. The team consists of additional members from the laboratory's Airborne Radar Systems and Techniques, Integrated RF and Photonics, Mechanical Engineering, Advanced Capabilities and Systems, Homeland Protection Systems, and Transportation Safety and Resilience groups.


Novel AI model inspired by neural dynamics from the brain

New type of “state-space model” leverages principles of harmonic oscillators.


Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel artificial intelligence model inspired by neural oscillations in the brain, with the goal of significantly advancing how machine learning algorithms handle long sequences of data.

AI often struggles with analyzing complex information that unfolds over long periods of time, such as climate trends, biological signals, or financial data. One new class of AI models, called "state-space models," has been designed specifically to understand these sequential patterns more effectively. However, existing state-space models often face challenges — they can become unstable or require a significant amount of computational resources when processing long data sequences.

To address these issues, CSAIL researchers T. Konstantin Rusch and Daniela Rus have developed what they call “linear oscillatory state-space models” (LinOSS), which leverage principles of forced harmonic oscillators — a concept deeply rooted in physics and observed in biological neural networks. This approach provides stable, expressive, and computationally efficient predictions without overly restrictive conditions on the model parameters.
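The oscillator idea can be illustrated with a toy recurrence in which each hidden channel is a forced harmonic oscillator stepped with a stability-preserving scheme. This is a hedged sketch of the principle only, not the discretization published in the LinOSS paper:

```python
import numpy as np

# Toy illustration of the oscillatory state-space idea: each hidden channel
# behaves like a forced harmonic oscillator x'' = -a*x + (b @ u)(t), kept as
# a (position, velocity) pair and stepped with a semi-implicit (symplectic)
# update that stays stable for any stiffness a >= 0 and small enough dt.

def oscillator_ssm(u, a, b, dt=0.1):
    """u: inputs (T, d_in); a: per-channel stiffness (d,); b: (d, d_in)."""
    T = len(u)
    x = np.zeros(len(a))   # oscillator positions (the hidden state)
    v = np.zeros(len(a))   # oscillator velocities
    ys = np.empty((T, len(a)))
    for t in range(T):
        v = v + dt * (-a * x + b @ u[t])  # velocity first (semi-implicit)
        x = x + dt * v                    # then position, using new velocity
        ys[t] = x
    return ys

rng = np.random.default_rng(0)
u = rng.standard_normal((1000, 3))             # long input sequence
a = np.array([1.0, 4.0, 9.0])                  # per-channel frequencies
y = oscillator_ssm(u, a, b=0.1 * rng.standard_normal((3, 3)))
print(np.isfinite(y).all())  # no blow-up over 1,000 steps
```

The point of the sketch is the stability property: because each channel's energy is controlled by the oscillator dynamics rather than by carefully tuned eigenvalues, the state stays bounded over very long sequences without restrictive parameter constraints.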

"Our goal was to capture the stability and efficiency seen in biological neural systems and translate these principles into a machine learning framework," explains Rusch. "With LinOSS, we can now reliably learn long-range interactions, even in sequences spanning hundreds of thousands of data points or more."

The LinOSS model is unique in ensuring stable predictions while requiring far less restrictive design choices than previous methods. Moreover, the researchers rigorously proved the model’s universal approximation capability, meaning it can approximate any continuous, causal function relating input and output sequences.

Empirical testing demonstrated that LinOSS consistently outperformed existing state-of-the-art models across various demanding sequence classification and forecasting tasks. Notably, LinOSS outperformed the widely used Mamba model by nearly a factor of two in tasks involving sequences of extreme length.

Recognized for its significance, the research was selected for an oral presentation at ICLR 2025 — an honor awarded to only the top 1 percent of submissions. The MIT researchers anticipate that the LinOSS model could significantly impact any fields that would benefit from accurate and efficient long-horizon forecasting and classification, including health-care analytics, climate science, autonomous driving, and financial forecasting.

"This work exemplifies how mathematical rigor can lead to performance breakthroughs and broad applications," Rus says. "With LinOSS, we’re providing the scientific community with a powerful tool for understanding and predicting complex systems, bridging the gap between biological inspiration and computational innovation."

The team expects that the emergence of a new paradigm like LinOSS will give machine learning practitioners a foundation to build upon. Looking ahead, the researchers plan to apply their model to an even wider range of data modalities. Moreover, they suggest that LinOSS could provide valuable insights into neuroscience, potentially deepening our understanding of the brain itself.

Their work was supported by the Swiss National Science Foundation, the Schmidt AI2050 program, and the U.S. Department of the Air Force Artificial Intelligence Accelerator.


TeleAbsence: Poetic encounters with the past

MIT researchers lay out the design principles behind the TeleAbsence vision and how it could help people cope with loss and plan for how they might be remembered.


In the dim light of the lab, friends, family, and strangers watched the image of a pianist playing for them, the pianist’s fingers projected onto the moving keys of a real grand piano that filled the space with music.

Watching the ghostly musicians, faces and bodies blurred at their edges, several listeners shared one strong but strange conviction: “feeling someone’s presence” while “also knowing that I am the only one in the room.”

“It’s tough to explain,” another listener said. “It felt like they were in the room with me, but at the same time, not.”

That presence of absence is at the heart of TeleAbsence, a project by the MIT Media Lab’s Tangible Media group that focuses on technologies that create illusory communication with the dead and with past selves.

But rather than a “Black Mirror”-type scenario of synthesizing literal loved ones, the project led by Hiroshi Ishii, the Jerome B. Wiesner Professor of Media Arts and Sciences, instead seeks what it calls “poetic encounters” that reach across time and memory.

The project recently published a positioning paper in PRESENCE: Virtual and Augmented Reality that presents the design principles behind TeleAbsence, and how it could help people cope with loss and plan for how they might be remembered.

The phantom pianists of the MirrorFugue project, created by Tangible Media graduate Xiao Xiao ’09, SM ’11, PhD ’16, are one of the best-known examples of the project. On April 30, Xiao, now director and principal investigator at the Institute for Future Technologies of Da Vinci Higher Education in Paris, shared results from the first experimental study of TeleAbsence through MirrorFugue at the 2025 CHI conference on Human Factors in Computing Systems in Yokohama, Japan.

When Ishii spoke about TeleAbsence at the XPANSE 2024 conference in Abu Dhabi, “about 20 people came up to me after, and all of them told me they had tears in their eyes … the talk reminded them about a wife or a father who passed away,” he says. “One thing is clear: They want to see them again and talk to them again, metaphorically.”

Messages in bottles

As the director of the Tangible Media group, Ishii has been a world leader in telepresence, using technologies to connect people over physical distance. But when his mother died in 1998, Ishii says the pain of the loss prompted him to think about how much we long to connect across the distance of time.

His mother wrote poetry, and one of his first experiments in TeleAbsence was the creation of a Twitterbot that would post snippets of her poetry. Others watching the account online were so moved that they began posting photos of flowers to the feed to honor the mother and son.

“That was a turning point for TeleAbsence, and I wanted to expand this concept,” Ishii says.

Illusory communication, like the posted poems, is one key design principle of TeleAbsence. Even though users know the “conversation” is one-way, the researchers write, it can be comforting and cathartic to have a tangible way to reach out across time.

Finding ways to make memories material is another important design principle. One of the projects created by Ishii and colleagues is a series of glass bottles, reminiscent of the soy sauce bottles Ishii’s mother used while cooking. Open one of the bottles, and the sounds of chopping, of sizzling onions, of a radio playing quietly in the background, of a maternal voice, reunite a son with his mother.

Ishii says sight and sound are the primary modalities of TeleAbsence technologies for now, because although the senses of touch, smell, and taste are known to be powerful memory triggers, “it is a very big challenge to record that kind of multimodal moment.”

At the same time, one of the other pillars of TeleAbsence is the presence of absence. These are the physical markers, or traces, of a person that serve to remind us both of the person and that the person is gone. One of the most powerful examples, the researchers write, is the permanent “shadow” of Hiroshima resident Mitsuno Ochi, her silhouette transferred to stone steps 260 meters from where the atomic bomb detonated in 1945.

“Abstraction is very important,” Ishii says. “We want something to recall a moment, not physically recreate it.”

With the bottles, for instance, people have asked Ishii and his colleagues whether it might be more evocative to fill them with a perfume or drink. “But our philosophy is to make a bottle completely empty,” he explains. “The most important thing is to let people imagine, based on the memory.”

Other important design principles within TeleAbsence include traces of reflection — the ephemera of faint pen scratches and blotted ink on a preserved letter, for instance — and the concept of remote time. TeleAbsence should go beyond dredging up a memory of a loved one, the researchers insist, and should instead produce a sense of being transported to spend a moment in the past with them.

Time travelers

For Xiao, who has played the piano her whole life, MirrorFugue is a “deeply personal project” that allowed her to travel to a time in her childhood that was almost lost to her.

Her parents moved from China to the United States when she was a baby — but it took eight years for Xiao to follow. “The piano, in a sense, was almost like my first language,” she recalls. “And then when I moved to America, my brain overwrote bits of my childhood where my operating system used to be in Chinese, and now it’s very much in English. But throughout this whole time, music and the piano stayed constant.”

MirrorFugue’s “sense of kind-of being there and not being there, and the wish to connect with oneself from the past, comes from my own desire to connect with my own past self,” she adds.

The new MirrorFugue study puts some empirical data behind the concept of TeleAbsence, she says. Its 28 participants were fitted with sensors to measure changes in their heart rate and hand movements during the experience. They were extensively interviewed about their perceptions and emotions afterward. The recorded images came from pianists ranging in experience from children early in their lessons to professional pianists like the late Ryuichi Sakamoto.

The researchers found that emotional experiences described by the listeners were significantly influenced by whether the listeners knew the pianist, as well as whether the pianist was known by the listeners to be alive or dead.

Some participants placed their own hands alongside the ghosts to play impromptu duets. One daughter, who said she had not paid close attention to her father’s playing when he was alive, was newly impressed by his talent. One person felt empathy watching his past self struggle through a new piece of music. A young girl, mouth slightly open in concentration and fingers small on the keys, showed her mother a past daughter that wasn’t possible to see in old photos.

The longing for past people and past selves can be “a deep sadness that will never go away,” says Xiao. “You’ll always carry it with you, but it also makes you sensitive to certain aesthetic experiences that’s also beautiful.”

“Once you’ve had that experience, it really resonates,” she adds. “And I think that’s why TeleAbsence resonates with so many people.”

Uncanny valleys and curated memory

Acutely aware of the potential ethical dangers of their research, the TeleAbsence scientists have worked with grief researchers and psychologists to better understand the implications of building these bridges through time.

For instance, “one thing we learned is that it depends on how long ago a person passed away,” says Ishii. “Right after death, when it’s very difficult for many people, this representation matters. But you have to make important informed decisions about whether this drags out the grief too long.”

TeleAbsence could comfort the dying, he says, by “knowing there is a means by which they are going to live on for their descendants.” He encourages people to consider curating “high-quality, condensed information,” such as their social media posts, that could be used for this purpose.

“But of course many families do not have ideal relationships, so I can easily think of the case where a descendant might not have any interest” in interacting with their ancestors through TeleAbsence, Ishii notes.

TeleAbsence should never fully recreate or generate new content for a loved one, he insists, pointing to the rise of “ghost bot” startups, companies that collect data on a person to create an “artificial, generative AI-based avatar that speaks what they never spoke, or do gestures or facial expressions.”

A recent viral video of a mother in Korea “reunited” in virtual reality with an avatar of her dead daughter, Ishii says, made him “very depressed, because they’re doing grief as entertainment, consumption for an audience.”

Xiao thinks there might still be some role for generative AI in the TeleAbsence space. She is writing a research proposal for MirrorFugue that would include representations of past pianists. “I think right now we’re getting to the point with generative AI that we can generate hand movements and we can transcribe the MIDI from the audio so that we can conjure up Franz Liszt or Mozart or somebody, a really historical figure.”

“Now of course, it gets a little bit tricky, and we have discussed this, the role of AI and how to avoid the uncanny valley, how to avoid deceiving people,” she says. “But from a researcher’s perspective, it actually excites me a lot, the possibility to be able to empirically test these things.”

The importance of emptiness

Along with Ishii’s mother, the PRESENCE paper was also dedicated “in loving memory” to Elise O’Hara, a beloved Media Lab administrative assistant who worked with Tangible Media until her unexpected death in 2023. Her presence — and her absence — are felt deeply every day, says Ishii.

He wonders if TeleAbsence could someday become a common word “to describe something that was there, but is now gone.”

“When there is a place on a bookshelf where a book should be,” he says, “my students say, ‘oh, that’s a teleabsence.’”

Like a sudden silence in the middle of a song, or the empty white space of a painting, emptiness can hold important meaning. It’s an idea that we should make more room for in our lives, Ishii says.

“Because now we’re so busy, so many notification messages from your smartphone, and we are all distracted, always,” he suggests. “So emptiness and impermanence, presence of absence, if those concepts can be accepted, then people can think a bit more poetically.”


Study of facial bacteria could lead to probiotics that promote healthy skin

During the early teen years, many new strains of C. acnes colonize the skin on our faces. This could be an optimal time for probiotic treatment.


The composition of bacterial populations living on our faces plays a significant role in the development of acne and other skin conditions such as eczema. Two species of bacteria predominate in most people, but how they interact with each other, and how those interactions may contribute to disease, has been difficult to study.

MIT researchers have now revealed the dynamics of those interactions in more detail than previously possible, shedding light on when and how new bacterial strains emerge on the skin of the face. Their findings could help guide the development of new treatments for acne and other conditions, and may also help to optimize the timing of such treatments.

The researchers found that many new strains of Cutibacterium acnes, a species believed to contribute to the development of acne, are acquired during the early teenage years. But after that, the makeup of these populations becomes very stable and doesn’t change much even when exposed to new strains.

That suggests that this transitional stage could be the best window for introducing probiotic strains of C. acnes, says Tami Lieberman, an associate professor of civil and environmental engineering, a member of MIT’s Institute for Medical Engineering and Science, and the senior author of the study.

“We found that there are some surprising dynamics, and these dynamics provide insights for how to design probiotic therapy,” Lieberman says. “If we had a strain that we knew could prevent acne, these results would suggest we should make sure we apply them early during the transition to adulthood, to really get them to engraft.”

Jacob Baker PhD ’24, who is now the chief scientific officer at Taxa Technologies, is the lead author of the paper, which appears today in Cell Host and Microbe. Other authors include MIT graduate student Evan Qu, MIT postdoc Christopher Mancuso, Harvard University graduate student A. Delphine Tripp, and former MIT postdoc Arolyn Conwill PhD ’18.

Microbial dynamics

Although C. acnes has been implicated in the development of acne, it is still unclear exactly why acne develops in some people but not others — it may be that some strains are more likely to cause skin inflammation, or there may be differences in how the host immune system responds to the bacteria, Lieberman says. There are probiotic strains of C. acnes now available, which are thought to help prevent acne, but the benefits of these strains have not been proven.

Along with C. acnes, the other predominant bacterium found on the face is Staphylococcus epidermidis. Together, these two species make up about 80 percent of the adult facial skin microbiome. Both species exist in different strains, or lineages, that vary by a small number of genetic mutations. However, until now, researchers had not been able to accurately measure this diversity or track how it changes over time.

Learning more about those dynamics could help researchers answer key questions relevant to developing new probiotic treatments for acne: How easy is it for new lineages to establish themselves on the skin, and when is the best time to introduce them?

To study these population shifts, the researchers had to measure how individual cells evolve over time. To do that, they began by obtaining microbiome samples from 30 children at a Boston-area school and from 27 of their parents. Studying members of the same family enabled the researchers to analyze the likelihood of different lineages being transferred between people in close contact.

For about half of the individuals, the researchers were able to take samples at multiple time points, and for the rest, only once. For each sample, they isolated individual cells and grew them into colonies, then sequenced their genomes.

This allowed the researchers to learn how many lineages were found on each person, how they changed over time, and how different the cells within each lineage were. From that information, the researchers could infer what had happened to those lineages in the recent past and how long they had been present on the individual.

Overall, the researchers identified a total of 89 C. acnes lineages and 78 S. epidermidis lineages, with up to 11 of each found in each person’s microbiome. Previous work had suggested that in each person’s facial skin microbiome, lineages of these two skin bacteria remain stable over long periods of time, but the MIT team found that these populations are actually more dynamic than previously thought.

“We wanted to know if these communities were truly stable, and if there could be times when they weren’t stable. In particular, if the transition to an adult-like skin microbiome would have a higher rate of acquisition of new lineages,” Lieberman says.

During the early teens, an increase in hormone production results in increased oil on the skin, which is a good food source for bacteria. It has previously been shown that during this time, the density of bacteria on the skin of the face increases by about 10,000-fold. In this study, the researchers found that while the composition of C. acnes populations tended to remain very stable over time, the early teenage years present an opportunity for many more lineages of C. acnes to appear.

“For C. acnes, what we were able to show was that people do get strains throughout life, but very rarely,” Lieberman says. “We see the highest rate of influx when teenagers are transitioning to a more adult-like skin microbiome.”

The findings suggest that for topical probiotic treatments for acne, the best time to apply them is during the early teenage years, when there could be more opportunity for probiotic strains to become established.

Population turnover

Later in adulthood, there is a little bit of sharing of C. acnes strains between parents living in the same household, but the rate of turnover in any individual person’s microbiome is still very low, Lieberman says.

The researchers found that S. epidermidis has a much higher turnover rate than C. acnes — each S. epidermidis strain lives on the face for an average of less than two years. However, there was not very much overlap in the S. epidermidis lineages shared by members of the same household, suggesting that transfer of strains between people is not causing the high turnover rate.

“That suggests that something is preventing homogenization between people,” Lieberman says. “It could be host genetics or host behavior, or people using different topicals or different moisturizers, or it could be active restriction of new migrants from the bacteria that are already there at that moment.”

Now that they’ve shown that new C. acnes strains can be acquired during the early teenage years, the researchers hope to study whether the timing of this acquisition affects how the immune system responds to them. They also hope to learn more about how people maintain such different microbiome populations even when exposed to new lineages through close contact with family members.

“We want to understand why we each have unique strain communities despite the fact that there is this constant accessibility and high turnover, specifically for S. epidermidis,” Lieberman says. “What’s driving this constant turnover in S. epidermidis, and what are the implications of these new colonizations for acne during adolescence?”

The research was funded by the MIT Center for Microbiome Informatics and Therapeutics, a Smith Family Foundation Award for Excellence in Biomedical Research, and the National Institutes of Health.


Making AI models more trustworthy for high-stakes settings

A new method helps convey uncertainty more precisely, which could give researchers and medical clinicians better information to make decisions.


The ambiguity in medical imaging can present major challenges for clinicians who are trying to identify disease. For instance, in a chest X-ray, pleural effusion, an abnormal buildup of fluid in the lungs, can look very much like pulmonary infiltrates, which are accumulations of pus or blood.

An artificial intelligence model could assist the clinician in X-ray analysis by helping to identify subtle details and boosting the efficiency of the diagnosis process. But because so many possible conditions could be present in one image, the clinician would likely want to consider a set of possibilities, rather than only having one AI prediction to evaluate.

One promising way to produce a set of possibilities, called conformal classification, is convenient because it can be readily implemented on top of an existing machine-learning model. However, it can produce sets that are impractically large. 

MIT researchers have now developed a simple and effective improvement that can reduce the size of prediction sets by up to 30 percent while also making predictions more reliable.

Having a smaller prediction set may help a clinician zero in on the right diagnosis more efficiently, which could improve and streamline treatment for patients. This method could be useful across a range of classification tasks — say, for identifying the species of an animal in an image from a wildlife park — as it provides a smaller but more accurate set of options.

“With fewer classes to consider, the sets of predictions are naturally more informative in that you are choosing between fewer options. In a sense, you are not really sacrificing anything in terms of accuracy for something that is more informative,” says Divya Shanmugam PhD ’24, a postdoc at Cornell Tech who conducted this research while she was an MIT graduate student.

Shanmugam is joined on the paper by Helen Lu ’24; Swami Sankaranarayanan, a former MIT postdoc who is now a research scientist at Lilia Biosciences; and senior author John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering at MIT and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the Conference on Computer Vision and Pattern Recognition in June.

Prediction guarantees

AI assistants deployed for high-stakes tasks, like classifying diseases in medical images, are typically designed to produce a probability score along with each prediction so a user can gauge the model’s confidence. For instance, a model might predict that there is a 20 percent chance an image corresponds to a particular diagnosis, like pleurisy.

But it is difficult to trust a model’s predicted confidence because much prior research has shown that these probabilities can be inaccurate. With conformal classification, the model’s prediction is replaced by a set of the most probable diagnoses along with a guarantee that the correct diagnosis is somewhere in the set.

But the inherent uncertainty in AI predictions often causes the model to output sets that are far too large to be useful.

For instance, if a model is classifying an animal in an image as one of 10,000 potential species, it might output a set of 200 predictions so it can offer a strong guarantee.

“That is quite a few classes for someone to sift through to figure out what the right class is,” Shanmugam says.
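The recipe that produces such guaranteed sets, known as split conformal classification, can be sketched as follows; the model probabilities here are synthetic and purely illustrative:

```python
import numpy as np

# Minimal sketch of split conformal classification. Calibration: score each
# held-out example by 1 - (probability assigned to its true class) and take
# roughly the (1 - alpha) quantile of those scores. Prediction: include
# every class whose score clears that threshold; on exchangeable data the
# true class then lands in the set with probability at least 1 - alpha.

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """cal_probs: (n, k) predicted probabilities; cal_labels: (n,) truths."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q = np.ceil((n + 1) * (1 - alpha)) / n   # finite-sample correction
    return np.quantile(scores, min(q, 1.0))

def prediction_set(probs, threshold):
    """All classes whose nonconformity score 1 - p is within threshold."""
    return np.where(1.0 - probs <= threshold)[0]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)       # toy calibration set
cal_labels = cal_probs.argmax(axis=1)                 # toy "true" labels
thr = conformal_threshold(cal_probs, cal_labels)
print(prediction_set(rng.dirichlet(np.ones(5)), thr))  # a set, not one label
```

Notice that the guarantee fixes the coverage level, not the set size: when the model is uncertain, many classes clear the threshold, which is exactly the bloat the MIT method targets.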

The technique can also be unreliable because tiny changes to inputs, like slightly rotating an image, can yield entirely different sets of predictions.

To make conformal classification more useful, the researchers applied test-time augmentation (TTA), a technique originally developed to improve the accuracy of computer vision models.

TTA creates multiple augmentations of a single image in a dataset, perhaps by cropping the image, flipping it, zooming in, etc. Then it applies a computer vision model to each version of the same image and aggregates its predictions.

“In this way, you get multiple predictions from a single example. Aggregating predictions in this way improves predictions in terms of accuracy and robustness,” Shanmugam explains.
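The aggregation step can be sketched as follows; the "model" and transforms below are toy stand-ins for a real computer vision pipeline:

```python
import numpy as np

# Sketch of test-time augmentation (TTA): run the model on several
# transformed copies of one image and average the class probabilities.

def tta_predict(model, image, transforms):
    """Average the model's predicted probabilities over augmented views."""
    probs = np.stack([model(t(image)) for t in transforms])
    return probs.mean(axis=0)

def toy_model(img):
    """Stand-in 3-class classifier: softmax over simple image statistics."""
    logits = np.array([img.mean(), img[:, 0].mean(), img[-1, :].mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

transforms = [
    lambda im: im,             # identity (the original view)
    lambda im: np.fliplr(im),  # horizontal flip
    lambda im: np.flipud(im),  # vertical flip
]

image = np.arange(16.0).reshape(4, 4)
p = tta_predict(toy_model, image, transforms)
print(p.round(3), p.sum())  # averaged distribution still sums to 1
```

Averaging over views smooths out the sensitivity to small input changes mentioned above, which is what lets the conformal step downstream issue the same guarantee with a smaller set.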

Maximizing accuracy

To apply TTA, the researchers hold out some labeled image data used for the conformal classification process. They learn to aggregate the augmentations on these held-out data, automatically augmenting the images in a way that maximizes the accuracy of the underlying model’s predictions.

Then they run conformal classification on the model’s new, TTA-transformed predictions. The conformal classifier outputs a smaller set of probable predictions for the same confidence guarantee.

“Combining test-time augmentation with conformal prediction is simple to implement, effective in practice, and requires no model retraining,” Shanmugam says.

Compared to prior work in conformal prediction across several standard image classification benchmarks, their TTA-augmented method reduced prediction set sizes by 10 to 30 percent across experiments.

Importantly, the technique achieves this reduction in prediction set size while maintaining the probability guarantee.

The researchers also found that, even though they are sacrificing some labeled data that would normally be used for the conformal classification procedure, TTA boosts accuracy enough to outweigh the cost of losing those data.

“It raises interesting questions about how we used labeled data after model training. The allocation of labeled data between different post-training steps is an important direction for future work,” Shanmugam says.

In the future, the researchers want to validate the effectiveness of such an approach in the context of models that classify text instead of images. To further improve the work, the researchers are also considering ways to reduce the amount of computation required for TTA.

This research is funded, in part, by the Wistron Corporation.


Studying work, life, and economics

Economics doctoral student Tishara Garg takes a novel approach to answering ambitious questions about big-push industrial policy and development.


For policymakers investigating the effective transition of an economy from agriculture to manufacturing and services, there are complex economic, institutional, and practical considerations. “Are certain regions trapped in an under-industrialization state?” asks Tishara Garg, an economics doctoral student at MIT. “If so, can government policy help them escape this trap and transition to an economy characterized by higher levels of industrialization and better-paying jobs?” 

Garg’s research focuses on trade, economic geography, and development. Her studies yielded the paper “Can Industrial Policy Overcome Coordination Failures: Theory and Evidence from Industrial Zones,” which investigates whether economic policy can shift an economy from an undesirable state to a desirable state. 

Garg’s work combines tools from industrial organization and numerical algebraic geometry. Her paper finds that regions in India with state-developed industrial zones are 38 percent more likely to shift from a low to high industrialization state over a 15-year period than those without such zones.  

The kinds of questions uncovered during her studies aren’t easily answered using standard technical and econometric tools, so she’s developing new ones. “One of my study’s main contributions is a methodological framework that draws on ideas from different areas,” she notes. “These tools not only help me study the question I want to answer, but are also general enough to help study a broader set of questions around multiple challenges.”

The new tools she’s developed, along with a willingness to engage with other disciplines, have helped her find innovative ways to approach these challenges and to take on new ones, an openness she says is actively encouraged at an institution like MIT.

“I benefited from having an open mind and learning different things,” she says.

“I was introduced to academia late”

Garg’s journey from Kaithal, India, to MIT wasn’t especially smooth, as societal pressures exerted a powerful influence. “The traditional path for someone like me is to finish school, enter an arranged marriage, and start a family,” she says. “But I was good at school and wanted to do more.” 

Garg, who hails from a background with limited access to information on career development opportunities, took to math early. “I chose business in high school because I planned to become an accountant,” she recalls. “My uncle was an accountant.”

While completing the business track in high school, she became interested in economics. “I didn’t know much about economics, but I came to enjoy it,” she says. Garg relishes deductive reasoning that begins with a set of assumptions and builds, step by step, toward a clear, well-defined conclusion. She especially enjoyed grappling with the arguments she found in textbooks. She continued to study economics as an undergraduate at the University of Delhi, and later earned her master’s from the Indian Statistical Institute. Doctoral study wasn’t an option until she made it one.

“It took me some time to convince my parents,” she says. She spent a year at a hedge fund before applying to economics doctoral programs in the United States and choosing MIT. “I was introduced to academia late,” she notes. “But my heart was being drawn to the academic path.”

Answering ambitious and important questions

Garg, who hadn’t left India before her arrival in Cambridge, Massachusetts, found the transition challenging. “There were new cultural norms, a language barrier, different foods, and no preexisting social network,” she says. Garg relied on friends and MIT faculty for support when she arrived in 2019. 

“When Covid hit, the department looked out for me,” she says. Garg recalls regular check-ins from a faculty advisor and the kind of camaraderie that can grow from shared circumstances, like Covid-related sheltering protocols. A world that forced her to navigate a new and unfamiliar reality helped reshape how she viewed herself. “Support from the community at MIT helped me grow in many ways,” she recalls. “I found my voice here.”

Once she began her studies, one of the major differences Garg found was the diversity of opinions in her field of inquiry. “At MIT, I could speak with students and faculty specializing in trade, development economics, industrial organization, macroeconomics, and more,” she says. “I had limited exposure to many of these subfields before coming to MIT.” 

She quickly found her footing, leaning heavily on both her past successes and the academic habits she developed during her studies in India. “I’m not a passive learner,” she says. “My style is active, critical, and engaged.”

Conducting her research exposes Garg to new ideas. She learned the value of exploring other disciplines’ approaches to problem-solving, which was encouraged and enabled at MIT. 

One of the classes she came to enjoy most was a course in industrial organization taught by Tobias Salz. “I had little familiarity with the material, and it was highly technical — but he taught it in such a clear and intuitive way that I found myself truly enjoying the class, even though it was held during the pandemic,” she recalls. That early experience laid the groundwork for future research. Salz went on to advise her dissertation, helping her engage with work she would build upon.

“Answering ambitious and important questions is what draws me to the work,” Garg says. “I enjoy learning, I enjoy the creative process of bringing different ideas together, and MIT’s environment has made it easy for me to pick up new things.”

Working with her advisors at MIT helped Garg formalize her research and appreciate the value of uncovering questions and developing approaches to answer them. Professor Abhijit Banerjee, an advisor and Nobel laureate, helped her understand the importance of appreciating different traditions while also staying true to her own way of thinking about a problem, she recalls. “That gave me the confidence to pursue the questions in ways that felt most compelling and personal to me,” she says, “even if they didn’t fit neatly into disciplinary boundaries.”

This encouragement, combined with the breadth of perspectives at MIT, pushed her to think creatively about research challenges and to look beyond traditional tools to discover solutions. “MIT’s faculty have helped me improve the way I think and refine my approach to this work,” she says.

Paying it forward

Garg, who will continue her research as a postdoc at Princeton University in the fall and begin her career as a professor at Stanford University in 2026, singles out her network of friends and advisors for special praise.

“From regular check-ins with my advisors to the relationships that help me find balance with my studies, the people at MIT have been invaluable,” she says. 

Garg is especially invested in mentorship opportunities available as a researcher and professor. “I benefited from the network of friends and mentors at MIT and I want to pay it forward — especially for women, and others from backgrounds like mine,” she says.

She cites the work of her advisors, David Atkin and Dave Donaldson — with whom she is also collaborating on research studying the incidence of economic distortions — as both major influences on her development and a key reason she’s committed to mentoring others. “They’ve been with me every step of the way,” she says.

Garg recommends keeping an open mind, above all. “Some of my students didn’t come from a math-heavy background and would restrict themselves or otherwise get discouraged from pursuing theoretical work,” she says. “But I always encouraged them to pursue their interests above all, even if it scared them.” 

The variety of ideas available in her area of inquiry still fascinates Garg, who’s excited about what’s next. “Don’t shy from big questions,” she says. “Explore the big idea.”


AI-enabled translations initiative empowers Ukrainian learners with new skills

Ukrainian students and collaborators provide high-quality translations of MIT OpenCourseWare educational resources.


With war continuing to disrupt education for millions of Ukrainian high school and college students, many are turning to online resources, including MIT OpenCourseWare, a part of MIT Open Learning offering educational materials from more than 2,500 MIT undergraduate and graduate courses.

For Ukrainian high school senior Sofiia Lipkevych and other students, MIT OpenCourseWare has provided valuable opportunities to take courses in key subject areas. However, while many Ukrainian students study English, many do not yet have sufficient command of the language to fully understand and use the often highly technical and complex OpenCourseWare content and materials.

“At my school, I saw firsthand how language barriers prevented many Ukrainian students from accessing world-class education,” says Lipkevych.

She was able to address this challenge as a participant in the Ukrainian Leadership and Technology Academy (ULTA), established by Ukrainian MIT students Dima Yanovsky and Andrii Zahorodnii. During summer 2024 at ULTA, Lipkevych worked on a browser extension that translated YouTube videos in real time. Since MIT OpenCourseWare was a main source of learning materials for students participating in ULTA, she was inspired to translate OpenCourseWare lectures directly and to make these translations widely available on the OpenCourseWare website and YouTube channel. She reached out to Professor Elizabeth Wood, founding director of the MIT Ukraine Program, who connected her with MIT OpenCourseWare Director Curt Newton.

Translations of MIT OpenCourseWare’s educational resources had been available since 2004, but those initial translations were conducted manually by several global partners, without the efficiencies of the latest artificial intelligence tools, and over time the programs couldn’t be sustained and shut down.

“We were thrilled to have this contact with ULTA,” says Newton. “We’ve been missing having a vibrant translation community, and we are excited to have a ‘phase 2’ of translations emerge.”

The ULTA team selected courses to translate based on demand among Ukrainian students, focusing on foundational subjects that are prerequisites for advanced learning — particularly those for which high-quality, Ukrainian-language materials are scarce. Starting with caption translations on videos of lectures, the team has translated the following courses so far: 18.06 (Linear Algebra), 2.003SC (Engineering Dynamics), 5.60 (Thermodynamics & Kinetics), 6.006 (Introduction to Algorithms), and 6.0001 (Introduction to Computer Science and Programming in Python). They also worked directly with Andy Eskenazi, a PhD student in the MIT Department of Aeronautics and Astronautics, to translate 16.002 (How to CAD Almost Anything - Siemens NX Edition).

The ULTA team developed multiple tools to help break language barriers. For MIT OpenCourseWare’s PDF content available through the ULTA program, they created a specialized tool that uses optical character recognition to recognize LaTeX in documents — such as problem sets and other materials — and then uses large language models to translate them, all while maintaining technical accuracy. The team built a glossary of technical terms used in the courses and their corresponding Ukrainian translations, to help ensure that the wording is correct and consistent. Each translation also undergoes human review to further ensure accuracy and high quality.
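A glossary check like the one described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the function name, data, and workflow are hypothetical, not the ULTA team's actual tooling.

```python
# Hypothetical sketch of a glossary-consistency check: flag English technical
# terms whose agreed Ukrainian translation is missing from a machine translation,
# so a human reviewer can inspect them before publication.

GLOSSARY = {
    # Example entries; the real glossary covers many more course-specific terms.
    "eigenvalue": "власне значення",
    "matrix": "матриця",
}

def enforce_glossary(source: str, translation: str, glossary: dict) -> list:
    """Return glossary terms present in the English source whose agreed
    Ukrainian rendering does not appear in the translation."""
    missing = []
    for en_term, uk_term in glossary.items():
        if en_term in source.lower() and uk_term.lower() not in translation.lower():
            missing.append(en_term)
    return missing

# A consistent translation produces no flags.
flags = enforce_glossary(
    "The matrix has one eigenvalue.",
    "Матриця має одне власне значення.",
    GLOSSARY,
)
```

In a pipeline like the one described, any flagged terms would simply be routed to the human review step rather than corrected automatically.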

For video content, the team initially created a browser extension that can translate YouTube video captions in real time. They ultimately collaborated with ElevenLabs, implementing its advanced AI dubbing editor, which preserves the original speaker’s tone, pace, and emotional delivery. The lectures are translated in the ElevenLabs dubbing editor, and the audio is then uploaded to the MIT OpenCourseWare YouTube channel.

The team is currently finalizing the translation of the audio for class 9.13 (The Human Brain), taught by MIT Professor Nancy Kanwisher, which Lipkevych says they selected for its interdisciplinary nature and appeal to a wide variety of learners.

This Ukrainian translation project highlights the transformative potential of the latest translation technologies, building upon a 2023 MIT OpenCourseWare experiment using the Google Aloud AI dubbing prototype on a few courses, including MIT Professor Patrick Winston’s How to Speak. The advanced capabilities of the dubbing editor used in this project are opening up possibilities for a much greater variety of language offerings throughout MIT OpenCourseWare materials.

“I expect that in a few years we’ll look back and see that this was the moment when things shifted for OpenCourseWare to be truly usable for the whole world,” says Newton.

Community-led language translations of MIT OpenCourseWare materials serve as a high-impact example of the power of OpenCourseWare’s Creative Commons licensing, which grants everyone the right to revise materials to suit their particular needs and redistribute those revisions to the world.

While there isn’t currently a way for users of the MIT OpenCourseWare platform to quickly identify which videos are available in which languages, MIT OpenCourseWare is working toward building this capability into its website, as well as expanding its number of offerings in different languages.

“This project represents more than just translation,” says Lipkevych. “We’re enabling thousands of Ukrainians to build skills that will be essential for the country’s eventual reconstruction. We’re also hoping this model of collaboration can be extended to other languages and institutions, creating a template for making high-quality education accessible worldwide.”


The MIT-Portugal Program enters Phase 4

New phase will support continued exploration of ideas and solutions in fields ranging from AI to nanotech to climate — with emphasis on educational exchanges and entrepreneurship.


Since its founding 19 years ago as a pioneering collaboration with Portuguese universities, research institutions and corporations, the MIT-Portugal Program (MPP) has achieved a slew of successes — from enabling 47 entrepreneurial spinoffs and funding over 220 joint projects between MIT and Portuguese researchers to training a generation of exceptional researchers on both sides of the Atlantic.

In March, with nearly two decades of collaboration under their belts, MIT and the Portuguese Science and Technology Foundation (FCT) signed an agreement that officially launches the program’s next chapter. Running through 2030, MPP’s Phase 4 will support continued exploration of innovative ideas and solutions in fields ranging from artificial intelligence and nanotechnology to climate change — both on the MIT campus and with partners throughout Portugal.  

“One of the advantages of having a program that has gone on so long is that we are pretty well familiar with each other at this point. Over the years, we’ve learned each other’s systems, strengths and weaknesses and we’ve been able to create a synergy that would not have existed if we worked together for a short period of time,” says Douglas Hart, MIT mechanical engineering professor and MPP co-director.

Hart and John Hansman, the T. Wilson Professor of Aeronautics and Astronautics at MIT and MPP co-director, are eager to take the program’s existing research projects further, while adding new areas of focus identified by MIT and FCT. Known as the Fundação para a Ciência e Tecnologia in Portugal, FCT is the national public agency supporting research in science, technology and innovation under Portugal’s Ministry of Education, Science and Innovation.

“Over the past two decades, the partnership with MIT has built a foundation of trust that has fostered collaboration among researchers and the development of projects with significant scientific impact and contributions to the Portuguese economy,” Fernando Alexandre, Portugal’s minister for education, science, and innovation, says. “In this new phase of the partnership, running from 2025 to 2030, we expect even greater ambition and impact — raising Portuguese science and its capacity to transform the economy and improve our society to even higher levels, while helping to address the challenges we face in areas such as climate change and the oceans, digitalization, and space.”

“International collaborations like the MIT-Portugal Program are absolutely vital to MIT’s mission of research, education and service. I’m thrilled to see the program move into its next phase,” says MIT President Sally Kornbluth. “MPP offers our faculty and students opportunities to work in unique research environments where they not only make new findings and learn new methods but also contribute to solving urgent local and global problems. MPP’s work in the realm of ocean science and climate is a prime example of how international partnerships like this can help solve important human problems."

Sharing MIT’s commitment to academic independence and excellence, Kornbluth adds, “the institutions and researchers we partner with through MPP enhance MIT’s ability to achieve its mission, enabling us to pursue the exacting standards of intellectual and creative distinction that make MIT a cradle of innovation and world leader in scientific discovery.”

The epitome of an effective international collaboration, MPP has stayed true to its mission and continued to deliver results here in the U.S. and in Portugal for nearly two decades — prevailing amid myriad shifts in the political, social, and economic landscape. The multifaceted program encompasses an annual research conference and educational summits such as an Innovation Workshop at MIT each June and a Marine Robotics Summer School in the Azores in July, as well as student and faculty exchanges that facilitate collaborative research. During the third phase of the program alone, 59 MIT students and 53 faculty and researchers visited Portugal, and MIT hosted 131 students and 49 faculty and researchers from Portuguese universities and other institutions.

In each roughly five-year phase, MPP researchers focus on a handful of core research areas. For Phase 3, MPP advanced cutting-edge research in four strategic areas: climate science and climate change; Earth systems: oceans to near space; digital transformation in manufacturing; and sustainable cities. Within these broad areas, MIT and FCT researchers worked together on numerous small-scale projects and several large “flagship” ones, including development of Portugal’s CubeSat satellite, a collaboration between MPP and several Portuguese universities and companies that marked the country’s second satellite launch and the first in 30 years.

While work in the Phase 3 fields will continue during Phase 4, researchers will also turn their attention to four more areas: chips/nanotechnology, energy (a previous focus in Phase 2), artificial intelligence, and space.

“We are opening up the aperture for additional collaboration areas,” Hansman says.

In addition to focusing on distinct subject areas, each phase has emphasized the various parts of MPP’s mission to differing degrees. While Phase 3 accentuated collaborative research more than educational exchanges and entrepreneurship, those two aspects will be given more weight under the Phase 4 agreement, Hart said.

“We have approval in Phase 4 to bring a number of Portuguese students over, and our principal investigators will benefit from close collaborations with Portuguese researchers,” he says.

The longevity of MPP and the recent launch of Phase 4 are evidence of the program’s value. The program has played a role in the educational, technological and economic progress Portugal has achieved over the past two decades, as well.  

“The Portugal of today is remarkably stronger than the Portugal of 20 years ago, and many of the places where they are stronger have been impacted by the program,” says Hansman, pointing to sustainable cities and “green” energy, in particular. “We can’t take direct credit, but we’ve been part of Portugal’s journey forward.”

Since MPP began, Hart adds, “Portugal has become much more entrepreneurial. Many, many, many more start-up companies are coming out of Portuguese universities than there used to be.”  

A recent analysis of MPP and FCT’s other U.S. collaborations highlighted a number of positive outcomes. The report noted that collaborations with MIT and other U.S. universities have enhanced Portuguese research capacities and promoted organizational upgrades in the national R&D ecosystem, while providing Portuguese universities and companies with opportunities to engage in complex projects that would have been difficult to undertake on their own.

Regarding MIT in particular, the report found that MPP’s long-term collaboration has spawned the establishment of sustained doctoral programs and pointed to a marked shift within Portugal’s educational ecosystem toward globally aligned standards. MPP, it reported, has facilitated the education of 198 Portuguese PhDs.

Portugal’s universities, students and companies are not alone in benefiting from the research, networks, and economic activity MPP has spawned. MPP also delivers unique value to MIT, as well as to the broader U.S. science and research community. Among the program’s consistent themes over the years, for example, is “joint interest in the Atlantic,” Hansman says.

This summer, Faial Island in the Azores will host MPP’s fifth annual Marine Robotics Summer School, a two-week course open to 12 Portuguese master’s and first-year PhD students and 12 MIT upper-level undergraduates and graduate students. The course, which includes lectures by MIT and Portuguese faculty and other researchers, workshops, labs and hands-on experiences, “is always my favorite,” says Hart.

“I get to work with some of the best researchers in the world there, and some of the top students coming out of Woods Hole Oceanographic Institution, MIT, and Portugal,” he says, adding that some of his previous Marine Robotics Summer School students have come to study at MIT and then gone on to become professors in ocean science.

“So, it’s been exciting to see the growth of students coming out of that program, certainly a positive impact,” Hart says.

MPP provides one-of-a-kind opportunities for ocean research due to the unique marine facilities available in Portugal, including not only open ocean off the Azores but also Lisbon’s deep-water port and a Portuguese Naval facility just south of Lisbon that is available for collaborative research by international scientists. Like MIT, Portuguese universities are also strongly invested in climate change research — a field of study keenly related to ocean systems.

“The international collaboration has allowed us to test and further develop our research prototypes in different aquaculture environments both in the US and in Portugal, while building on the unique expertise of our Portuguese faculty collaborator Dr. Ricardo Calado from the University of Aveiro and our industry collaborators,” says Stefanie Mueller, the TIBCO Career Development Associate Professor in MIT’s departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the Human-Computer Interaction Group at the MIT Computer Science and Artificial Intelligence Lab.

Mueller points to the work of MIT mechanical engineering PhD student Charlene Xia, a Marine Robotics Summer School participant, whose research is aimed at developing an economical system to monitor the microbiome of seaweed farms and halt the spread of harmful bacteria associated with ocean warming. In addition to participating in the summer school as a student, Xia returned to the Azores for two subsequent years as a teaching assistant.

“The MIT-Portugal Program has been a key enabler of our research on monitoring the aquatic microbiome for potential disease outbreaks,” Mueller says.

As MPP enters its next phase, Hart and Hansman are optimistic about the program’s continuing success on both sides of the Atlantic and envision broadening its impact going forward.

“I think, at this point, the research is going really well, and we’ve got a lot of connections. I think one of our goals is to expand not the science of the program necessarily, but the groups involved,” Hart says, noting that MPP could have a bigger presence in technical fields such as AI and micro-nano manufacturing, as well as in social sciences and humanities.

“We’d like to involve many more people and new people here at MIT, as well as in Portugal,” he says, “so that we can reach a larger slice of the population.” 


MIT engineers advance toward a fault-tolerant quantum computer

Researchers achieved a type of coupling between artificial atoms and photons that could enable readout and processing of quantum information in a few nanoseconds.


In the future, quantum computers could rapidly simulate new materials or help scientists develop faster machine-learning models, opening the door to many new possibilities.

But these applications will only be possible if quantum computers can perform operations extremely quickly, so scientists can make measurements and perform corrections before compounding error rates reduce their accuracy and reliability.

The efficiency of this measurement process, known as readout, relies on the strength of the coupling between photons, which are particles of light that carry quantum information, and artificial atoms, units of matter that are often used to store information in a quantum computer.

Now, MIT researchers have demonstrated what they believe is the strongest nonlinear light-matter coupling ever achieved in a quantum system. Their experiment is a step toward realizing quantum operations and readout that could be performed in a few nanoseconds.

The researchers used a novel superconducting circuit architecture to show nonlinear light-matter coupling that is about an order of magnitude stronger than prior demonstrations, which could enable a quantum processor to run about 10 times faster.

There is still much work to be done before the architecture could be used in a real quantum computer, but demonstrating the fundamental physics behind the process is a major step in the right direction, says Yufeng “Bright” Ye SM ’20, PhD ’24, lead author of a paper on this research.

“This would really eliminate one of the bottlenecks in quantum computing. Usually, you have to measure the results of your computations in between rounds of error correction. This could accelerate how quickly we can reach the fault-tolerant quantum computing stage and be able to get real-world applications and value out of our quantum computers,” says Ye.

He is joined on the paper by senior author Kevin O’Brien, an associate professor and principal investigator in the Research Laboratory of Electronics (RLE) at MIT who leads the Quantum Coherent Electronics Group in the Department of Electrical Engineering and Computer Science (EECS). Additional MIT co-authors, with affiliations in RLE and/or MIT Lincoln Laboratory, include Jeremy B. Kline, Alec Yen, Gregory Cunningham, Max Tan, Alicia Zang, Michael Gingras, Bethany M. Niedzielski, Hannah Stickler, Kyle Serniak, and Mollie E. Schwartz. The research appears today in Nature Communications.

A new coupler

This physical demonstration builds on years of theoretical research in the O’Brien group.

After Ye joined the lab as a PhD student in 2019, he began developing a specialized photon detector to enhance quantum information processing.

Through that work, he invented a new type of quantum coupler, which is a device that facilitates interactions between qubits. Qubits are the building blocks of a quantum computer. This so-called quarton coupler had so many potential applications in quantum operations and readout that it quickly became a focus of the lab.

This quarton coupler is a special type of superconducting circuit that has the potential to generate extremely strong nonlinear coupling, which is essential for running most quantum algorithms. As the researchers feed more current into the coupler, it creates an even stronger nonlinear interaction. In this sense, nonlinearity means a system behaves in a way that is greater than the sum of its parts, exhibiting more complex properties.

“Most of the useful interactions in quantum computing come from nonlinear coupling of light and matter. If you can get a more versatile range of different types of coupling, and increase the coupling strength, then you can essentially increase the processing speed of the quantum computer,” Ye explains.

For quantum readout, researchers shine microwave light onto a qubit and then, depending on whether that qubit is in state 0 or 1, there is a frequency shift on its associated readout resonator. They measure this shift to determine the qubit’s state.

Nonlinear light-matter coupling between the qubit and resonator enables this measurement process.
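The readout scheme described above can be sketched schematically: the resonator frequency shifts one way or the other depending on the qubit state, and measuring that shift reveals the state. All numbers below are illustrative placeholders, not values from the paper.

```python
# Schematic sketch of dispersive qubit readout: probing the readout resonator
# with microwave light and inferring the qubit state from the frequency shift.
# Frequencies are illustrative, not taken from the MIT experiment.

F_RESONATOR = 7.0e9   # bare readout-resonator frequency, Hz (illustrative)
CHI = 1.0e6           # state-dependent dispersive shift, Hz (illustrative)

def readout_frequency(qubit_state: int) -> float:
    """Resonator frequency observed when the qubit is in state 0 or 1."""
    shift = +CHI if qubit_state == 0 else -CHI
    return F_RESONATOR + shift

def infer_state(measured_freq: float) -> int:
    """Infer the qubit state from which side of the bare frequency we measure."""
    return 0 if measured_freq > F_RESONATOR else 1
```

Stronger nonlinear coupling corresponds, roughly, to a larger and faster-acting state-dependent shift, which is why it translates into faster readout.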

The MIT researchers designed an architecture with a quarton coupler connected to two superconducting qubits on a chip. They turn one qubit into a resonator and use the other qubit as an artificial atom which stores quantum information. This information is transferred in the form of microwave light particles called photons.

“The interaction between these superconducting artificial atoms and the microwave light that routes the signal is basically how an entire superconducting quantum computer is built,” Ye explains.

Enabling faster readout

The quarton coupler creates nonlinear light-matter coupling between the qubit and resonator that’s about an order of magnitude stronger than researchers had achieved before. This could enable a quantum system with lightning-fast readout.

“This work is not the end of the story. This is the fundamental physics demonstration, but there is work going on in the group now to realize really fast readout,” O’Brien says.

That would involve adding additional electronic components, such as filters, to produce a readout circuit that could be incorporated into a larger quantum system.

The researchers also demonstrated extremely strong matter-matter coupling, another type of qubit interaction that is important for quantum operations. This is another area they plan to explore with future work.

Fast operations and readout are especially important for quantum computers because qubits have finite lifespans, a concept known as coherence time.

Stronger nonlinear coupling enables a quantum processor to run faster and with lower error, so the qubits can perform more operations in the same amount of time. This means the qubits can run more rounds of error correction during their lifespans.

“The more runs of error correction you can get in, the lower the error will be in the results,” Ye says.

In the long run, this work could help scientists build a fault-tolerant quantum computer, which is essential for practical, large-scale quantum computation.

This research was supported, in part, by the Army Research Office, the AWS Center for Quantum Computing, and the MIT Center for Quantum Engineering.


In kids, EEG monitoring of consciousness safely reduces anesthetic use

Clinical trial finds several outcomes improved for young children when an anesthesiologist observed their brain waves to guide dosing of sevoflurane during surgery.


Newly published results of a randomized, controlled clinical trial in Japan, involving more than 170 children aged 1 to 6 who underwent surgery, show that by using electroencephalogram (EEG) readings of brain waves to monitor unconsciousness, an anesthesiologist can significantly reduce the amount of anesthesia administered to safely induce and sustain each patient’s anesthetized state. On average, the young patients experienced significant improvements in several post-operative outcomes, including quicker recovery and reduced incidence of delirium.

“I think the main takeaway is that in kids, using the EEG, we can reduce the amount of anesthesia we give them and maintain the same level of unconsciousness,” says study co-author Emery N. Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT, an anesthesiologist at Massachusetts General Hospital, and a professor at Harvard Medical School. The study appeared April 21 in JAMA Pediatrics.

Yasuko Nagasaka, chair of anesthesiology at Tokyo Women’s Medical University and a former colleague of Brown’s in the United States, designed the study. She asked Brown to train and advise lead author Kiyoyuki Miyasaka of St. Luke’s International Hospital in Tokyo on how to use EEG to monitor unconsciousness and adjust anesthesia dosing in children. Miyasaka then served as the anesthesiologist for all patients in the trial. Attending anesthesiologists not involved in the study were always on hand to supervise.

Brown’s research in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences at MIT has shown that a person’s level of consciousness under any particular anesthetic drug is discernible from patterns of their brain waves. Each child’s brain waves were measured with EEG, but in the control group Miyasaka adhered to standard anesthesia dosing protocols while in the experimental group he used the EEG measures as a guide for dosing. The results show that when he used EEG, he was able to induce the desired level of unconsciousness with a concentration of 2 percent sevoflurane gas, rather than the standard 5 percent. Maintenance of unconsciousness, meanwhile, only turned out to require 0.9 percent concentration, rather than the standard 2.5 percent.

Meanwhile, a separate researcher, blinded to whether EEG or standard protocols were used, assessed the kids for “pediatric anesthesia emergence delirium” (PAED), in which children sometimes wake up from anesthesia with a set of side effects including lack of eye contact, inconsolability, unawareness of surroundings, restlessness, and non-purposeful movements. Children who received standard anesthesia dosing met the threshold for PAED in 35 percent of cases (30 out of 86), while children who received EEG-guided dosing met the threshold in 21 percent of cases (19 out of 91). The difference of 14 percentage points was statistically significant.
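
As a sanity check on the reported proportions, a standard two-proportion z-test (a hypothetical reconstruction for illustration, not the trial's actual analysis code) reproduces both the roughly 14-percentage-point gap and its statistical significance:

```python
# Hypothetical re-check of the reported PAED rates: 30/86 (standard dosing)
# vs. 19/91 (EEG-guided dosing). Uses only the standard library.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Return (difference in proportions, two-sided p-value) via a pooled z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF: Phi(z) = 0.5*(1 + erf(z/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1 - p2, p_value

diff, p = two_proportion_z(30, 86, 19, 91)
print(f"{diff:.0%} difference, p = {p:.3f}")  # ~14 percentage points, p < 0.05
```

The pooled z-test is one conventional way to compare two proportions; the study itself may have used a different test, but the conclusion (a significant ~14-point difference) is the same.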

The authors also reported that, on average, EEG-guided patients had breathing tubes removed 3.3 minutes earlier, emerged from anesthesia 21.4 minutes earlier, and were discharged from post-acute care 16.5 minutes earlier than patients who received anesthesia according to the standard protocol. All of these differences were statistically significant. Also, no child in the study ever became aware during surgery.

The authors noted that the quicker recovery among patients who received EEG-guided anesthesia was not only better medically, but also reduced health-care costs. Time in post-acute care in the United States costs about $46 a minute, so the average reduced time of 16.5 minutes would save about $750 per case. Sevoflurane is also a potent greenhouse gas, Brown notes, so reducing its use is better for the environment.
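
The quoted savings follow directly from the article's figures; a quick back-of-the-envelope check (the $46-per-minute rate is the article's figure, not an independently verified cost):

```python
# Rough per-case savings from shorter post-acute care stays.
cost_per_minute = 46      # dollars per minute of U.S. post-acute care (article's figure)
minutes_saved = 16.5      # average earlier discharge in the EEG-guided group
savings = cost_per_minute * minutes_saved
print(f"${savings:.0f} per case")  # 46 * 16.5 = $759, i.e. "about $750"
```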

In the study, the authors also present comparisons of the EEG recordings from children in the control and experimental groups. There are notable differences in the “spectrograms” that charted the power of individual brain wave frequencies both as children were undergoing surgery and while they were approaching emergence from anesthesia, Brown says.

For instance, among children who received EEG-guided dosing, there are well-defined bands of high power at about 1-3 hertz (Hz) and 10-12 Hz. In children who received standard protocol dosing, the entire range of frequencies up to about 15 Hz is at high power. In another example, children who experienced PAED showed higher power at several frequencies up to 30 Hz than children who did not experience PAED.
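
To illustrate the kind of band-power comparison these spectrograms support, here is a minimal sketch on a synthetic signal (the signal, sampling rate, and band edges are assumptions for illustration, not the study's data or code):

```python
# Illustrative sketch: band power from an EEG-style spectrogram, the kind of
# feature the article describes at 1-3 Hz and 10-12 Hz. Synthetic data only.
import numpy as np
from scipy.signal import spectrogram

fs = 250                                    # sampling rate in Hz (typical for EEG)
t = np.arange(0, 30, 1 / fs)                # 30 seconds of synthetic signal
# Synthetic "anesthetized" EEG: slow-delta (2 Hz) plus alpha (11 Hz) plus noise
x = np.sin(2 * np.pi * 2 * t) + 0.8 * np.sin(2 * np.pi * 11 * t)
x += 0.3 * np.random.default_rng(0).standard_normal(t.size)

f, seg_times, Sxx = spectrogram(x, fs=fs, nperseg=2 * fs)

def band_power(f, Sxx, lo, hi):
    """Mean spectrogram power in the [lo, hi] Hz band, averaged over time."""
    mask = (f >= lo) & (f <= hi)
    return Sxx[mask].mean()

slow = band_power(f, Sxx, 1, 3)     # slow-delta band
alpha = band_power(f, Sxx, 10, 12)  # alpha band
beta = band_power(f, Sxx, 20, 30)   # higher frequencies, weak in this signal
print(slow > beta, alpha > beta)    # the injected bands dominate
```

In the study's terms, well-defined power at 1-3 Hz and 10-12 Hz with little elsewhere resembles the EEG-guided pattern, while broadband power up to about 15 Hz resembles the standard-dosing pattern.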

The findings further validate the idea that monitoring brain waves during surgery can provide anesthesiologists with actionable guidance to improve patient care, Brown says. Training in reading EEGs and guiding dosing can readily be integrated in the continuing medical education practices of hospitals, he adds.

In addition to Miyasaka, Brown, and Nagasaka, Yasuyuki Suzuki is a study co-author.

Funding sources for the study include the MIT-Massachusetts General Brigham Brain Arousal State Control Innovation Center, the Freedom Together Foundation, and the Picower Institute.


Lighting up biology’s basement lab

Senior Technical Instructor Vanessa Cheung ’02 brings the energy, experience, and excitement needed to educate students in the biology teaching lab.


For more than 30 years, Course 7 (Biology) students have descended to the expansive, windowless basement of Building 68 to learn practical skills that are the centerpiece of undergraduate biology education at the Institute. The lines of benches and cabinets of supplies that make up the underground MIT Biology Teaching Lab could easily feel dark and isolated. 

In the corner of this room, however, sits Senior Technical Instructor Vanessa Cheung ’02, who manages to make the space seem sunny and communal.

“We joke that we could rig up a system of mirrors to get just enough daylight to bounce down from the stairwell,” Cheung says with a laugh. “It is a basement, but I am very lucky to have this teaching lab space. It is huge and has everything we need.”

This optimism and gratitude fostered by Cheung are critical, as MIT undergraduates enrolled in classes 7.002 (Fundamentals of Experimental Molecular Biology) and 7.003 (Applied Molecular Biology Laboratory) spend four-hour blocks in the lab each week, learning the foundations of laboratory technique and theory for biological research from Cheung and her colleagues.

Running toward science education

Cheung’s love for biology can be traced back to her high school cross country and track coach, who also served as her second-year biology teacher. The sport and the fundamental biological processes she was learning about in the classroom were, in fact, closely intertwined. 

“He told us about how things like ATP [adenosine triphosphate] and the energy cycle would affect our running,” she says. “Being able to see that connection really helped my interest in the subject.”

That inspiration carried her through a move from her hometown of Pittsburgh, Pennsylvania, to Cambridge, Massachusetts, to pursue an undergraduate degree at MIT, and through her thesis work to earn a PhD in genetics at Harvard Medical School. She didn’t leave running behind either: To this day, she can often be found on the Charles River Esplanade, training for her next marathon. 

She discovered her love of teaching during her PhD program. She enjoyed guiding students so much that she spent an extra semester as a teaching assistant, outside of the one required for her program. 

“I love research, but I also really love telling people about research,” Cheung says.

Cheung herself describes lab instruction as the “best of both worlds,” enabling her to pursue her love of teaching while spending every day at the bench, doing experiments. She emphasizes for students the importance of being able not just to do the hands-on technical lab work, but also to understand the theory behind it.

“The students can tend to get hung up on the physical doing of things — they are really concerned when their experiments don’t work,” she says. “We focus on teaching students how to think about being in a lab — how to design an experiment and how to analyze the data.”

Although her talent for teaching and passion for science led her to the role, Cheung doesn’t hesitate to identify the students as her favorite part of the job. 

“It sounds cheesy, but they really do keep the job very exciting,” she says.

Using mind and hand in the lab

Cheung is the type of person who lights up when describing how much she “loves working with yeast.” 

“I always tell the students that maybe no one cares about yeast except me and like three other people in the world, but it is a model organism that we can use to apply what we learn to humans,” Cheung explains.

Though mastering basic lab skills can make hands-on laboratory courses feel “a bit cookbook,” Cheung is able to get the students excited with her enthusiasm and clever curriculum design. 

“The students like things where they can get their own unique results, and things where they have a little bit of freedom to design their own experiments,” she says. So, the lab curriculum incorporates opportunities for students to do things like identify their own unique yeast mutants and design their own questions to test in a chemical engineering module.

Part of what makes theory as critical as technique is that new tools and discoveries emerge frequently in biology, especially at MIT. For example, RNAi has given way to CRISPR as a popular lab technique in recent years, and Cheung muses that CRISPR itself may be overshadowed within only a few more years. Keeping students learning at the cutting edge of biology is always on her mind.

“Vanessa is the heart, soul, and mind of the biology lab courses here at MIT, embodying ‘mens et manus’ [‘mind and hand’],” says technical lab instructor and Biology Teaching Lab Manager Anthony Fuccione. 

Support for all students

Cheung’s ability to mentor and guide students earned her a School of Science Dean’s Education and Advising Award in 2012, but her focus isn’t solely on MIT undergraduate students. 

In fact, according to Cheung, the earlier students can be exposed to science, the better. In addition to her regular duties, Cheung also designs curriculum and teaches in the LEAH Knox Scholars Program. The two-year program provides lab experience and mentorship for low-income Boston- and Cambridge-area high school students. 

Paloma Sanchez-Jauregui, outreach programs coordinator who works with Cheung on the program, says Cheung has a standout “growth mindset” that students really appreciate.

“Vanessa teaches students that challenges — like unexpected PCR results — are part of the learning process,” Sanchez-Jauregui says. “Students feel comfortable approaching her for help troubleshooting experiments or exploring new topics.”

Cheung’s colleagues report that they admire not only her talents, but also her focus on supporting those around her. Technical Instructor and colleague Eric Chu says Cheung “offers a lot of help to me and others, including those outside of the department, but does not expect reciprocity.”

Professor of biology and co-director of the Department of Biology undergraduate program Adam Martin says he “rarely has to worry about what is going on in the teaching lab.” According to Martin, Cheung is “flexible, hard-working, dedicated, and resilient, all while being kind and supportive to our students. She is a joy to work with.”


Exploring new frontiers in mineral extraction

Professor Thomas Peacock’s research aims to better understand the impact of deep-sea mining.


The ocean’s deep-sea bed is scattered with ancient rocks, each about the size of a closed fist, called “polymetallic nodules.” Elsewhere, along active and inactive hydrothermal vents and the deep ocean’s ridges, volcanic arcs, and tectonic plate boundaries, and on the flanks of seamounts, lie other types of mineral-rich deposits containing high-demand minerals.

The minerals found in the deep ocean are used to manufacture products such as the lithium-ion batteries that power electric vehicles and cell phones, as well as solar cells. In some cases, the estimated resources of critical mineral deposits in parts of the abyssal ocean exceed global land-based reserves severalfold.

“Society wants electric-powered vehicles, solar cells for clean energy, but all of this requires resources,” says Thomas Peacock, professor of mechanical engineering at MIT, in a video discussing his research. “Land-based resources are getting depleted, or are more challenging to access. In parts of the ocean, there are much more of these resources than in land-based reserves. The question is: Can it be less impactful to mine some of these resources from the ocean, rather than from land?”

Deep-sea mining is a new frontier in mineral extraction, with potentially significant implications for industry and the global economy, and important environmental and societal considerations. Through research, scientists like Peacock study the impacts of deep-sea mining activity objectively and rigorously, and can bring evidence to bear on decision-making. 

Mining activities, whether on land or at sea, can have significant impacts on the environment at local, regional, and global scales. As interest in deep-seabed mining is increasing, driven by the surging demand for critical minerals, scientific inquiries help illuminate the trade-offs.

Peacock has long studied the potential impacts of deep-sea mining in a region of the Pacific Ocean known as the Clarion Clipperton Zone (CCZ), where polymetallic nodules abound. A decade ago, his research group began studying deep-sea mining, seeing a critical need to develop monitoring and modeling capabilities for assessing the scale of impact.

Today, his MIT Environmental Dynamics Laboratory (ENDLab) is at the forefront of advancing understanding for emerging ocean utilization technologies. With research anchored in fundamental fluid dynamics, the team is developing cutting-edge monitoring programs, novel sensors, and modeling tools.

“We are studying the form of suspended sediment from deep sea mining operations, testing a new sensor for sediment and another new sensor for turbulence, studying the initial phases of the sediment plume development, and analyzing data from the 2021 and 2022 technology trials in the Pacific Ocean,” he explains.

In deep-sea nodule mining, vehicles collect nodules from the ocean floor and convey them back to a vessel above. After the critical materials are collected on the vessel, some leftover sediment may be returned to the deep-water column. The resulting sediment plumes, and their potential impacts, are a key focus of the team’s work.

A 2022 study conducted in the CCZ investigated the dynamics of sediment plumes near a deep-seabed polymetallic nodule mining vehicle. The experiments revealed that most of the released sediment-laden water, between 92 and 98 percent, stayed close to the seafloor, spreading laterally. The results suggest that turbidity current dynamics set the fraction of sediment that remains suspended in the water, along with the scale of the subsequent ambient sediment plume. The implications of this previously overlooked process are substantial for plume modeling and informative for environmental impact statements.

“New model breakthroughs can help us make increasingly trustworthy predictions,” he says. The team also contributed to a recent study, published in the journal Nature, which showed that sediment deposited away from a test mining site gets cleared away, most likely by ocean currents, and reported on any observed biological recovery.

Researchers observed a site four decades after a nodule test mining experiment. Although biological impacts in many groups of organisms were present, populations of several organisms, including sediment macrofauna, mobile deposit feeders, and even large-sized sessile fauna, had begun to reestablish despite persistent physical changes at the seafloor. The study was led by the National Oceanography Centre in the U.K.

“A great deal has been learned about the fluid mechanics of deep-sea mining, in particular when it comes to deep-sea mining sediment plumes,” says Peacock, adding that the scientific progress continues with more results on the way. The work is setting new standards for in-situ monitoring of suspended sediment properties, and for how to interpret field data from recent technical trials.


Will the vegetables of the future be fortified using tiny needles?

Researchers showed they can inexpensively produce silk microneedles to deliver vitamins or agrochemicals to plants.


When farmers apply pesticides to their crops, 30 to 50 percent of the chemicals end up in the air or soil instead of on the plants. Now, a team of researchers from MIT and Singapore has developed a much more precise way to deliver substances to plants: tiny needles made of silk.

In a study published today in Nature Nanotechnology, the researchers developed a way to produce large amounts of these hollow silk microneedles. They used them to inject agrochemicals and nutrients into plants, and to monitor their health.

“There’s a big need to make agriculture more efficient,” says Benedetto Marelli, the study’s senior author and an associate professor of civil and environmental engineering at MIT. “Agrochemicals are important for supporting our food system, but they’re also expensive and bring environmental side effects, so there’s a big need to deliver them precisely.”

Yunteng Cao PhD ’22, currently a postdoc at Yale University, and Doyoon Kim, a former postdoc in the Marelli lab, led the study, which included a collaboration with the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group at the Singapore-MIT Alliance for Research and Technology (SMART).

In demonstrations, the team used the technique to give plants iron to treat a disease known as chlorosis, and to add vitamin B12 to tomato plants to make them more nutritious. The researchers also showed the microneedles could be used to monitor the quality of fluids flowing into plants and to detect when the surrounding soil contained heavy metals.

Overall, the researchers believe the microneedles could serve as a new kind of plant interface for real-time health monitoring and biofortification.

“These microneedles could be a tool for plant scientists so they can understand more about plant health and how they grow,” Marelli says. “But they can also be used to add value to crops, making them more resilient and possibly even increasing yields.”

The inner workings of plants

Accessing the inner tissues of living plants requires scientists to get through the plants’ waxy skin without causing too much stress. In previous work, the researchers used silk-based microneedles to deliver agrochemicals to plants in lab environments and to detect pH changes in living plants. But these initial efforts involved small payloads, limiting their applications in commercial agriculture.

“Microneedles were originally developed for the delivery of vaccines or other drugs in humans,” Marelli explains. “Now we’ve adapted it so that the technology can work with plants, but initially we could not deliver sufficient doses of agrochemicals and nutrients to mitigate stressors or enhance crop nutritional values.”

Hollow structures could increase the amount of chemicals microneedles can deliver, but Marelli says creating those structures at scale has historically required clean rooms and expensive facilities like the ones found inside the MIT.nano building.

For this study, Cao and Kim created a new way to manufacture hollow silk microneedles by combining silk fibroin protein with a salty solution inside tiny, cone-shaped molds. As water evaporated from the solution, the silk solidified against the mold while the salt formed crystalline structures inside it. When the salt was removed, it left behind in each needle either a hollow structure or tiny pores, depending on the salt concentration and the separation of the organic and inorganic phases.

“It’s a pretty simple fabrication process. It can be done outside of a clean room — you could do it in your kitchen if you wanted,” Kim says. “It doesn’t require any expensive machinery.”

The researchers then tested their microneedles’ ability to deliver iron to iron-deficient tomato plants; iron deficiency can cause a disease known as chlorosis. Chlorosis can decrease yields, but treating it by spraying crops is inefficient and can have environmental side effects. The researchers showed that their hollow microneedles could be used for the sustained delivery of iron without harming the plants.

The researchers also showed their microneedles could be used to fortify crops while they grow. Historically, crop fortification efforts have focused on minerals like zinc or iron, with vitamins only added after the food is harvested.

In each case, the researchers applied the microneedles to the stalks of plants by hand, but Marelli envisions equipping autonomous vehicles and other equipment already used in farms to automate and scale the process.

As part of the study, the researchers used microneedles to deliver vitamin B12, which is primarily found naturally in animal products, into the stalks of growing tomatoes, showing that vitamin B12 moved into the tomato fruits before harvest. The researchers propose their method could be used to fortify more plants with the vitamin.

Co-author Daisuke Urano, a plant scientist with DiSTAP, explains that “through a comprehensive assessment, we showed minimal adverse effects from microneedle injections in plants, with no observed short- or long-term negative impacts.”

“This new delivery mechanism opens up a lot of potential applications, so we wanted to do something nobody had done before,” Marelli explains.

Finally, the researchers explored the use of their microneedles to monitor the health of plants by studying tomatoes growing in hydroponic solutions contaminated with cadmium, a toxic metal commonly found in farms close to industrial and mining sites. They showed their microneedles absorbed the toxin within 15 minutes of being injected into the tomato stalks, offering a path to rapid detection.

Current advanced techniques for monitoring plant health, such as colorimetric and hyperspectral leaf analyses, can only detect problems after plant growth has already been stunted. Other methods, such as sap sampling, can be too time-consuming.

Microneedles, in contrast, could be used to more easily collect sap for ongoing chemical analysis. For instance, the researchers showed they could monitor cadmium levels in tomatoes over the course of 18 hours.

A new platform for farming

The researchers believe the microneedles could complement existing agricultural practices like spraying. They also note the technology has applications beyond agriculture, such as in biomedical engineering.

“This new polymeric microneedle fabrication technique may also benefit research in microneedle-mediated transdermal and intradermal drug delivery and health monitoring,” Cao says.

For now, though, Marelli believes the microneedles offer a path to more precise, sustainable agriculture practices.

“We want to maximize the growth of plants without negatively affecting the health of the farm or the biodiversity of surrounding ecosystems,” Marelli says. “There shouldn’t be a trade-off between the agriculture industry and the environment. They should work together.”

This work was supported, in part, by the U.S. Office of Naval Research, the U.S. National Science Foundation, SMART, the National Research Foundation of Singapore, and the Singapore Prime Minister’s Office.


At the Venice Biennale, design through flexible thinking

The renowned architecture exhibition, curated this year by MIT’s Carlo Ratti, puts an emphasis on adaptive intelligence.


When the Venice Biennale’s 19th International Architecture Exhibition launches on May 10, its guiding theme will be applying nimble, flexible intelligence to a demanding world — an ongoing focus of its curator, MIT faculty member Carlo Ratti.

The Biennale is the world’s most renowned exhibition of its kind, an international event whose subject matter shifts over time, with a new curator providing new focus every two years. This year, the Biennale’s formal theme is “Intelligens,” the Latin word behind “intelligence,” in English, and “intelligenza,” in Italian — a word that evokes both the exhibition’s international scope and the many ways humans learn, adapt, and create.

“Our title is ‘Intelligens. Natural, artificial, collective,’” notes Ratti, who is a professor of the practice of urban technologies and planning in the MIT School of Architecture and Planning. “One key point is how we can go beyond what people normally think about intelligence, whether in people or AI. In the built environment we deal with many types of feedback and need to leverage all types of intelligence to collect and use it all.”

That applies to the subject of climate change, as adaptation is an ongoing focal point for the design community, whether facing the need to rework structures or to develop new, resilient designs for cities and regions.

“I would emphasize how eager architects are today to play a big role in addressing the big crises we face on the planet we live in,” Ratti says. “Architecture is the only discipline to bring everybody together, because it means rethinking the built environment, the places we all live.”

He adds: “If you think about the fires in Los Angeles, or the floods in Valencia or Bangladesh, or the drought in Sicily, these are cases where architecture and design need to apply feedback and use intelligence.”

Not just sharing design, but creating it

The Venice Biennale is the leading event of its kind globally and one of the earliest: It started with art exhibitions in 1895 and later added biennial shows focused on other facets of culture. Starting in 1980, the Biennale of Architecture was held every two years until the 2020 exhibition — curated by MIT’s Hashim Sarkis — was rescheduled to 2021 due to the Covid-19 pandemic. It now continues in odd-numbered years.

After its May 10 opening, this year’s exhibition runs until Nov. 23.

Ratti is a wide-ranging scholar, designer, and writer, and the long-running director of MIT’s Senseable City Lab, which has been on the leading edge of using data to understand cities as living systems.

Additionally, Ratti is a founding partner of the international design firm Carlo Ratti Associati. He graduated from the Politecnico di Torino and the École Nationale des Ponts et Chaussées in Paris, then earned his MPhil and PhD at Cambridge University. He has authored and co-authored hundreds of publications, including the books “Atlas of the Senseable City” (2023) and “The City of Tomorrow” (2016). Ratti’s work has been exhibited at the Venice Biennale, the Design Museum in Barcelona, the Science Museum in London, and the Museum of Modern Art in New York, among other venues.

In his role as curator of this year’s Biennale, Ratti adapted the traditional format to engage with some of the leading questions design faces. Ratti and the organizers created multiple forums to gather feedback about the exhibition’s possibilities, sifting through responses during the planning process.

Ratti has also publicly called this year’s Biennale a “living lab,” not just an exhibition, in accordance with the idea of learning from feedback and developing designs in response.

Back in 1895, Ratti notes, the Biennale was principally “a place to share existing knowledge, with artists and architects coming together every two years. Today, and for a few decades, you can find almost anything in architecture and art immediately online. I think Biennales can not only be places where you share existing knowledge, but places where you create new knowledge.”

At this moment, he emphasizes, that will often mean listening to nature as we grapple with climate solutions. It also implies recognizing that nature itself inevitably responds to inputs, too.

In this vein, Ratti says, “Remember what the great architect Carlo Scarpa once said: ‘Between a tree and a house, choose the tree.’ I see that as a powerful call to learn from nature — a vast lab of trial and error, guided by feedback loops. Too often in the 20th century, architects believed they had the solution and simply needed to scale it up. The results? Frequently disastrous. Especially now, when adaptability is everything, I believe in a different approach: experimentation, feedback, iteration. That’s the spirit I hope defines this year’s Biennale.”

An MIT touch

This year, MIT will again have a robust presence at the Biennale, even beyond Ratti’s presence as curator. In the first place, he emphasizes, there is a strong team organizing the Biennale. That includes MIT graduate student Claire Gorman, who has taken a year out of her studies to serve as principal assistant to the Biennale curator.

Many of the Biennale’s projects, Gorman observes, “align ecology, technology, and culture in stunning illustrations of the fact that intelligence emerges from the complex behaviors of many parts working together. Visitors to the exhibition will discover robots and artisans collaborating alongside algae, 3D printers, ancient building practices, and new materials. … One of the strengths of the exhibition is that it includes participants who approach similar topics from different points of view.”

Overall, Gorman adds, “Our hope is that visitors will come away from the exhibition with a sense of optimism about the capacity of design fields to unite many forms of expertise.”

Numerous other Institute faculty and researchers are represented as well. For instance, Daniela Rus, head of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), has helped design an installation about using robotics in the restoration of ancient structures. And famed MIT computer scientist Tim Berners-Lee, creator of the World Wide Web, is participating in a Biennale event on intelligence.

“In choosing ‘Intelligens’ as the Venice Biennale theme, Carlo Ratti recognizes that our moment requires a holistic understanding of how different forms of intelligence — from social and ecological to computational and spatial — converge to shape our built environment,” Rus says. “The Biennale offers a timely platform to explore how architecture can mediate between these intelligences, creating buildings and cities that think with and for us.”

Even as the Biennale runs, there is also a separate exhibit in Venice showcasing MIT work in architecture and design. Running from May 10 through Nov. 23, at the Palazzo Diedo, the show, “The Next Earth: Computation, Crisis, Cosmology,” features the work of 40 faculty members in MIT’s Department of Architecture, along with entries from the think tank Antikythera.

Meanwhile, for the Biennale itself, the main exhibition hall, the Arsenale, is open, but other event spaces are being renovated. That means the organizers are using additional spaces in the city of Venice this year to showcase cutting-edge design work and installations.

“We’re turning Venice into a living lab — taking the Biennale beyond its usual borders,” Ratti says. “But there’s a bigger picture: Venice may be the world’s most fragile city, caught between rising seas and the crush of mass tourism. That’s why it could become a true laboratory for the future. Venice today could be a glimpse of the world tomorrow.” 


Merging design and computer science in creative ways

MAD Fellow Alexander Htet Kyaw connects humans, machines, and the physical world using AI and augmented reality.


The speed with which new technologies hit the market is nothing compared to the speed with which talented researchers find creative ways to use them, train them, even turn them into things we can’t live without. One such researcher is MIT MAD Fellow Alexander Htet Kyaw, a graduate student pursuing dual master’s degrees in architectural studies in computation and in electrical engineering and computer science.

Kyaw takes technologies like artificial intelligence, augmented reality, and robotics, and combines them with gesture, speech, and object recognition to create human-AI workflows that have the potential to interact with our built environment, change how we shop, design complex structures, and make physical things.

One of his latest innovations is Curator AI, for which he and his MIT graduate student partners took first prize — $26,000 in OpenAI products and cash — at the MIT AI Conference’s AI Build: Generative Voice AI Solutions, a weeklong hackathon at MIT with final presentations held last fall in New York City. Working with Kyaw were Richa Gupta (architecture) and Bradley Bunch, Nidhish Sagar, and Michael Won — all from the MIT Department of Electrical Engineering and Computer Science (EECS).

Curator AI is designed to streamline online furniture shopping by providing context-aware product recommendations using AI and AR. The platform uses AR to take the dimensions of a room with locations of windows, doors, and existing furniture. Users can then speak to the software to describe what new furnishings they want, and the system will use a vision-language AI model to search for and display various options that match both the user’s prompts and the room’s visual characteristics.

“Shoppers can choose from the suggested options, visualize products in AR, and use natural language to ask for modifications to the search, making the furniture selection process more intuitive, efficient, and personalized,” Kyaw says. “The problem we’re trying to solve is that most people don’t know where to start when furnishing a room, so we developed Curator AI to provide smart, contextual recommendations based on what your room looks like.” Although Curator AI was developed for furniture shopping, it could be expanded for use in other markets.

Another example of Kyaw’s work is Estimate, a product that he and three other graduate students created during the MIT Sloan Product Tech Conference’s hackathon in March 2024. The focus of that competition was to help small businesses; Kyaw and team decided to base their work on a painting company in Cambridge that employs 10 people. Estimate uses AR and object-recognition AI to take the exact measurements of a room and generate a detailed cost estimate for a renovation and/or paint job. It also leverages generative AI to display images of the room or rooms as they might look after painting or renovating, and generates an invoice once the project is complete.

The team won that hackathon and $5,000 in cash. Kyaw’s teammates were Guillaume Allegre, May Khine, and Anna Mathy, all of whom graduated from MIT in 2024 with master’s degrees in business analytics.

In April, Kyaw will give a TEDx talk at his alma mater, Cornell University, in which he’ll describe Curator AI, Estimate, and other projects that use AI, AR, and robotics to design and build things.

One of these projects is Unlog, for which Kyaw connected AR with gesture recognition to build software that takes input from the touch of a fingertip on the surface of a material, or even in the air, to map the dimensions of building components. That’s how Unlog — a towering art sculpture made from ash logs that stands on the Cornell campus — came about.

Unlog represents the possibility that structures can be built directly from a whole log, rather than having the log travel to a lumber mill to be turned into planks or two-by-fours, then shipped to a wholesaler or retailer. It’s a good representation of Kyaw’s desire to use building materials in a more sustainable way. A paper on this work, “Gestural Recognition for Feedback-Based Mixed Reality Fabrication: A Case Study of the UnLog Tower,” was published by Kyaw, Leslie Lok, Lawson Spencer, and Sasa Zivkovic in the Proceedings of the 5th International Conference on Computational Design and Robotic Fabrication, January 2024.

Another system Kyaw developed integrates physics simulation, gesture recognition, and AR to design active bending structures built with bamboo poles. Gesture recognition allows users to manipulate digital bamboo modules in AR, and the physics simulation is integrated to visualize how the bamboo bends and where to attach the bamboo poles in ways that create a stable structure. This work appeared in the Proceedings of the 41st Education and Research in Computer Aided Architectural Design in Europe, August 2023, as “Active Bending in Physics-Based Mixed Reality: The Design and Fabrication of a Reconfigurable Modular Bamboo System.”

Last year, Kyaw pitched a similar idea, using bamboo modules to create deployable structures, to MITdesignX, an MIT MAD program that selects promising startups and provides coaching and funding to launch them. He has since founded BendShelters to build prefabricated, modular bamboo shelters and community spaces for refugees and displaced persons in Myanmar, his home country.

“Where I grew up, in Myanmar, I’ve seen a lot of day-to-day effects of climate change and extreme poverty,” Kyaw says. “There’s a huge refugee crisis in the country, and I want to think about how I can contribute back to my community.”

His work with BendShelters has been recognized by MIT Sandbox, the PKG Social Innovation Challenge, and the Amazon Robotics Prize for Social Good.

At MIT, Kyaw is collaborating with Professor Neil Gershenfeld, director of the Center for Bits and Atoms, and PhD student Miana Smith to use speech recognition, 3D generative AI, and robotic arms to create a workflow that can build objects in an accessible, on-demand, and sustainable way. Kyaw holds bachelor’s degrees in architecture and computer science from Cornell. Last year, he was awarded an SJA Fellowship from the Steve Jobs Archive, which provides funding for projects at the intersection of technology and the arts. 

“I enjoy exploring different kinds of technologies to design and make things,” Kyaw says. “Being part of MAD has made me think about how all my work connects, and helped clarify my intentions. My research vision is to design and develop systems and products that enable natural interactions between humans, machines, and the world around us.” 


New chip tests cooling solutions for stacked microelectronics

Preventing 3D integrated circuits from overheating is key to enabling their widespread use.


As demand grows for more powerful and efficient microelectronics systems, industry is turning to 3D integration — stacking chips on top of each other. This vertically layered architecture could allow high-performance processors, like those used for artificial intelligence, to be packaged closely with other highly specialized chips for communication or imaging. But technologists everywhere face a major challenge: how to prevent these stacks from overheating.

Now, MIT Lincoln Laboratory has developed a specialized chip to test and validate cooling solutions for packaged chip stacks. The chip dissipates extremely high power, mimicking high-performance logic chips, to generate heat through the silicon layer and in localized hot spots. Then, as cooling technologies are applied to the packaged stack, the chip measures temperature changes. When sandwiched in a stack, the chip will allow researchers to study how heat moves through stack layers and benchmark progress in keeping them cool. 

"If you have just a single chip, you can cool it from above or below. But if you start stacking several chips on top of each other, the heat has nowhere to escape. No cooling methods exist today that allow industry to stack multiples of these really high-performance chips," says Chenson Chen, who led the development of the chip with Ryan Keech, both of the laboratory’s Advanced Materials and Microsystems Group.

The benchmarking chip is now being used at HRL Laboratories, a research and development company co-owned by Boeing and General Motors, as they develop cooling systems for 3D heterogeneous integrated (3DHI) systems. Heterogeneous integration refers to the stacking of silicon chips with non-silicon chips, such as III-V semiconductors used in radio-frequency (RF) systems.

"RF components can get very hot and run at very high powers — it adds an extra layer of complexity to 3D integration, which is why having this testing capability is so needed," Keech says.

The Defense Advanced Research Projects Agency (DARPA) funded the laboratory's development of the benchmarking chip to support the HRL program. All of this research stems from DARPA's Miniature Integrated Thermal Management Systems for 3D Heterogeneous Integration (Minitherms3D) program.

For the Department of Defense, 3DHI opens new opportunities for critical systems. For example, 3DHI could increase the range of radar and communication systems, enable the integration of advanced sensors on small platforms such as uncrewed aerial vehicles, or allow artificial intelligence workloads to be processed directly in fielded systems instead of in remote data centers.

The test chip was developed through collaboration between circuit designers, electrical testing experts, and technicians in the laboratory's Microelectronics Laboratory. 

The chip serves two functions: generating heat and sensing temperature. To generate heat, the team designed circuits that could operate at very high power densities, in the kilowatts-per-square-centimeter range, comparable to the projected power demands of high-performance chips today and into the future. They also replicated the layout of circuits in those chips, allowing the test chip to serve as a realistic stand-in. 

"We adapted our existing silicon technology to essentially design chip-scale heaters," says Chen, who brings years of complex integration and chip design experience to the program. In the 2000s, he helped the laboratory pioneer the fabrication of two- and three-tier integrated circuits, leading early development of 3D integration.

The chip's heaters emulate both the background levels of heat within a stack and localized hot spots. Hot spots often occur in the most buried and inaccessible areas of a chip stack, making it difficult for 3D-chip developers to assess whether cooling schemes, such as microchannels delivering cold liquid, are reaching those spots and are effective enough.

That's where temperature-sensing elements come in. Distributed across the chip are what Chen likens to “tiny thermometers,” which read out the temperature in multiple locations as coolants are applied.

These thermometers are actually diodes, or switches that allow current to flow through a circuit as voltage is applied. As the diodes heat up, the current-to-voltage ratio changes. "We're able to check a diode's performance and know that it's 200 degrees C, or 100 degrees C, or 50 degrees C, for example," Keech says. "We thought creatively about how devices could fail from overheating, and then used those same properties to design useful measurement tools."
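The readout principle can be pictured with a short sketch (an illustration of diode thermometry in general, not the laboratory's actual calibration): at a fixed forward current, a silicon diode's forward voltage falls roughly linearly with temperature, so two known calibration points define a voltage-to-temperature conversion. The voltage and temperature values below are hypothetical.

```python
def calibrate(v1, t1, v2, t2):
    """Build a linear voltage-to-temperature converter from two
    known (voltage, temperature) calibration points."""
    slope = (t2 - t1) / (v2 - v1)  # degrees C per volt (negative for a diode)
    return lambda v: t1 + slope * (v - v1)

# Hypothetical calibration points: 0.70 V at 25 C, 0.35 V at 200 C
to_temp = calibrate(0.70, 25.0, 0.35, 200.0)

print(round(to_temp(0.525), 1))  # midway voltage reads 112.5 C
```

In practice each on-chip diode would be calibrated individually, since fabrication variation shifts the curve from sensor to sensor.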

Chen and Keech — along with other design, fabrication, and electrical test experts across the laboratory — are now collaborating with HRL Laboratories researchers as they couple the chip with novel cooling technologies, and integrate those technologies into a 3DHI stack that could boost RF signal power. "We need to cool the heat equivalent of more than 190 laptop CPUs [central processing units], but in the size of a single CPU package," Christopher Roper, co-principal investigator at HRL, said in a recent press release announcing their program.

According to Keech, the rapid timeline for delivering the chip was a challenge overcome by teamwork through all phases of the chip's design, fabrication, test, and 3D heterogeneous integration.

"Stacked architectures are considered the next frontier for microelectronics," he says. "We want to help the U.S. government get ahead in finding ways to integrate them effectively and enable the highest performance possible for these chips."

The laboratory team presented this work at the annual Government Microcircuit Applications and Critical Technology Conference (GOMACTech), held March 17-20.


Gene circuits enable more precise control of gene therapy

The circuits could help researchers develop new treatments for fragile X syndrome and other diseases caused by mutations of a single gene.


Many diseases are caused by a missing or defective copy of a single gene. For decades, scientists have been working on gene therapy treatments that could cure such diseases by delivering a new copy of the missing gene to the affected cells.

Despite those efforts, very few gene therapy treatments have been approved by the FDA. One of the challenges to developing these treatments has been achieving control over how much the new gene is expressed in cells — too little and it won’t succeed, too much and it could cause serious side effects.

To help achieve more precise control of gene therapy, MIT engineers have tuned and applied a control circuit that can keep expression levels within a target range. In human cells, they showed that they could use this method to deliver genes that could help treat diseases including fragile X syndrome, a disorder that leads to intellectual disability and other developmental problems.

“In theory, gene supplementation can solve monogenic disorders that are very diverse but have a relatively straightforward gene therapy fix if you could control the therapy well enough,” says Katie Galloway, the W. M. Keck Career Development Professor in Biomedical Engineering and Chemical Engineering and the senior author of the new study.

MIT graduate student Kasey Love is the lead author of the paper, which appears today in Cell Systems. Other authors of the paper include MIT graduate students Christopher Johnstone, Emma Peterman, and Stephanie Gaglione, and Michael Birnbaum, an associate professor of biological engineering at MIT.

Delivering genes

While gene therapy holds promise for treating a variety of diseases, including hemophilia and sickle cell anemia, only a handful of treatments have been approved so far, for an inherited retinal disease and certain blood cancers.

Most gene therapy approaches use a virus to deliver a new copy of a gene, which is then integrated into the DNA of host cells. Some cells may take up many copies of the gene, while others don’t receive any.

“Simple overexpression of that payload can result in a really wide range of expression levels in the target genes as they take up different numbers of copies of those genes or just have different expression levels,” Love says. “If it's not expressing enough, that defeats the purpose of the therapy. But on the other hand, expressing at too high levels is also a problem, as that payload can be toxic.”

To try to overcome this, scientists have experimented with different types of control circuits that constrain expression of the therapeutic gene. In this study, the MIT team decided to use a type of circuit called an incoherent feedforward loop (IFFL).

In an IFFL circuit, activation of the target gene simultaneously activates production of a molecule that suppresses gene expression. One type of molecule that can be used to achieve that suppression is microRNA — a short RNA sequence that binds to messenger RNA, preventing it from being translated into protein.

In this study, the MIT team designed an IFFL circuit, called “ComMAND” (Compact microRNA-mediated attenuator of noise and dosage), so that a microRNA strand that represses mRNA translation is encoded within the therapeutic gene. The microRNA is located within a short segment called an intron, which gets spliced out of the gene when it is transcribed into mRNA. This means that whenever the gene is turned on, both the mRNA and the microRNA that represses it are produced in roughly equal amounts.

This approach allows the researchers to control the entire ComMAND circuit with just one promoter — the DNA site where gene transcription is turned on. By swapping in promoters of different strengths, the researchers can tailor how much of the therapeutic gene will be produced.
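Why this compensates for variable copy number can be seen in a toy steady-state model (my simplification, not the paper's quantitative model): because each gene copy produces both the mRNA and the microRNA that represses it, output grows sublinearly with copy number and saturates, instead of scaling linearly. The rate constants below are arbitrary.

```python
def unregulated(copies, rate=10.0):
    # Without the circuit: protein output scales linearly with gene copies
    return rate * copies

def iffl(copies, rate=10.0, repression=1.0):
    # With the IFFL: microRNA level also scales with copy number and
    # attenuates translation, so output saturates at rate / repression
    return rate * copies / (1.0 + repression * copies)

for n in (1, 5, 20):
    print(n, unregulated(n), round(iffl(n), 1))
# A 20-fold spread in copy number yields less than a 2-fold spread in output
```

The saturation ceiling (here `rate / repression`) is what the promoter swap described above effectively tunes.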

In addition to offering tighter control, the circuit’s compact design allows it to be carried on a single delivery vehicle, such as a lentivirus or adeno-associated virus, which could improve the manufacturability of these therapies. Both of those viruses are frequently used to deliver therapeutic cargoes.

“Other people have developed microRNA-based incoherent feedforward loops, but what Kasey has done is put it all on a single transcript, and she showed that this gives the best possible control when you have variable delivery to cells,” Galloway says.

Precise control

To demonstrate this system, the researchers designed ComMAND circuits that could deliver the gene FXN, which is mutated in Friedreich’s ataxia — a disorder that affects the heart and nervous system. They also delivered the gene Fmr1, whose dysfunction causes fragile X syndrome. In tests in human cells, they showed that they could tune gene expression levels to about eight times the levels normally seen in healthy cells.

Without ComMAND, gene expression was more than 50 times the normal level, which could pose safety risks. Further tests in animal models would be needed to determine the optimal levels, the researchers say.

The researchers also performed tests in rat neurons, mouse fibroblasts, and human T-cells. For those cells, they delivered a gene that encodes a fluorescent protein, so they could easily measure the gene expression levels. In those cells, too, the researchers found that they could control gene expression levels more precisely than without the circuit.

The researchers now plan to study whether they could use this approach to deliver genes at a level that would restore normal function and reverse signs of disease, either in cultured cells or animal models.

“There's probably some tuning that would need to be done to the expression levels, but we understand some of those design principles, so if we needed to tune the levels up or down, I think we'd know potentially how to go about that,” Love says.

Other diseases that this approach could be applied to include Rett syndrome, muscular dystrophy, and spinal muscular atrophy, the researchers say.

“The challenge with a lot of those is they're also rare diseases, so you don't have large patient populations,” Galloway says. “We're trying to build out these tools that are robust so people can figure out how to do the tuning, because the patient populations are so small and there isn't a lot of funding for solving some of these disorders.”

The research was funded by the National Institute of General Medical Sciences, the National Science Foundation, the Institute for Collaborative Biotechnologies, and the Air Force Research Laboratory. 


Novel method detects microbial contamination in cell cultures

Ultraviolet light “fingerprints” on cell cultures and machine learning can provide a definitive yes/no contamination assessment within 30 minutes.


Researchers from the Critical Analytics for Manufacturing Personalized-Medicine (CAMP) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, in collaboration with MIT, A*STAR Skin Research Labs, and the National University of Singapore, have developed a novel method that can quickly and automatically detect and monitor microbial contamination in cell therapy products (CTPs) early on during the manufacturing process. By measuring ultraviolet light absorbance of cell culture fluids and using machine learning to recognize light absorption patterns associated with microbial contamination, this preliminary testing method aims to reduce the overall time taken for sterility testing and, in turn, the time patients need to wait for CTP doses. This is especially crucial for terminally ill patients, for whom timely administration of treatment can be life-saving.
 
Cell therapy represents a promising new frontier in medicine, especially in treating diseases such as cancers, inflammatory diseases, and chronic degenerative disorders by manipulating or replacing cells to restore function or fight disease. However, a major challenge in CTP manufacturing is quickly and effectively ensuring that cells are free from contamination before being administered to patients.
 
Existing sterility testing methods, based on microbiological culture, are labor-intensive and require up to 14 days to detect contamination, which could adversely affect critically ill patients who need immediate treatment. While advanced techniques such as rapid microbiological methods (RMMs) can reduce the testing period to seven days, they still require complex processes such as cell extraction and growth enrichment mediums, and they are highly dependent on skilled workers for procedures such as sample extraction, measurement, and analysis. This creates an urgent need for new methods that offer quicker outcomes without compromising the quality of CTPs, meet the patient-use timeline, and use a simple workflow that does not require additional preparation.
 
In a paper titled “Machine learning aided UV absorbance spectroscopy for microbial contamination in cell therapy products,” published in the journal Scientific Reports, SMART CAMP researchers described how they combined UV absorbance spectroscopy with machine learning to develop a method for label-free, noninvasive, and real-time detection of cell contamination during the early stages of manufacturing.
 
This method offers significant advantages over both traditional sterility tests and RMMs, as it eliminates the need to stain cells to identify labeled organisms, avoids the invasive process of cell extraction, and delivers results in under half an hour. It provides an intuitive, rapid “yes/no” contamination assessment, facilitating automation of cell culture sampling with a simple workflow. Furthermore, the developed method does not require specialized equipment, resulting in lower costs.
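The overall idea of pattern recognition on absorbance spectra can be caricatured in a few lines (synthetic data and a nearest-centroid rule for illustration; the paper's actual features, wavelengths, and model are not reproduced here):

```python
# Treat each UV absorbance spectrum as a feature vector and issue a
# yes/no contamination call by comparing it to class centroids learned
# from reference cultures. All numbers below are made up.
import random

random.seed(0)

def spectrum(contaminated):
    # Hypothetical 10-point absorbance spectrum; contamination adds a
    # broad absorbance offset from microbial growth byproducts.
    base = [0.2 + 0.05 * i for i in range(10)]
    shift = 0.3 if contaminated else 0.0
    return [a + shift + random.gauss(0, 0.02) for a in base]

def centroid(samples):
    # Per-wavelength mean over a set of spectra
    return [sum(col) / len(samples) for col in zip(*samples)]

def classify(x, clean_c, contam_c):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return "contaminated" if dist(contam_c) < dist(clean_c) else "clean"

clean_c = centroid([spectrum(False) for _ in range(20)])
contam_c = centroid([spectrum(True) for _ in range(20)])

print(classify(spectrum(True), clean_c, contam_c))  # contaminated
```

A real deployment would train on measured spectra of known-clean and spiked cultures, and would likely use a richer classifier than a centroid rule; the point is only that the readout reduces to a fast vector comparison rather than days of culturing.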
 
“This rapid, label-free method is designed to be a preliminary step in the CTP manufacturing process as a form of continuous safety testing, which allows users to detect contamination early and implement timely corrective actions, including the use of RMMs only when possible contamination is detected. This approach saves costs, optimizes resource allocation, and ultimately accelerates the overall manufacturing timeline,” says Shruthi Pandi Chelvam, senior research engineer at SMART CAMP and first author of the paper.
 
“Traditionally, cell therapy manufacturing is labor-intensive and subject to operator variability. By introducing automation and machine learning, we hope to streamline cell therapy manufacturing and reduce the risk of contamination. Specifically, our method supports automated cell culture sampling at designated intervals to check for contamination, which reduces manual tasks such as sample extraction, measurement, and analysis. This enables cell cultures to be monitored continuously and contamination to be detected at early stages,” says Rajeev Ram, the Clarence J. LeBel Professor in Electrical Engineering and Computer Science at MIT, a principal investigator at SMART CAMP, and the corresponding author of the paper.
 
Moving forward, research will focus on broadening the application of the method to encompass a wider range of microbial contaminants, specifically those representative of current good manufacturing practices environments and previously identified CTP contaminants. Additionally, the model’s robustness can be tested across more cell types beyond the mesenchymal stromal cells (MSCs) studied here. Beyond cell therapy manufacturing, this method can also be applied to the food and beverage industry as part of microbial quality control testing to ensure food products meet safety standards.

The chemistry of creativity

Senior Madison Wang blends science, history, and art to probe how the world works and the tools we use to explore and understand it.


Senior Madison Wang, a double major in creative writing and chemistry, developed her passion for writing in middle school. Her interest in chemistry fit nicely alongside her commitment to producing engaging narratives. 

Wang believes that world-building in stories supported by science and research can make for a more immersive reader experience.

“In science and in writing, you have to tell an effective story,” she says. “People respond well to stories.”  

A native of Buffalo, New York, Wang applied early action for admission to MIT and learned quickly that the Institute was where she wanted to be. “It was a really good fit,” she says. “There was positive energy and vibes, and I had a great feeling overall.”

The power of science and good storytelling

“Chemistry is practical, complex, and interesting,” says Wang. “It’s about quantifying natural laws and understanding how reality works.”

Chemistry and writing both help us “see the world’s irregularity,” she continues. Together, they can erase the artificial and arbitrary line separating one from the other and work in concert to tell a more complete story about the world, the ways in which we participate in building it, and how people and objects exist in and move through it. 

“Understanding magnetism, material properties, and believing in the power of magic in a good story … these are why we’re drawn to explore,” she says. “Chemistry describes why things are the way they are, and I use it for world-building in my creative writing.”

Wang lauds MIT’s creative writing program and cites a course she took with Comparative Media Studies/Writing Professor and Pulitzer Prize winner Junot Díaz as an affirmation of her choice. Seeing and understanding the world through the eyes of a scientist — its building blocks, the ways the pieces fit and function together — helps explain her passion for chemistry, especially inorganic and physical chemistry.

Wang cites the work of authors like Sam Kean and Knight Science Journalism Program Director Deborah Blum as part of her inspiration to study science. The books “The Disappearing Spoon” by Kean and “The Poisoner’s Handbook” by Blum “both present historical perspectives, opting for a story style to discuss the events and people involved,” she says. “They each put a lot of work into bridging the gap between what can sometimes be sterile science and an effective narrative that gets people to care about why the science matters.”

Genres like fantasy and science fiction are complementary, according to Wang. “Constructing an effective world means ensuring readers understand characters’ motivations — the ‘why’ — and ensuring it makes sense,” she says. “It’s also important to show how actions and their consequences influence and motivate characters.” 

As she explores the world’s building blocks inside and outside the classroom, Wang works to navigate multiple genres in her writing, as with her studies in chemistry. “I like romance and horror, too,” she says. “I have gripes with committing to a single genre, so I just take whatever I like from each and put them in my stories.”

In chemistry, Wang favors an environment in which scientists can regularly test their ideas. “It’s important to ground chemistry in the real world to create connections for students,” she argues. Advancements in the field have occurred, she notes, because scientists could exit the realm of theory and apply ideas practically.

“Fritz Haber’s work on ammonia synthesis revolutionized approaches to food supply chains,” she says, referring to the German chemist and Nobel laureate. “Converting nitrogen and hydrogen gas to ammonia for fertilizer marked a dramatic shift in how farming could work.” This kind of work could only result from the consistent, controlled, practical application of the theories scientists consider in laboratory environments.

A future built on collaboration and cooperation

Watching the world change dramatically and seeing humanity struggle to grapple with the implications of phenomena like climate change, political unrest, and shifting alliances, Wang emphasizes the importance of deconstructing silos in academia and the workplace. Technology can be a tool for harm, she notes, so inviting more people inside previously segregated spaces helps everyone.

Criticism in both chemistry and writing, Wang believes, is a valuable tool for continuous improvement. Effective communication, explaining complex concepts, and partnering to develop long-term solutions are invaluable when working at the intersection of history, art, and science. In writing, Wang says, criticism can help identify areas where writers’ stories can improve and shape interesting ideas.

“We’ve seen the positive results that can occur with effective science writing, which requires rigor and fact-checking,” she says. “MIT’s cross-disciplinary approach to our studies, alongside feedback from teachers and peers, is a great set of tools to carry with us regardless of where we are.”

Wang explores connections between science and stories in her leisure time, too. “I’m a member of MIT’s Anime Club and I enjoy participating in MIT’s Sport Taekwondo Club,” she says. The competitive aspect of tae kwon do allows her to feed her competitive drive and gets her out of her head. Her participation in DAAMIT (Digital Art and Animation at MIT) creates connections with different groups of people and gives her ideas she can use to tell better stories. “It’s fascinating exploring others’ minds,” she says.

Wang argues that there’s a false divide between science and the humanities and wants the work she does after graduation to bridge that divide. “Writing and learning about science can help,” she asserts. “Fields like conservation and history allow for continued exploration of that intersection.”

Ultimately, Wang believes it’s important to examine narratives carefully and to question notions of science’s inherent superiority over humanities fields. “The humanities and science have equal value,” she says.


Six from MIT elected to American Academy of Arts and Sciences for 2025

The prestigious honor society announces nearly 250 new members.


Six MIT faculty members are among the nearly 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced April 23.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

Those elected from MIT in 2025 are:

“These new members’ accomplishments speak volumes about the human capacity for discovery, creativity, leadership, and persistence. They are a stellar testament to the power of knowledge to broaden our horizons and deepen our understanding,” says Academy President Laurie L. Patton. “We invite every new member to celebrate their achievement and join the Academy in our work to promote the common good.”

Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.


Robotic system zeroes in on objects most relevant for helping humans

A new approach could enable intuitive robotic helpers for household, workplace, and warehouse settings.


For a robot, the real world is a lot to take in. Making sense of every data point in a scene can take a huge amount of computational effort and time. Using that information to then decide how to best help a human is an even thornier exercise.

Now, MIT roboticists have a way to cut through the data noise, to help robots focus on the features in a scene that are most relevant for assisting humans.

Their approach, which they aptly dub “Relevance,” enables a robot to use cues in a scene, such as audio and visual information, to determine a human’s objective and then quickly identify the objects that are most likely to be relevant in fulfilling that objective. The robot then carries out a set of maneuvers to safely offer the relevant objects or actions to the human.

The researchers demonstrated the approach with an experiment that simulated a conference breakfast buffet. They set up a table with various fruits, drinks, snacks, and tableware, along with a robotic arm outfitted with a microphone and camera. Applying the new Relevance approach, they showed that the robot was able to correctly identify a human’s objective and appropriately assist them in different scenarios.

In one case, the robot took in visual cues of a human reaching for a can of prepared coffee, and quickly handed the person milk and a stir stick. In another scenario, the robot picked up on a conversation between two people talking about coffee, and offered them a can of coffee and creamer.

Overall, the robot was able to predict a human’s objective with 90 percent accuracy and to identify relevant objects with 96 percent accuracy. The method also improved a robot’s safety, reducing the number of collisions by more than 60 percent, compared to carrying out the same tasks without applying the new method.

“This approach of enabling relevance could make it much easier for a robot to interact with humans,” says Kamal Youcef-Toumi, professor of mechanical engineering at MIT. “A robot wouldn’t have to ask a human so many questions about what they need. It would just actively take information from the scene to figure out how to help.”

Youcef-Toumi’s group is exploring how robots programmed with Relevance can help in smart manufacturing and warehouse settings, where they envision robots working alongside and intuitively assisting humans.

Youcef-Toumi, along with graduate students Xiaotong Zhang and Dingcheng Huang, will present their new method at the IEEE International Conference on Robotics and Automation (ICRA) in May. The work builds on another paper presented at ICRA the previous year.

Finding focus

The team’s approach is inspired by our own ability to gauge what’s relevant in daily life. Humans can filter out distractions and focus on what’s important, thanks to a region of the brain known as the Reticular Activating System (RAS). The RAS is a bundle of neurons in the brainstem that acts subconsciously to prune away unnecessary stimuli, so that a person can consciously perceive the relevant stimuli. The RAS helps to prevent sensory overload, keeping us, for example, from fixating on every single item on a kitchen counter, and instead helping us to focus on pouring a cup of coffee.

“The amazing thing is, these groups of neurons filter everything that is not important, and then it has the brain focus on what is relevant at the time,” Youcef-Toumi explains. “That’s basically what our proposition is.”

He and his team developed a robotic system that broadly mimics the RAS’s ability to selectively process and filter information. The approach consists of four main phases. The first is a watch-and-learn “perception” stage, during which a robot takes in audio and visual cues, for instance from a microphone and camera, that are continuously fed into an AI “toolkit.” This toolkit can include a large language model (LLM) that processes audio conversations to identify keywords and phrases, and various algorithms that detect and classify objects, humans, physical actions, and task objectives. The AI toolkit is designed to run continuously in the background, similarly to the subconscious filtering that the brain’s RAS performs.

The second stage is a “trigger check” phase, which is a periodic check that the system performs to assess if anything important is happening, such as whether a human is present or not. If a human has stepped into the environment, the system’s third phase will kick in. This phase is the heart of the team’s system, which acts to determine the features in the environment that are most likely relevant to assist the human.

To establish relevance, the researchers developed an algorithm that takes in real-time predictions made by the AI toolkit. For instance, the toolkit’s LLM may pick up the keyword “coffee,” and an action-classifying algorithm may label a person reaching for a cup as having the objective of “making coffee.” The team’s Relevance method would factor in this information to first determine the “class” of objects that have the highest probability of being relevant to the objective of “making coffee.” This might automatically filter out classes such as “fruits” and “snacks,” in favor of “cups” and “creamers.” The algorithm would then further filter within the relevant classes to determine the most relevant “elements.” For instance, based on visual cues of the environment, the system may label a cup closest to a person as more relevant — and helpful — than a cup that is farther away.

In the fourth and final phase, the robot would then take the identified relevant objects and plan a path to physically access and offer the objects to the human.
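The four phases can be sketched as a simple decision loop. The objective-to-class mapping and the object list below are invented stand-ins for illustration; in the actual system these predictions come from the LLM and vision algorithms in the AI toolkit.

```python
# Hypothetical sketch of the Relevance pipeline's phases 2-4. The mapping
# and scene contents are made up for illustration, not the authors' code.
OBJECTIVE_TO_CLASSES = {
    "making coffee": {"cups", "creamers"},
    "making cereal": {"bowls", "milk"},
}

def relevance(human_present, objective, objects):
    """objects: list of (name, object_class, distance_to_human) tuples."""
    # Phase 2: trigger check -- do nothing unless a human is present.
    if not human_present:
        return None
    # Phase 3a: keep only object classes likely relevant to the objective.
    classes = OBJECTIVE_TO_CLASSES.get(objective, set())
    candidates = [obj for obj in objects if obj[1] in classes]
    # Phase 3b: within relevant classes, prefer the element closest to the human.
    candidates.sort(key=lambda obj: obj[2])
    # Phase 4 (not shown): plan a safe path to offer this object.
    return candidates[0][0] if candidates else None

scene = [("apple", "fruits", 0.3), ("near cup", "cups", 0.4),
         ("far cup", "cups", 1.2), ("creamer", "creamers", 0.8)]
```

With cues indicating "making coffee," this sketch filters out the fruit entirely and offers the nearest cup first, mirroring the class-then-element filtering described above.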

Helper mode

The researchers tested the new system in experiments that simulate a conference breakfast buffet. They chose this scenario based on the publicly available Breakfast Actions Dataset, which comprises videos and images of typical activities that people perform during breakfast, such as preparing coffee, cooking pancakes, making cereal, and frying eggs. Actions in each video and image are labeled, along with the overall objective (frying eggs versus making coffee, for example).

Using this dataset, the team tested the various algorithms in their AI toolkit so that, given the actions of a person in a new scene, the algorithms could accurately label and classify the human's tasks and objectives, along with the associated relevant objects.

In their experiments, they set up a robotic arm and gripper and instructed the system to assist humans as they approached a table filled with various drinks, snacks, and tableware. They found that when no humans were present, the robot’s AI toolkit operated continuously in the background, labeling and classifying objects on the table.

When, during a trigger check, the robot detected a human, it snapped to attention, turning on its Relevance phase and quickly identifying objects in the scene that were most likely to be relevant, based on the human’s objective, which was determined by the AI toolkit.

“Relevance can guide the robot to generate seamless, intelligent, safe, and efficient assistance in a highly dynamic environment,” says co-author Zhang.

Going forward, the team hopes to apply the system to scenarios that resemble workplace and warehouse environments, as well as to other tasks and objectives typically performed in household settings.

“I would want to test this system in my home to see, for instance, if I’m reading the paper, maybe it can bring me coffee. If I’m doing laundry, it can bring me a laundry pod. If I’m doing repair, it can bring me a screwdriver,” Zhang says. “Our vision is to enable human-robot interactions that can be much more natural and fluent.”

This research was made possible by the support and partnership of King Abdulaziz City for Science and Technology (KACST) through the Center for Complex Engineering Systems at MIT and KACST.


Wearable device tracks individual cells in the bloodstream in real time

The technology, which achieves single-cell resolution, could help in continuous, noninvasive patient assessment to guide medical treatments.


Researchers at MIT have developed a noninvasive medical monitoring device powerful enough to detect single cells within blood vessels, yet small enough to wear like a wristwatch. Importantly, the wearable device enables continuous monitoring of circulating cells in the human body.

The technology was presented online on March 3 by the journal npj Biosensing and is forthcoming in the journal’s print version.

The device — named CircTrek — was developed by researchers in the Nano-Cybernetic Biotrek research group, led by Deblina Sarkar, assistant professor at MIT and AT&T Career Development Chair at the MIT Media Lab. This technology could greatly facilitate early diagnosis of disease, detection of disease relapse, assessment of infection risk, and determination of whether a disease treatment is working, among other medical processes.

Whereas traditional blood tests offer only a snapshot of a patient’s condition, CircTrek was designed for real-time assessment, which the npj Biosensing paper describes as “an unmet goal to date.” A different technology that offers monitoring of cells in the bloodstream with some continuity, in vivo flow cytometry, “requires a room-sized microscope, and patients need to be there for a long time,” says Kyuho Jang, a PhD student in Sarkar’s lab.

CircTrek, on the other hand, which is equipped with an onboard Wi-Fi module, could even monitor a patient’s circulating cells at home and send that information to the patient’s doctor or care team.

“CircTrek offers a path to harnessing previously inaccessible information, enabling timely treatments, and supporting accurate clinical decisions with real-time data,” says Sarkar. “Existing technologies provide monitoring that is not continuous, which can lead to missing critical treatment windows. We overcome this challenge with CircTrek.”

The device works by directing a focused laser beam to stimulate cells beneath the skin that have been fluorescently labeled. Such labeling can be accomplished with a number of methods, including applying antibody-based fluorescent dyes to the cells of interest or genetically modifying such cells so that they express fluorescent proteins.

For example, a patient receiving CAR T cell therapy, in which immune cells are collected and modified in a lab to fight cancer (or, experimentally, to combat HIV or Covid-19), could have those cells labeled at the same time with fluorescent dyes or genetic modification so the cells express fluorescent proteins. Importantly, cells of interest can also be labeled with in vivo labeling methods approved in humans. Once the cells are labeled and circulating in the bloodstream, CircTrek is designed to apply laser pulses to enhance and detect the cells’ fluorescent signal while an arrangement of filters minimizes low-frequency noise such as heartbeats.
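The paper describes this filtering as optomechanical hardware, but the idea can be illustrated with a digital analogy: a first-order high-pass filter that suppresses slow, heartbeat-like oscillations while passing the brief fluorescence burst of a labeled cell. The sampling rate, cutoff, and signal shapes below are invented for illustration.

```python
import numpy as np

def high_pass(x, fs, fc):
    """First-order high-pass filter: attenuates components below fc hertz."""
    dt = 1.0 / fs
    rc = 1.0 / (2 * np.pi * fc)
    alpha = rc / (rc + dt)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = alpha * (y[i - 1] + x[i] - x[i - 1])
    return y

fs = 1000.0                                   # samples per second (hypothetical)
t = np.arange(0, 2, 1 / fs)
heartbeat = np.sin(2 * np.pi * 1.2 * t)       # slow physiological noise, ~1 Hz
cell = np.exp(-((t - 1.0) ** 2) / (2 * 0.001 ** 2))  # brief fluorescence burst
filtered = high_pass(heartbeat + cell, fs, fc=20.0)
```

After filtering, the slow heartbeat component is attenuated to a few percent of its amplitude while the millisecond-scale burst survives largely intact.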

“We optimized the optomechanical parts to reduce noise significantly and only capture the signal from the fluorescent cells,” says Jang.

Detecting the labeled CAR T cells, CircTrek could assess whether the cell therapy treatment is working. As an example, persistence of the CAR T cells in the blood after treatment is associated with better outcomes in patients with B-cell lymphoma.

To keep CircTrek small and wearable, the researchers were able to miniaturize the components of the device, such as the circuit that drives the high-intensity laser source and keeps the power level of the laser stable to avoid false readings.

The sensor that detects the fluorescent signals of the labeled cells is also minute, and yet it is capable of detecting a quantity of light equivalent to a single photon, Jang says.

The device’s subcircuits, including the laser driver and the noise filters, were custom-designed to fit on a circuit board measuring just 42 mm by 35 mm, allowing CircTrek to be approximately the same size as a smartwatch.

CircTrek was tested on an in vitro configuration that simulated blood flow beneath human skin, and its single-cell detection capabilities were verified through manual counting with a high-resolution confocal microscope. For the in vitro testing, a fluorescent dye called Cyanine5.5 was employed. That particular dye was selected because it reaches peak activation at wavelengths within skin tissue’s optical window, or the range of wavelengths that can penetrate the skin with minimal scattering.

The safety of the device, particularly the temperature increase that the laser causes in skin tissue, was also investigated. The laser raised the surface temperature of the experimental skin tissue by 1.51 degrees Celsius, well below the threshold for tissue damage, and with enough margin that the device’s detection area and power could be safely increased to ensure that at least one blood vessel is observed.

While clinical translation of CircTrek will require further steps, Jang says its parameters can be modified to broaden its potential, so that doctors could be provided with critical information on nearly any patient.


New electronic “skin” could enable lightweight night-vision glasses

MIT engineers developed ultrathin electronic films that sense heat and other signals, and could reduce the bulk of conventional goggles and scopes.


MIT engineers have developed a technique to grow and peel ultrathin “skins” of electronic material. The method could pave the way for new classes of electronic devices, such as ultrathin wearable sensors, flexible transistors and computing elements, and highly sensitive and compact imaging devices. 

As a demonstration, the team fabricated a thin membrane of pyroelectric material — a class of heat-sensing material that produces an electric current in response to changes in temperature. The thinner the pyroelectric material, the better it is at sensing subtle thermal variations.
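The physics behind that claim is textbook pyroelectricity, not specific to this work: the current scales as i = p·A·(dT/dt), and for a given absorbed power a thinner film has less heat capacity, so its temperature changes faster. A minimal sketch, with hypothetical parameter values:

```python
def pyro_current(p_coeff, area_m2, dT_dt):
    """Textbook pyroelectric response: i = p * A * dT/dt, in amperes."""
    return p_coeff * area_m2 * dT_dt

def temperature_rate(absorbed_power_w, vol_heat_capacity, area_m2, thickness_m):
    """dT/dt for a film absorbing a fixed power: a thinner film heats faster."""
    return absorbed_power_w / (vol_heat_capacity * area_m2 * thickness_m)

# Hypothetical numbers purely for illustration (not PMN-PT data):
p, area, power, c_v = 1e-4, 3.6e-9, 1e-6, 2.5e6
i_thin = pyro_current(p, area, temperature_rate(power, c_v, area, 5e-9))
i_thick = pyro_current(p, area, temperature_rate(power, c_v, area, 10e-9))
# Halving the thickness doubles dT/dt, and hence the signal current.
```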

With their new method, the team fabricated the thinnest pyroelectric membrane yet, measuring 10 nanometers thick, and demonstrated that the film is highly sensitive to heat and radiation across the far-infrared spectrum.

The newly developed film could enable lighter, more portable, and highly accurate far-infrared (IR) sensing devices, with potential applications for night-vision eyewear and autonomous driving in foggy conditions. Current state-of-the-art far-IR sensors require bulky cooling elements. In contrast, the new pyroelectric thin film requires no cooling and is sensitive to much smaller changes in temperature. The researchers are exploring ways to incorporate the film into lighter, higher-precision night-vision glasses.

“This film considerably reduces weight and cost, making it lightweight, portable, and easier to integrate,” says Xinyuan Zhang, a graduate student in MIT’s Department of Materials Science and Engineering (DMSE). “For example, it could be directly worn on glasses.”

The heat-sensing film could also have applications in environmental and biological sensing, as well as imaging of astrophysical phenomena that emit far-infrared radiation.

What’s more, the new lift-off technique is generalizable beyond pyroelectric materials. The researchers plan to apply the method to make other ultrathin, high-performance semiconducting films.

Their results are reported today in a paper appearing in the journal Nature. The study’s MIT co-authors are first author Xinyuan Zhang, Sangho Lee, Min-Kyu Song, Haihui Lan, Jun Min Suh, Jung-El Ryu, Yanjie Shao, Xudong Zheng, Ne Myo Han, and Jeehwan Kim, associate professor of mechanical engineering and of materials science and engineering, along with researchers at the University of Wisconsin-Madison led by Professor Chang-Beom Eom, and authors from multiple other institutions.

Chemical peel

Kim’s group at MIT is finding new ways to make smaller, thinner, and more flexible electronics. They envision that such ultrathin computing “skins” can be incorporated into everything from smart contact lenses and wearable sensing fabrics to stretchy solar cells and bendable displays. To realize such devices, Kim and his colleagues have been experimenting with methods to grow, peel, and stack semiconducting elements, to fabricate ultrathin, multifunctional electronic thin-film membranes.

One method that Kim has pioneered is “remote epitaxy” — a technique where semiconducting materials are grown on a single-crystalline substrate, with an ultrathin layer of graphene in between. The substrate’s crystal structure serves as a scaffold along which the new material can grow. The graphene acts as a nonstick layer, similar to Teflon, making it easy for researchers to peel off the new film and transfer it onto flexible and stacked electronic devices. After peeling off the new film, the underlying substrate can be reused to make additional thin films.

Kim has applied remote epitaxy to fabricate thin films with various characteristics. In trying different combinations of semiconducting elements, the researchers happened to notice that a certain pyroelectric material, called PMN-PT, did not require an intermediate layer in order to separate from its substrate. Just by growing PMN-PT directly on a single-crystalline substrate, the researchers could then remove the grown film, with no rips or tears to its delicate lattice.

“It worked surprisingly well,” Zhang says. “We found the peeled film is atomically smooth.”

Lattice lift-off

In their new study, the MIT and UW-Madison researchers took a closer look at the process and discovered that the key to the material’s easy-peel property was lead. The team, along with colleagues at Rensselaer Polytechnic Institute, found that as part of its chemical structure, the pyroelectric film contains an orderly arrangement of lead atoms with a large “electron affinity”: lead attracts electrons and prevents charge carriers from traveling into and bonding with another material, such as an underlying substrate. The lead atoms act as tiny nonstick units, allowing the material as a whole to peel away, perfectly intact.

The team ran with the realization and fabricated multiple ultrathin films of PMN-PT, each about 10 nanometers thin. They peeled off the pyroelectric films and transferred them onto a small chip to form an array of 100 ultrathin heat-sensing pixels, each about 60 square microns in area. They exposed the films to ever-slighter changes in temperature and found the pixels were highly sensitive to small changes across the far-infrared spectrum.

The sensitivity of the pyroelectric array is comparable to that of state-of-the-art night-vision devices. These devices are currently based on photodetector materials, in which a change in temperature induces the material’s electrons to jump in energy and briefly cross an energy “band gap,” before settling back into their ground state. This electron jump serves as an electrical signal of the temperature change. However, this signal can be affected by noise in the environment, and to prevent such effects, photodetectors have to also include cooling devices that bring the instruments down to liquid nitrogen temperatures.

Current night-vision goggles and scopes are heavy and bulky. With the group’s new pyroelectric-based approach, night-vision devices could offer the same sensitivity without the weight of cooling hardware.

The researchers also found that the films were sensitive beyond the range of current night-vision devices and could respond to wavelengths across the entire infrared spectrum. This suggests that the films could be incorporated into small, lightweight, and portable devices for various applications that require different infrared regions. For instance, when integrated into autonomous vehicle platforms, the films could enable cars to “see” pedestrians and vehicles in complete darkness or in foggy and rainy conditions. 

The film could also be used in gas sensors for real-time and on-site environmental monitoring, helping detect pollutants. In electronics, they could monitor heat changes in semiconductor chips to catch early signs of malfunctioning elements.

The team says the new lift-off method can be generalized to materials that may not themselves contain lead. In those cases, the researchers suspect that they can infuse Teflon-like lead atoms into the underlying substrate to induce a similar peel-off effect. For now, the team is actively working toward incorporating the pyroelectric films into a functional night-vision system.

“We envision that our ultrathin films could be made into high-performance night-vision goggles, considering its broad-spectrum infrared sensitivity at room-temperature, which allows for a lightweight design without a cooling system,” Zhang says. “To turn this into a night-vision system, a functional device array should be integrated with readout circuitry. Furthermore, testing in varied environmental conditions is essential for practical applications.”

This work was supported by the U.S. Air Force Office of Scientific Research.


New model predicts a chemical reaction’s point of no return

Chemists could use this quick computational method to design more efficient reactions that yield useful compounds, from fuels to pharmaceuticals.


When chemists design new chemical reactions, one useful piece of information is the reaction’s transition state — the point of no return beyond which a reaction must proceed.

This information allows chemists to try to produce the right conditions that will allow the desired reaction to occur. However, current methods for predicting the transition state and the path that a chemical reaction will take are complicated and require a huge amount of computational power.

MIT researchers have now developed a machine-learning model that can make these predictions in less than a second, with high accuracy. Their model could make it easier for chemists to design chemical reactions that could generate a variety of useful compounds, such as pharmaceuticals or fuels.

“We’d like to be able to ultimately design processes to take abundant natural resources and turn them into molecules that we need, such as materials and therapeutic drugs. Computational chemistry is really important for figuring out how to design more sustainable processes to get us from reactants to products,” says Heather Kulik, the Lammot du Pont Professor of Chemical Engineering, a professor of chemistry, and the senior author of the new study.

Former MIT graduate student Chenru Duan PhD ’22, who is now at Deep Principle; former Georgia Tech graduate student Guan-Horng Liu, who is now at Meta; and Cornell University graduate student Yuanqi Du are the lead authors of the paper, which appears today in Nature Machine Intelligence.

Better estimates

For any given chemical reaction to occur, it must go through a transition state, which takes place when it reaches the energy threshold needed for the reaction to proceed. These transition states are so fleeting that they’re nearly impossible to observe experimentally.

As an alternative, researchers can calculate the structures of transition states using techniques based on quantum chemistry. However, that process requires a great deal of computing power and can take hours or days to calculate a single transition state.

“Ideally, we’d like to be able to use computational chemistry to design more sustainable processes, but this computation in itself is a huge use of energy and resources in finding these transition states,” Kulik says.

In 2023, Kulik, Duan, and others reported on a machine-learning strategy that they developed to predict the transition states of reactions. This strategy is faster than using quantum chemistry techniques, but still slower than what would be ideal because it requires the model to generate about 40 structures, then run those predictions through a “confidence model” to predict which states were most likely to occur.

One reason why that model needs to be run so many times is that it uses randomly generated guesses for the starting point of the transition state structure, then performs dozens of calculations until it reaches its final, best guess. These randomly generated starting points may be very far from the actual transition state, which is why so many steps are needed.

The researchers’ new model, React-OT, described in the Nature Machine Intelligence paper, uses a different strategy. In this work, the researchers trained their model to begin from an estimate of the transition state generated by linear interpolation — a technique that estimates each atom’s position by moving it halfway between its position in the reactants and in the products, in three-dimensional space.
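A minimal sketch of that interpolation step (the starting guess only, not React-OT itself), assuming both geometries are supplied as aligned (n_atoms, 3) coordinate arrays:

```python
import numpy as np

def linear_ts_guess(reactant_xyz, product_xyz):
    """Place each atom halfway between its reactant and product positions."""
    r = np.asarray(reactant_xyz, dtype=float)
    p = np.asarray(product_xyz, dtype=float)
    return 0.5 * (r + p)

# Toy two-atom example; coordinates (in angstroms) are made up:
reactant = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
product = [[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]]
guess = linear_ts_guess(reactant, product)  # second atom lands at x = 1.5
```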

“A linear guess is a good starting point for approximating where that transition state will end up,” Kulik says. “What the model’s doing is starting from a much better initial guess than just a completely random guess, as in the prior work.”

Because of this, it takes the model fewer steps and less time to generate a prediction. In the new study, the researchers showed that their model could make predictions with only about five steps, taking about 0.4 seconds. These predictions don’t need to be fed through a confidence model, and they are about 25 percent more accurate than the predictions generated by the previous model.

“That really makes React-OT a practical model that we can directly integrate to the existing computational workflow in high-throughput screening to generate optimal transition state structures,” Duan says.

“A wide array of chemistry”

To create React-OT, the researchers trained it on the same dataset that they used to train their older model. These data contain structures of reactants, products, and transition states, calculated using quantum chemistry methods, for 9,000 different chemical reactions, mostly involving small organic or inorganic molecules.

Once trained, the model performed well on other reactions from this set, which had been held out of the training data. It also performed well on other types of reactions that it hadn’t been trained on, and could make accurate predictions involving reactions with larger reactants, which often have side chains that aren’t directly involved in the reaction.

“This is important because there are a lot of polymerization reactions where you have a big macromolecule, but the reaction is occurring in just one part. Having a model that generalizes across different system sizes means that it can tackle a wide array of chemistry,” Kulik says.

The researchers are now working on training the model so that it can predict transition states for reactions between molecules that include additional elements, including sulfur, phosphorus, chlorine, silicon, and lithium.

“To quickly predict transition state structures is key to all chemical understanding,” says Markus Reiher, a professor of theoretical chemistry at ETH Zurich, who was not involved in the study. “The new approach presented in the paper could very much accelerate our search and optimization processes, bringing us faster to our final result. As a consequence, also less energy will be consumed in these high-performance computing campaigns. Any progress that accelerates this optimization benefits all sorts of computational chemical research.”

The MIT team hopes that other scientists will make use of their approach in designing their own reactions, and have created an app for that purpose.

“Whenever you have a reactant and product, you can put them into the model and it will generate the transition state, from which you can estimate the energy barrier of your intended reaction, and see how likely it is to occur,” Duan says.
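Turning that barrier into a rough rate estimate is standard transition-state theory, not part of React-OT; the Eyring equation gives k = (kB·T/h)·exp(-ΔG‡/RT):

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_rate(barrier_kj_per_mol, temperature_k=298.15):
    """Eyring equation: rate constant in 1/s from a free-energy barrier."""
    return (KB * temperature_k / H) * math.exp(
        -barrier_kj_per_mol * 1e3 / (R * temperature_k))
```

At room temperature a zero-barrier reaction proceeds at the attempt frequency kB·T/h, about 6 × 10^12 per second, and each additional ~5.7 kJ/mol of barrier cuts the rate roughly tenfold.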

The research was funded by the U.S. Army Research Office, the U.S. Department of Defense Basic Research Office, the U.S. Air Force Office of Scientific Research, the National Science Foundation, and the U.S. Office of Naval Research.


MIT engineers print synthetic “metamaterials” that are both strong and stretchy

A new method could enable stretchable ceramics, glass, and metals, for tear-proof textiles or stretchy semiconductors.


In metamaterials design, the name of the game has long been “stronger is better.”

Metamaterials are synthetic materials with microscopic structures that give the overall material exceptional properties. A huge focus has been in designing metamaterials that are stronger and stiffer than their conventional counterparts. But there’s a trade-off: The stiffer a material, the less flexible it is.

MIT engineers have now found a way to fabricate a metamaterial that is both strong and stretchy. The base material is typically highly rigid and brittle, but it is printed in precise, intricate patterns that form a structure that is both strong and flexible.

The key to the new material’s dual properties is a combination of stiff microscopic struts and a softer woven architecture. This microscopic “double network,” which is printed using a plexiglass-like polymer, produced a material that could stretch over four times its size without fully breaking. In comparison, the polymer in other forms has little to no stretch and shatters easily once cracked.


The researchers say the new double-network design can be applied to other materials, for instance to fabricate stretchy ceramics, glass, and metals. Such tough yet bendy materials could be made into tear-resistant textiles, flexible semiconductors, electronic chip packaging, and durable yet compliant scaffolds on which to grow cells for tissue repair.

“We are opening up this new territory for metamaterials,” says Carlos Portela, the Robert N. Noyce Career Development Associate Professor at MIT. “You could print a double-network metal or ceramic, and you could get a lot of these benefits, in that it would take more energy to break them, and they would be significantly more stretchable.”

Portela and his colleagues report their findings today in the journal Nature Materials. His MIT co-authors include first author James Utama Surjadi as well as Bastien Aymon and Molly Carton.

Inspired gel

Along with other research groups, Portela and his colleagues have typically designed metamaterials by printing or nanofabricating microscopic lattices using conventional polymers similar to plexiglass and ceramic. The specific pattern, or architecture, that they print can impart exceptional strength and impact resistance to the resulting metamaterial.

Several years ago, Portela was curious whether a metamaterial could be made from an inherently stiff material, but be patterned in a way that would turn it into a much softer, stretchier version.

“We realized that the field of metamaterials has not really tried to make an impact in the soft matter realm,” he says. “So far, we’ve all been looking for the stiffest and strongest materials possible.”

Instead, he looked for a way to synthesize softer, stretchier metamaterials. Rather than printing microscopic struts and trusses, similar to those of conventional lattice-based metamaterials, he and his team made an architecture of interwoven springs, or coils. They found that, while the material they used was itself stiff like plexiglass, the resulting woven metamaterial was soft and springy, like rubber.

“They were stretchy, but too soft and compliant,” Portela recalls.

In looking for ways to bulk up their softer metamaterial, the team found inspiration in an entirely different material: hydrogel. Hydrogels are soft, stretchy, Jell-O-like materials that are composed of mostly water and a bit of polymer structure. Researchers including groups at MIT have devised ways to make hydrogels that are both soft and stretchy, and also tough. They do so by combining polymer networks with very different properties, such as a network of molecules that is naturally stiff, which gets chemically cross-linked with another molecular network that is inherently soft. Portela and his colleagues wondered whether such a double-network design could be adapted to metamaterials.

“That was our ‘aha’ moment,” Portela says. “We thought: Can we get inspiration from these hydrogels to create a metamaterial with similar stiff and stretchy properties?”

Strut and weave

For their new study, the team fabricated a metamaterial by combining two microscopic architectures. The first is a rigid, grid-like scaffold of struts and trusses. The second is a pattern of coils that weave around each strut and truss. Both networks are made from the same acrylic plastic and are printed in one go, using a high-precision, laser-based printing technique called two-photon lithography.

The researchers printed samples of the new double-network-inspired metamaterial, each measuring in size from several square microns to several square millimeters. They put the material through a series of stress tests, in which they attached either end of the sample to a specialized nanomechanical press and measured the force it took to pull the material apart. They also recorded high-resolution videos to observe the locations and ways in which the material stretched and tore as it was pulled apart.

They found their new double-network design was able to stretch to three times its own length, about 10 times farther than a conventional lattice-patterned metamaterial printed with the same acrylic plastic. Portela says the new material’s stretch and tear resistance comes from the interactions between the material’s rigid struts and the messier, coiled weave as the material is stressed and pulled.

“Think of this woven network as a mess of spaghetti tangled around a lattice. As we break the monolithic lattice network, those broken parts come along for the ride, and now all this spaghetti gets entangled with the lattice pieces,” Portela explains. “That promotes more entanglement between woven fibers, which means you have more friction and more energy dissipation.”

In other words, the softer structure wound throughout the material’s rigid lattice takes on more stress thanks to multiple knots or entanglements promoted by the cracked struts. As this stress spreads unevenly through the material, an initial crack is unlikely to go straight through and quickly tear the material. What’s more, the team found that if they introduced strategic holes, or “defects,” in the metamaterial, they could further dissipate any stress that the material undergoes, making it even stretchier and more resistant to tearing apart.

“You might think this makes the material worse,” says study co-author Surjadi. “But we saw once we started adding defects, we doubled the amount of stretch we were able to do, and tripled the amount of energy that we dissipated. That gives us a material that’s both stiff and tough, which is usually a contradiction.”

The team has developed a computational framework that can help engineers estimate how a metamaterial will perform given the pattern of its stiff and stretchy networks. They envision such a blueprint will be useful in designing tear-proof textiles and fabrics.

“We also want to try this approach on more brittle materials, to give them multifunctionality,” Portela says. “So far we’ve talked of mechanical properties, but what if we could also make them conductive, or responsive to temperature? For that, the two networks could be made from different polymers that respond to temperature in different ways, so that a fabric can open its pores or become more compliant when it’s warm and can be more rigid when it’s cold. That’s something we can explore now.”

This research was supported, in part, by the U.S. National Science Foundation, and the MIT MechE MathWorks Seed Fund. This work was performed, in part, through the use of MIT.nano’s facilities.


MIT D-Lab spinout provides emergency transportation during childbirth

Moving Health has developed an emergency transportation network using motorized ambulances in rural regions of Ghana.


Amama has lived in a rural region of northern Ghana all her life. In 2022, she went into labor with her first child. Women in the region traditionally give birth at home with the help of a local birthing attendant, but Amama experienced last-minute complications, and the decision was made to go to a hospital. Unfortunately, there were no ambulances in the community and the nearest hospital was 30 minutes away, so Amama was forced to take a motorcycle taxi, leaving her husband and caregiver behind.

Amama spent the next 30 minutes traveling over bumpy dirt roads to get to the hospital. She was in pain and afraid. When she arrived, she learned her child had not survived.

Unfortunately, Amama’s story is not unique. Around the world, more than 700 women die every day due to preventable pregnancy and childbirth complications. A lack of transportation to hospitals contributes to those deaths.

Moving Health was founded by MIT students to give people like Amama a safer way to get to the hospital. The company, which was started as part of a class at MIT D-Lab, works with local communities in rural Ghana to offer a network of motorized tricycle ambulances to communities that lack emergency transportation options.

The locally made ambulances are designed for the challenging terrain of rural Ghana, equipped with medical supplies, and have space for caregivers and family members.

“We’re providing the first rural-focused emergency transportation network,” says Moving Health CEO and co-founder Emily Young ’18. “We’re trying to provide emergency transportation coverage for less cost and with a vehicle tailored to local needs. When we first started, a report estimated there were 55 ambulances in the country of over 30 million people. Now, there is more coverage, but still the last mile areas of the country do not have access to reliable emergency transportation.”

Today, Moving Health’s ambulances and emergency transportation network cover more than 100,000 people in northern Ghana who previously lacked reliable medical transportation.

One of those people is Amama. During her most recent pregnancy, she was able to take a Moving Health ambulance to the hospital. This time, she traveled in a sanitary environment equipped with medical supplies and surrounded by loved ones. When she arrived, she gave birth to healthy twins.

From class project to company

Young and Sade Nabahe ’17, SM ’21 met while taking Course 2.722J (D-Lab: Design), which challenges students to think like engineering consultants on international projects. Their group worked on ways to transport pregnant women in remote areas of Tanzania to hospitals more safely and quickly. Young credits D-Lab instructor Matt McCambridge with helping students explore the project outside of class. Fellow Moving Health co-founder Eva Boal ’18 joined the effort the following year.

The early idea was to build a trailer that could attach to any motorcycle and be used to transport women. Following the early class projects, the students received funding from MIT’s PKG Center and the MIT Undergraduate Giving Campaign, which they used to travel to Tanzania in the following year’s Independent Activities Period (IAP). That’s when they built their first prototype in the field.

The founders realized they needed to better understand the problem from the perspective of locals and interviewed over 250 pregnant women, clinicians, motorcycle drivers, and birth attendants.

“We wanted to make sure the community was leading the charge to design what this solution should be. We had to learn more from the community about why emergency transportation doesn’t work in these areas,” Young says. “We ended up redesigning our vehicle completely.”

Following their graduation from MIT in 2018, the founders bought one-way tickets to Tanzania and deployed a new prototype. A big part of their plans was creating a product that could be manufactured by the community to support the local economy.

Nabahe and Boal left the company in 2020, but word spread of Moving Health’s mission, and Young received messages from organizations in about 15 different countries interested in expanding the company’s trials.

Young found the most alignment in Ghana, where she met two local engineers, Ambra Jiberu and Sufiyanu Imoro, who were building cars from scratch and inventing innovative agricultural technologies. With these two engineers joining the team, she was confident they had the team to build a solution in Ghana.

Taking what they’d learned in Tanzania, the new team set up hundreds of interviews and focus groups to understand the Ghanaian health system. The team redesigned their product to be a fully motorized tricycle based on the most common mode of transportation in northern Ghana. Today Moving Health focuses solely on Ghana, with local manufacturing and day-to-day operations led by Country Director and CTO Isaac Quansah. The Moving Health team continued their connection to MIT after being selected by MIT Solve for the 2022 Equitable Health Systems Challenge.

Moving Health is focused on building a holistic emergency transportation network. To do this, Moving Health’s team sets up community-run dispatch systems, which involves organizing emergency phone numbers, training community health workers, dispatchers, and drivers, and integrating all of that within the existing health care system. The company also conducts educational campaigns in the communities it serves.

Moving Health officially launched its ambulances in 2023. The ambulance has an enclosed space for patients, family members, and medical providers and includes a removable stretcher along with supplies like first aid equipment, oxygen, IVs, and more. It costs about one-tenth the price of a traditional ambulance.

“We’ve built a really cool, small-volume manufacturing facility, led by our local engineering team, that has incredible quality,” Young says. “We also have an apprenticeship program that our two lead engineers run that allows young people to learn more hard skills. We want to make sure we’re providing economic opportunities in these communities. It’s very much a Ghanaian-made solution.”

Unlike the national ambulances, Moving Health’s ambulances are stationed in rural communities, at community health centers, to enable faster response times.

“When the ambulances are stationed in these people’s communities, at their local health centers, it makes all the difference,” Young says. “We’re trying to create an emergency transportation solution that is not only geared toward rural areas, but also focused on pregnancy and prioritizing women’s voices about what actually works in these areas.”

A lifeline for mothers

When Young first got to Ghana, she met Sahada, a local woman who shared the story of her first birth at the age of 18. Sahada had intended to give birth in her community with the help of a local birthing attendant, but she began experiencing so much pain during labor the attendant advised her to go to the nearest hospital. With no ambulances or vehicles in town, Sahada’s husband called a motorcycle driver, who took her alone on the three-hour drive to the nearest hospital.

“It was rainy, extremely muddy, and she was in a lot of pain,” Young recounts. “She was already really worried for her baby, and then the bike slips and they crash. They get back on, covered in mud, she has no idea if the baby survived, and finally gets to the maternity ward.”

Sahada was able to give birth to a healthy baby boy, but her story stuck with Young.

“The experience was extremely traumatic, and what’s really crazy is that counts as a successful birth statistic,” Young says. “We hear that kind of story a lot.”

This year, Moving Health plans to expand into a new region of northern Ghana. The team is also exploring other ways their network can provide health care to rural regions. But no matter how the company evolves, the team remains grateful to have seen its D-Lab project turn into such an impactful solution.

“Our long-term vision is to prove that this can work on a national level and supplement the existing health system,” Young says. “Then we’re excited to explore mobile health care outreach and other transportation solutions. We’ve always been focused on maternal health, but we’re staying cognizant of other community ideas that might be able to help improve health care more broadly.”


“Periodic table of machine learning” could fuel AI discovery

Researchers have created a unifying framework that can help scientists combine existing ideas to improve AI models or create new ones.


MIT researchers have created a periodic table that shows how more than 20 classical machine-learning algorithms are connected. The new framework sheds light on how scientists could fuse strategies from different methods to improve existing AI models or come up with new ones.

For instance, the researchers used their framework to combine elements of two different algorithms to create a new image-classification algorithm that performed 8 percent better than current state-of-the-art approaches.

The periodic table stems from one key idea: All these algorithms learn a specific kind of relationship between data points. While each algorithm may accomplish that in a slightly different way, the core mathematics behind each approach is the same.

Building on these insights, the researchers identified a unifying equation that underlies many classical AI algorithms. They used that equation to reframe popular methods and arrange them into a table, categorizing each based on the approximate relationships it learns.

Just like the periodic table of chemical elements, which initially contained blank squares that were later filled in by scientists, the periodic table of machine learning also has empty spaces. These spaces predict where algorithms should exist but have not yet been discovered.

The table gives researchers a toolkit to design new algorithms without the need to rediscover ideas from prior approaches, says Shaden Alshammari, an MIT graduate student and lead author of a paper on this new framework.

“It’s not just a metaphor,” adds Alshammari. “We’re starting to see machine learning as a system with structure that is a space we can explore rather than just guess our way through.”

She is joined on the paper by John Hershey, a researcher at Google AI Perception; Axel Feldmann, an MIT graduate student; William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Mark Hamilton, an MIT graduate student and senior engineering manager at Microsoft. The research will be presented at the International Conference on Learning Representations.

An accidental equation

The researchers didn’t set out to create a periodic table of machine learning.

After joining the Freeman Lab, Alshammari began studying clustering, a machine-learning technique that classifies images by learning to organize similar images into nearby clusters.

She realized the clustering algorithm she was studying was similar to another classical machine-learning algorithm, called contrastive learning, and began digging deeper into the mathematics. Alshammari found that these two disparate algorithms could be reframed using the same underlying equation.

“We almost got to this unifying equation by accident. Once Shaden discovered that it connects two methods, we just started dreaming up new methods to bring into this framework. Almost every single one we tried could be added in,” Hamilton says.

The framework they created, information contrastive learning (I-Con), shows how a variety of algorithms can be viewed through the lens of this unifying equation. It includes everything from classification algorithms that can detect spam to the deep learning algorithms that power large language models.

The equation describes how such algorithms find connections between real data points and then approximate those connections internally.

Each algorithm aims to minimize the amount of deviation between the connections it learns to approximate and the real connections in its training data.
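In schematic terms, that idea can be written as a single objective. This is our own shorthand, a sketch of the concept rather than the paper’s exact formulation:

```latex
% L: overall loss the algorithm minimizes
% p(j|i):        how strongly data point i truly connects to point j
% q_theta(j|i):  the algorithm's learned approximation of that connection
\mathcal{L}(\theta) \;=\; \sum_{i} D_{\mathrm{KL}}\!\left( p(j \mid i) \;\Big\|\; q_{\theta}(j \mid i) \right)
```

Different choices of the "real" connection distribution and of the family of approximations then yield different classical algorithms.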

They decided to organize I-Con into a periodic table to categorize algorithms based on how points are connected in real datasets and the primary ways algorithms can approximate those connections.

“The work went gradually, but once we had identified the general structure of this equation, it was easier to add more methods to our framework,” Alshammari says.

A tool for discovery

As they arranged the table, the researchers began to see gaps where algorithms could exist but hadn’t yet been invented.

The researchers filled in one gap by borrowing ideas from a machine-learning technique called contrastive learning and applying them to image clustering. This resulted in a new algorithm that could classify unlabeled images 8 percent better than another state-of-the-art approach.

They also used I-Con to show how a data debiasing technique developed for contrastive learning could be used to boost the accuracy of clustering algorithms.

In addition, the flexible periodic table allows researchers to add new rows and columns to represent additional types of datapoint connections.

Ultimately, having I-Con as a guide could help machine learning scientists think outside the box, encouraging them to combine ideas in ways they wouldn’t necessarily have thought of otherwise, says Hamilton.

“We’ve shown that just one very elegant equation, rooted in the science of information, gives you rich algorithms spanning 100 years of research in machine learning. This opens up many new avenues for discovery,” he adds.

“Perhaps the most challenging aspect of being a machine-learning researcher these days is the seemingly unlimited number of papers that appear each year. In this context, papers that unify and connect existing algorithms are of great importance, yet they are extremely rare. I-Con provides an excellent example of such a unifying approach and will hopefully inspire others to apply a similar approach to other domains of machine learning,” says Yair Weiss, a professor in the School of Computer Science and Engineering at the Hebrew University of Jerusalem, who was not involved in this research.

This research was funded, in part, by the Air Force Artificial Intelligence Accelerator, the National Science Foundation AI Institute for Artificial Intelligence and Fundamental Interactions, and Quanta Computer.


Kripa Varanasi named faculty director of the Deshpande Center for Technological Innovation

The interfacial engineering expert and prolific entrepreneur will help faculty and students take breakthroughs from lab to market.


Kripa Varanasi, professor of mechanical engineering, was named faculty director of the MIT Deshpande Center for Technological Innovation, effective March 1.

“Kripa is widely recognized for his significant contributions in the field of interfacial science, thermal fluids, electrochemical systems, and advanced materials. It’s remarkable to see the tangible impact Kripa’s ventures have made across such a wide range of fields,” says Anantha P. Chandrakasan, dean of the School of Engineering, chief innovation and strategy officer, and Vannevar Bush Professor of Electrical Engineering and Computer Science. “From energy and water conservation to consumer products and agriculture, his solutions are making a real difference. The Deshpande Center will benefit greatly from both his entrepreneurial expertise and deep technical insight.”

The MIT Deshpande Center for Technological Innovation is an interdepartmental center that empowers MIT students and faculty to make a difference in the world by helping them bring their innovative technologies from the lab to the marketplace in the form of breakthrough products and new companies. The center was established through a gift from philanthropist Gururaj “Desh” Deshpande and his wife, Jaishree.

“Kripa brings an entrepreneurial spirit, innovative thinking, and commitment to mentorship that has always been central to the Deshpande Center’s mission,” says Deshpande. “He is exceptionally well-positioned to help the next generation of MIT innovators turn bold ideas into real-world solutions that make a difference.”

Varanasi has seen the Deshpande Center’s influence on the MIT community since its founding in 2002, when he was a graduate student.

“The Deshpande Center was founded when I was a graduate student, and it truly inspired many of us to think about entrepreneurship and commercialization — with Desh himself being an incredible role model,” says Varanasi. “Over the years, the center has built a storied legacy as a one-of-a-kind institution for propelling university-invented technologies to commercialization. Many amazing companies have come out of this program, shaping industries and making a real impact.”

A member of the MIT faculty since 2009, Varanasi leads the interdisciplinary Varanasi Research Group, which focuses on understanding physico-chemical and biological phenomena at the interfaces of matter. His group develops novel surfaces, materials, and technologies that improve efficiency and performance across industries, including energy, decarbonization, life sciences, water, agriculture, transportation, and consumer products.

In addition to his academic work, Varanasi is a prolific entrepreneur who has co-founded six companies, including AgZen, Alsym Energy, CoFlo Medical, Dropwise, Infinite Cooling, and LiquiGlide, which was a Deshpande Center grantee in 2009. These ventures aim to translate research breakthroughs into products with global reach.

His companies have been widely recognized for driving innovation across a range of industries. LiquiGlide, which produces frictionless liquid coatings, was named one of Time and Forbes’ “Best Inventions of the Year” in 2012. Infinite Cooling, which offers a technology to capture and recycle power plant water vapor, has won the U.S. Department of Energy’s National Cleantech University Prize and top prizes at MassChallenge and the MIT $100K competition. It is also a participating company at this year’s IdeaStream: Next Gen event, hosted by the Deshpande Center.

Another company that Varanasi co-founded, AgZen, is pioneering feedback optimization for agrochemical application that allows farmers to use 30-90 percent less pesticide and fertilizer while achieving 1-10 percent more yield. Meanwhile, Alsym Energy is advancing nonflammable, high-performance batteries for energy storage solutions that are lithium-free and capable of a wide range of storage durations.

Throughout his career, Varanasi has been recognized for both research excellence and mentorship. His honors include the National Science Foundation CAREER Award, DARPA Young Faculty Award, SME Outstanding Young Manufacturing Engineer Award, ASME’s Bergles-Rohsenow Heat Transfer Award and Gustus L. Larson Memorial Award, Boston Business Journal’s 40 Under 40, and MIT’s Frank E. Perkins Award for Excellence in Graduate Advising​.

Varanasi earned his undergraduate degree in mechanical engineering from the Indian Institute of Technology Madras, and his master’s degree and PhD from MIT. Prior to joining the Institute’s faculty, he served as lead researcher and project leader at the GE Global Research Center, where he received multiple internal awards for innovation and technical excellence​.

“It’s an honor to lead the Deshpande Center, and in collaboration with the MIT community, I look forward to building on its incredible foundation — fostering bold ideas, driving real-world impact from cutting-edge innovations, and making it a powerhouse for commercialization,” adds Varanasi.

As faculty director, Varanasi will work closely with Deshpande Center executive director Rana Gupta to guide the center’s support of MIT faculty and students developing technology-based ventures.

“With Kripa’s depth and background, we will capitalize on the initiatives started with Angela Koehler. Kripa shares our vision to grow and expand the center’s capabilities to serve more of MIT,” adds Gupta.

Varanasi succeeds Angela Koehler, associate professor of biological engineering, who served as faculty director from July 2023 through March 2025.

“Angela brought fresh vision and energy to the center,” he says. “She expanded its reach, introduced new funding priorities in climate and life sciences, and re-imagined the annual IdeaStream event as a more robust launchpad for innovation. We’re deeply grateful for her leadership.”

Koehler, who was recently appointed faculty lead of the MIT Health and Life Sciences Collaborative, will continue to play a key role in the Institute’s innovation and entrepreneurship ecosystem​.


Julie Lucas to step down as MIT’s vice president for resource development

Lucas has led MIT’s fundraising since 2014, including the record-setting MIT Campaign for a Better World.


Julie A. Lucas has decided to step down as MIT’s vice president for resource development, President Sally Kornbluth announced today. Lucas has set her last day as June 30, which coincides with the close of the Institute’s fiscal year, to ensure a smooth transition for staff and donors. 

Lucas has led fundraising at the Institute since 2014. During that time, MIT’s average annual fundraising has increased 96 percent to $611 million, up from $313 million in the decade before her arrival. MIT’s annual fundraising totals have exceeded the Institute’s annual $500 million fundraising target for nine straight fiscal years, including banner years with results ranging from $700 million to $900 million.

“Before I arrived at MIT, Julie built a fundraising operation worthy of the Institute’s world-class stature,” Kornbluth says. “I have seen firsthand how Julie’s expertise, collegial spirit, and commitment to our mission resonate with alumni and friends, motivating them to support the Institute.”

Lucas spearheaded the MIT Campaign for a Better World, which concluded in 2021 and raised $6.2 billion, setting a record as the Institute’s largest fundraising initiative. Emphasizing the Institute’s hands-on approach to solving the world’s toughest challenges — and centered on its strengths in education, research, and innovation — the campaign attracted participation from more than 112,000 alumni and friends around the globe, including nearly 56,000 new donors.  

“From the moment I met Julie Lucas, I knew she was the right person to serve as MIT’s chief philanthropic leader of our capital campaign,” says MIT President Emeritus L. Rafael Reif. “Julie is both a ‘maker’ and a ‘doer,’ well attuned to our ‘mens et manus’ motto. The Institute has benefited immensely from her impressive set of skills and ability to convey a coherent message that has inspired and motivated alumni and friends, foundations and corporations, to support MIT.” 

Under Lucas, MIT’s Office of Resource Development (RD) created new fundraising programs and processes, and introduced expanded ways of giving. For example, RD established the Institute’s planned giving program, which supports donors who want to make a lasting impact at MIT through philanthropic vehicles such as bequests, retirement plan distributions, life-income gifts, and gifts of complex assets. She also played a lead role in creating a donor-advised fund at MIT that, since its inception in 2017, has seen almost $120 million in contributions.  

“Julie is a remarkable fundraiser and leader — and when it comes to Julie’s leadership of Resource Development, the results speak for themselves,” says Mark Gorenberg ’76, chair of the MIT Corporation, who has participated in multiple MIT committees and campaigns over the last two decades. “These tangible fundraising outcomes have helped to facilitate innovations and discoveries, expand educational programs and facilities, support faculty and researchers, and ensure that an MIT education is affordable and accessible to the brightest minds from around the world.”

Prior to joining MIT, Lucas served in senior fundraising roles at the University of Southern California and Fordham Law School, as well as New York University and its business and law schools. 

While Lucas readies herself for the next phase in her career, she remains grateful for her time at the Institute. 

“Philanthropy is a powerful fuel for good in our world,” Lucas says. “My decision to step down was difficult. I feel honored and thankful that my work — and the work of the team of professionals I lead in Resource Development — has helped continue the amazing trajectory of MIT research and innovation that benefits all of us by solving humanity’s greatest challenges, both now and in the future.”

Lucas currently serves on the steering committee and is the immediate past chair of CASE 50, the Council for Advancement and Support of Education group that includes the top 50 fundraising institutions in the world. In addition, she is chair of the 2025 CASE Summit for Leaders in Advancement and a founding member of Aspen Leadership Group’s Chief Development Officer Network.


Astronomers discover a planet that’s rapidly disintegrating, producing a comet-like tail

The small and rocky lava world sheds an amount of material equivalent to the mass of Mount Everest every 30.5 hours.


MIT astronomers have discovered a planet some 140 light-years from Earth that is rapidly crumbling to pieces.

The disintegrating world is about the mass of Mercury, although it circles about 20 times closer to its star than Mercury does to the sun, completing an orbit every 30.5 hours. At such close proximity to its star, the planet is likely covered in magma that is boiling off into space. As the roasting planet whizzes around its star, it is shedding an enormous amount of surface minerals and effectively evaporating away.

The astronomers spotted the planet using NASA’s Transiting Exoplanet Survey Satellite (TESS), an MIT-led mission that monitors the nearest stars for transits, or periodic dips in starlight that could be signs of orbiting exoplanets. The signal that tipped the astronomers off was a peculiar transit, with a dip that fluctuated in depth every orbit.

The scientists confirmed that the signal is of a tightly orbiting rocky planet that is trailing a long, comet-like tail of debris.

“The extent of the tail is gargantuan, stretching up to 9 million kilometers long, or roughly half of the planet’s entire orbit,” says Marc Hon, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research.

It appears that the planet is disintegrating at a dramatic rate, shedding an amount of material equivalent to one Mount Everest each time it orbits its star. At this pace, given its small mass, the researchers predict that the planet may completely disintegrate in about 1 million to 2 million years.
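A rough plausibility check bears these figures out. The inputs below are our own assumptions, not values from the study: textbook figures for Mercury’s orbit and mass, and a commonly cited order-of-magnitude estimate of about 10^15 kilograms for Mount Everest.

```python
import math

# Back-of-envelope check of the article's numbers. All constants here are
# rough outside assumptions, not measurements from the study itself.
MERCURY_ORBIT_KM = 5.79e7   # Mercury's mean distance from the sun, in km
MERCURY_MASS_KG = 3.3e23    # Mercury's mass; the planet is "about" this mass
EVEREST_MASS_KG = 1e15      # common order-of-magnitude estimate for Everest
ORBIT_HOURS = 30.5          # orbital period reported in the article

# The planet orbits ~20 times closer than Mercury, so half its (roughly
# circular) orbital circumference should match the ~9-million-km tail.
orbit_radius_km = MERCURY_ORBIT_KM / 20
half_circumference_km = math.pi * orbit_radius_km   # half of 2*pi*r
print(f"half orbit: {half_circumference_km:.2e} km")

# Losing one Everest of material per orbit gives a survival timescale.
orbits_per_year = 365.25 * 24 / ORBIT_HOURS
mass_loss_per_year_kg = EVEREST_MASS_KG * orbits_per_year
survival_years = MERCURY_MASS_KG / mass_loss_per_year_kg
print(f"survival: {survival_years:.2e} years")
```

Under these assumptions the half-orbit comes out near 9 million kilometers and the survival time near a million years, in line with the figures quoted above.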

“We got lucky with catching it exactly when it’s really going away,” says Avi Shporer, a collaborator on the discovery who is also at the TESS Science Office. “It’s like on its last breath.”

Hon and Shporer, along with their colleagues, have published their results today in the Astrophysical Journal Letters. Their MIT co-authors include Saul Rappaport, Andrew Vanderburg, Jeroen Audenaert, William Fong, Jack Haviland, Katharine Hesse, Daniel Muthukrishna, Glen Petitpas, Ellie Schmelzer, Sara Seager, and George Ricker, along with collaborators from multiple other institutions.

Roasting away

The new planet, which scientists have tagged as BD+05 4868 Ab, was detected almost by happenstance.

“We weren’t looking for this kind of planet,” Hon says. “We were doing the typical planet vetting, and I happened to spot this signal that appeared very unusual.”

The typical signal of an orbiting exoplanet looks like a brief dip in a light curve, which repeats regularly, indicating that a compact body such as a planet is briefly passing in front of, and temporarily blocking, the light from its host star.

What Hon and his colleagues detected from the host star BD+05 4868 A, located in the constellation Pegasus, did not fit this typical pattern. Though a transit appeared every 30.5 hours, the brightness took much longer to return to normal, suggesting a long trailing structure still blocking starlight. Even more intriguing, the depth of the dip changed with each orbit, a sign that whatever was passing in front of the star wasn’t always the same shape or blocking the same amount of light.

“The shape of the transit is typical of a comet with a long tail,” Hon explains. “Except that it’s unlikely that this tail contains volatile gases and ice as expected from a real comet — these would not survive long at such close proximity to the host star. Mineral grains evaporated from the planetary surface, however, can linger long enough to present such a distinctive tail.”

Given its proximity to its star, the team estimates that the planet is roasting at around 1,600 degrees Celsius, or close to 3,000 degrees Fahrenheit. As the star roasts the planet, any minerals on its surface are likely boiling away and escaping into space, where they cool into a long and dusty tail.

The dramatic demise of this planet is a consequence of its low mass, which is between that of Mercury and the moon. More massive terrestrial planets like the Earth have a stronger gravitational pull and therefore can hold onto their atmospheres. For BD+05 4868 Ab, the researchers suspect there is very little gravity to hold the planet together.

“This is a very tiny object, with very weak gravity, so it easily loses a lot of mass, which then further weakens its gravity, so it loses even more mass,” Shporer explains. “It’s a runaway process, and it’s only getting worse and worse for the planet.”

Mineral trail

Of the nearly 6,000 planets that astronomers have discovered to date, scientists know of only three other disintegrating planets beyond our solar system. Each of these crumbling worlds was spotted over 10 years ago in data from NASA’s Kepler Space Telescope, and all three trail similar comet-like tails. BD+05 4868 Ab has the longest tail and the deepest transits of the four known disintegrating planets to date.

“That implies that its evaporation is the most catastrophic, and it will disappear much faster than the other planets,” Hon explains.

The planet’s host star is relatively close, and thus brighter than the stars hosting the other three disintegrating planets, making this system ideal for further observations using NASA’s James Webb Space Telescope (JWST), which can help determine the mineral makeup of the dust tail by identifying which colors of infrared light it absorbs.

This summer, Hon and graduate student Nicholas Tusay from Penn State University will lead observations of BD+05 4868 Ab using JWST. “This will be a unique opportunity to directly measure the interior composition of a rocky planet, which may tell us a lot about the diversity and potential habitability of terrestrial planets outside our solar system,” Hon says.

The researchers also will look through TESS data for signs of other disintegrating worlds.

“Sometimes with the food comes the appetite, and we are now trying to initiate the search for exactly these kinds of objects,” Shporer says. “These are weird objects, and the shape of the signal changes over time, which is something that’s difficult for us to find. But it’s something we’re actively working on.”

This work was supported, in part, by NASA.


MIT’s McGovern Institute is shaping brain science and improving human lives on a global scale

A quarter century after its founding, the McGovern Institute reflects on its discoveries in the areas of neuroscience, neurotechnology, artificial intelligence, brain-body connections, and therapeutics.


In 2000, Patrick J. McGovern ’59 and Lore Harp McGovern made an extraordinary gift to establish the McGovern Institute for Brain Research at MIT, driven by their deep curiosity about the human mind and their belief in the power of science to change lives. Their $350 million pledge began with a simple yet audacious vision: to understand the human brain in all its complexity, and to leverage that understanding for the betterment of humanity.
 
Twenty-five years later, the McGovern Institute stands as a testament to the power of interdisciplinary collaboration, continuing to shape our understanding of the brain and improve the quality of life for people worldwide.

In the beginning

“This is, by any measure, a truly historic moment for MIT,” said MIT’s 15th president, Charles M. Vest, during his opening remarks at an event in 2000 to celebrate the McGovern gift agreement. “The creation of the McGovern Institute will launch one of the most profound and important scientific ventures of this century in what surely will be a cornerstone of MIT scientific contributions from the decades ahead.”
 
Vest tapped Phillip A. Sharp, MIT Institute professor emeritus of biology and Nobel laureate, to lead the institute, and appointed six MIT professors — Emilio Bizzi, Martha Constantine-Paton, Ann Graybiel PhD ’71, H. Robert Horvitz ’68, Nancy Kanwisher ’80, PhD ’86, and Tomaso Poggio — to represent its founding faculty. Construction began in 2003 on Building 46, a 376,000-square-foot research complex at the northeastern edge of campus. MIT’s new “gateway from the north” would eventually house the McGovern Institute, the Picower Institute for Learning and Memory, and MIT’s Department of Brain and Cognitive Sciences.

Robert Desimone, the Doris and Don Berkey Professor of Neuroscience at MIT, succeeded Sharp as director of the McGovern Institute in 2005, and assembled a distinguished roster of 22 faculty members, including a Nobel laureate, a Breakthrough Prize winner, two National Medal of Science/Technology awardees, and 15 members of the American Academy of Arts and Sciences.
 
A quarter century of innovation

On April 11, 2025, the McGovern Institute celebrated its 25th anniversary with a half-day symposium featuring presentations by MIT Institute Professor Robert Langer, alumni speakers from various McGovern labs, and Desimone, who is in his 20th year as director of the institute.

Desimone highlighted the institute’s recent discoveries, including the development of the CRISPR genome-editing system, which has culminated in the world’s first CRISPR gene therapy approved for humans — a remarkable achievement that is ushering in a new era of transformative medicine. In other milestones, McGovern researchers developed the first prosthetic limb fully controlled by the body’s nervous system; a flexible probe that taps into gut-brain communication; an expansion microscopy technique that paves the way for biology labs around the world to perform nanoscale imaging; and advanced computational models that demonstrate how we see, hear, use language, and even think about what others are thinking. Equally transformative has been the McGovern Institute’s work in neuroimaging, uncovering the architecture of human thought and establishing markers that signal the early emergence of mental illness, before symptoms even appear.

Synergy and open science
 
“I am often asked what makes us different from other neuroscience institutes and programs around the world,” says Desimone. “My answer is simple. At the McGovern Institute, the whole is greater than the sum of its parts.”
 
Many discoveries at the McGovern Institute have depended on collaborations across multiple labs, ranging from biological engineering to human brain imaging and artificial intelligence. In modern brain research, significant advances often require the joint expertise of people working in neurophysiology, behavior, computational analysis, neuroanatomy, and molecular biology. More than a dozen different MIT departments are represented by McGovern faculty and graduate students, and this synergy has led to insights and innovations that are far greater than what any single discipline could achieve alone.
 
Also baked into the McGovern ethos is a spirit of open science, where newly developed technologies are shared with colleagues around the world. Through hospital partnerships, for example, McGovern researchers are testing their tools and therapeutic interventions in clinical settings, accelerating their discoveries into real-world solutions.

The McGovern legacy  

Hundreds of scientific papers have emerged from McGovern labs over the past 25 years, but most faculty would argue that it’s the people — the young researchers — that truly define the McGovern Institute. Award-winning faculty often attract the brightest young minds, but many McGovern faculty also serve as mentors, creating a diverse and vibrant scientific community that is setting the global standard for brain research and its applications. Kanwisher, for example, has guided more than 70 doctoral students and postdocs who have gone on to become leading scientists around the world. Three of her former students, Evelina Fedorenko PhD ’07, Josh McDermott PhD ’06, and Rebecca Saxe PhD ’03, the John W. Jarve (1978) Professor of Brain and Cognitive Sciences, are now her colleagues at the McGovern Institute. Other McGovern alumni shared stories of mentorship, science, and real-world impact at the 25th anniversary symposium.

Looking to the future, the McGovern community is more committed than ever to unraveling the mysteries of the brain and making a meaningful difference in the lives of individuals on a global scale.
 
“By promoting team science, open communication, and cross-discipline partnerships,” says institute co-founder Lore Harp McGovern, “our culture demonstrates how individual expertise can be amplified through collective effort. I am honored to be the co-founder of this incredible institution — onward to the next 25 years!”


Equipping living cells with logic gates to fight cancer

Founded by MIT researchers, Senti Bio is giving immune cells the ability to distinguish between healthy and cancerous cells.


One of the most exciting developments in cancer treatment is a wave of new cell therapies that train a patient’s immune system to attack cancer cells. Such therapies have saved the lives of patients with certain aggressive cancers and few other options. Most of these therapies work by teaching immune cells to recognize and attack specific proteins on the surface of cancer cells.

Unfortunately, most proteins found on cancer cells aren’t unique to tumors. They’re also often present on healthy cells, making it difficult to target cancer aggressively without triggering dangerous attacks on other tissue. The problem has limited the application of cell therapies to a small subset of cancers.

Now Senti Bio is working to create smarter cell therapies using synthetic biology. The company, which was founded by former MIT faculty member and current MIT Research Associate Tim Lu ’03, MEng ’03, PhD ’08 and Professor James Collins, is equipping cells with gene circuits that allow the cells to sense and respond to their environments.

Lu, who studied computer science as an undergraduate at MIT, describes Senti’s approach as programming living cells to behave more like computers — responding to specific biological cues with “if/then” logic, just like computer code.

“We have innovated a cell therapy that says, ‘Kill anything displaying the cancer target, but spare anything that has this healthy target,’” Lu explains. “Despite the promise of certain cancer targets, problems can arise when they are expressed on healthy cells that we want to protect. Our logic gating technology was designed to recognize and avoid killing those healthy cells, which introduces a whole spectrum of additional cancers that don’t have a single clean target that we can now potentially address. That’s the power of embedding these cells with logic.”
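The rule Lu describes, kill anything displaying the cancer target but spare anything carrying the healthy target, is, in computing terms, an AND-NOT gate. A minimal sketch in ordinary code (a toy illustration only; the marker names are hypothetical placeholders, and Senti's actual circuits are implemented in engineered genes, not software):

```python
def kill_decision(surface_proteins: set) -> bool:
    """AND-NOT logic gate: attack only cells that display the cancer
    target AND lack the healthy-cell marker. Both marker names are
    hypothetical placeholders, not real antigens."""
    return ("CANCER_TARGET" in surface_proteins
            and "HEALTHY_MARKER" not in surface_proteins)

# A tumor cell displaying only the cancer target is attacked:
print(kill_decision({"CANCER_TARGET"}))                    # True
# A healthy cell carrying the same target plus the healthy
# marker is spared:
print(kill_decision({"CANCER_TARGET", "HEALTHY_MARKER"}))  # False
```

The second case is the whole point of the approach: the shared cancer target alone would mark a healthy cell for attack, and only the NOT condition rescues it.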

The company’s lead drug candidate aims to help patients with acute myeloid leukemia (AML) who have experienced a relapse or are unresponsive to other therapies. The prognosis for such patients is poor, but early data from the company’s first clinical trial showed that two of the first three patients Senti treated experienced complete remission, where subsequent bone marrow tests couldn’t detect a single cancer cell.

“It’s essentially one of the best responses you can get in this disease, so we were really excited to see that,” says Lu, who served on MIT’s faculty until leaving to lead Senti in 2022.

Senti expects to release more patient data at the upcoming American Association for Cancer Research (AACR) meeting at the end of April.

“Our groundbreaking work at Senti is showing that one can harness synthetic biology technologies to create programmable, smart medicines for treating patients with cancer,” says Collins, who is currently MIT’s Termeer Professor of Medical Engineering and Science. “This is tremendously exciting and demonstrates how one can utilize synthetic biological circuits, in this case logic gates, to design highly effective, next-generation living therapeutics.”

From computer science to cancer care

Lu was inspired as an undergraduate studying electrical engineering and computer science by the Human Genome Project, an international race to sequence the human genome. Later, he entered the Harvard-MIT Health Sciences and Technology (HST) program, through which he earned a PhD from MIT in electrical and biomedical imaging and an MD from Harvard. During that time, he worked in the lab of his eventual Senti co-founder James Collins, a synthetic biology pioneer.

In 2010, Lu joined MIT as an assistant professor with a joint appointment in the departments of Biological Engineering and of Electrical Engineering and Computer Science. Over the course of the next 14 years, Lu led the Synthetic Biology Group at MIT and started several biotech companies, including Engine Biosciences and Tango Therapeutics, which are also developing precision cancer treatments.

In 2015, a group of researchers including Lu and MIT Institute Professor Phillip Sharp published research showing they could use gene circuits to get immune cells to selectively respond to tumor cells in their environment.

“One of the first things we published focused on the idea of logic gates in living cells,” Lu says. “A computer has ‘and’ gates, ‘or’ gates, and ‘not’ gates that allow it to perform computations, and we started publishing gene circuits that implement logic into living cells. These allow cells to detect signals and then make logical decisions like, ‘Should we switch on or off?’”

Around that time, the first cell therapies and cancer immunotherapies began to be approved by the Food and Drug Administration, and the founders saw their technology as a way to take those approaches to the next level. They officially founded Senti Bio in 2016, with Lu taking a sabbatical from MIT to serve as CEO.

The company licensed technology from MIT and subsequently advanced the cellular logic gates so they could work with multiple types of engineered immune cells, including T cells and “natural killer” cells. Senti’s cells can respond to specific proteins that exist on the surface of both cancer and healthy cells to increase selectivity.

“We can now create a cell therapy where the cell makes a decision as to whether to kill a cancer cell or spare a healthy cell even when those cells are right next to each other,” Lu says. “If you can’t distinguish between cancerous and healthy cells, you get unwanted side effects, or you may not be able to hit the cancer as hard as you’d like. But once you can do that, there’s a lot of ways to maximize your firepower against the cancer cells.”

Hope for patients

Senti’s lead clinical trial is focusing on patients with relapsed or refractory blood cancers, including AML.

“Obviously the most important thing is getting a good response for patients,” Lu says. “But we’re also doing additional scientific work to confirm that the logic gates are working the way we expect them to in humans. Based on that information, we can then deploy logic gates into additional therapeutic indications such as solid tumors, where you have a lot of the same problems with finding a target.”

Another company has partnered with Senti to use its technology and has an early clinical trial underway in liver cancer. Senti is also partnering with other companies to apply its gene circuit technology in areas like regenerative medicine and neuroscience.

“I think this is broader than just cell therapies,” Lu says. “We believe if we can prove this out in AML, it will lead to a fundamentally new way of diagnosing and treating cancer, where we’re able to definitively identify and target cancer cells and spare healthy cells. We hope it will become a whole new class of medicines moving forward.”


Making AI-generated code more accurate in any language

A new technique automatically guides an LLM toward outputs that adhere to the rules of whatever programming language or other format is being used.


Programmers can now use large language models (LLMs) to generate computer code more quickly. However, this only makes programmers’ lives easier if that code follows the rules of the programming language and doesn’t cause a computer to crash.

Some methods exist for ensuring LLMs conform to the rules of whatever language they are generating text in, but many of these methods either distort the model’s intended meaning or are too time-consuming to be feasible for complex tasks.

A new approach developed by researchers at MIT and elsewhere automatically guides an LLM to generate text that adheres to the rules of the relevant language, such as a particular programming language, and is also error-free. Their method allows an LLM to allocate efforts toward outputs that are most likely to be valid and accurate, while discarding unpromising outputs early in the process. This probabilistic approach boosts computational efficiency.

Due to these efficiency gains, the researchers’ architecture enabled small LLMs to outperform much larger models in generating accurate, properly structured outputs for several real-world use cases, including molecular biology and robotics.

In the long run, this new architecture could help nonexperts control AI-generated content. For instance, it could allow businesspeople to write complex queries in SQL, a language for database manipulation, using only natural language prompts.

“This work has implications beyond research. It could improve programming assistants, AI-powered data analysis, and scientific discovery tools by ensuring that AI-generated outputs remain both useful and correct,” says João Loula, an MIT graduate student and co-lead author of a paper on this framework.

Loula is joined on the paper by co-lead authors Benjamin LeBrun, a research assistant at the Mila-Quebec Artificial Intelligence Institute, and Li Du, a graduate student at Johns Hopkins University; co-senior authors Vikash Mansinghka ’05, MEng ’09, PhD ’09, a principal research scientist and leader of the Probabilistic Computing Project in the MIT Department of Brain and Cognitive Sciences; Alexander K. Lew SM ’20, an assistant professor at Yale University; Tim Vieira, a postdoc at ETH Zurich; and Timothy J. O’Donnell, an associate professor at McGill University and a Canada CIFAR AI Chair at Mila, who led the international team; as well as several others. The research will be presented at the International Conference on Learning Representations.

Enforcing structure and meaning

One common approach for controlling the structured text generated by LLMs involves checking an entire output, like a block of computer code, to make sure it is valid and will run error-free. If not, the user must start again, racking up computational resources.

On the other hand, a programmer could stop to check the output along the way. While this can ensure the code adheres to the programming language and is structurally valid, incrementally correcting the code may cause it to drift from the meaning the user intended, hurting its accuracy in the long run.

“It is much easier to enforce structure than meaning. We can quickly check whether something is in the right programming language, but to check its meaning you have to execute the code. Our work is also about dealing with these different types of information,” Loula says.

The researchers’ approach involves engineering knowledge into the LLM to steer it toward the most promising outputs. These outputs are more likely to follow the structural constraints defined by a user, and to have the meaning the user intends.

“We are not trying to train an LLM to do this. Instead, we are engineering some knowledge that an expert would have and combining it with the LLM’s knowledge, which offers a very different approach to scaling than you see in deep learning,” Mansinghka adds.

They accomplish this using a technique called sequential Monte Carlo, which lets multiple parallel generation threads from an LLM compete with one another. The model dynamically allocates resources to different threads of parallel computation based on how promising their outputs appear.

Each output is given a weight that represents how likely it is to be structurally valid and semantically accurate. At each step in the computation, the model focuses on those with higher weights and throws out the rest.
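The weighting-and-pruning loop just described can be sketched as a simple resampling step (an illustrative toy, not the paper's architecture; the weight function here is a made-up stand-in for the real structural and semantic scores):

```python
import random

def smc_resample(candidates, weight_fn, n_keep):
    """One sequential Monte Carlo step: score each partial output,
    then resample in proportion to weight, so that high-weight
    threads survive and low-weight ones tend to be discarded."""
    weights = [weight_fn(c) for c in candidates]
    if sum(weights) == 0:
        return []
    return random.choices(candidates, weights=weights, k=n_keep)

# Toy weight: strongly prefer partial outputs that look like the
# start of a valid Python function definition.
def toy_weight(partial_output):
    return 1.0 if partial_output.startswith("def ") else 0.01

threads = ["def add(x, y):", "dfe add(x, y):", "def add(x y):"]
survivors = smc_resample(threads, toy_weight, n_keep=2)
```

In the real system the weights combine structural validity with semantic accuracy, and the resampling is repeated at every generation step rather than once.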

In a sense, it is like the LLM has an expert looking over its shoulder to ensure it makes the right choices at each step, while keeping it focused on the overall goal. The user specifies their desired structure and meaning, as well as how to check the output, then the researchers’ architecture guides the LLM to do the rest.

“We’ve worked out the hard math so that, for any kinds of constraints you’d like to incorporate, you are going to get the proper weights. In the end, you get the right answer,” Loula says.

Boosting small models

To test their approach, they applied the framework to LLMs tasked with generating four types of outputs: Python code, SQL database queries, molecular structures, and plans for a robot to follow.

When compared to existing approaches, the researchers’ method performed more accurately while requiring less computation.

In Python code generation, for instance, the researchers’ architecture enabled a small, open-source model to outperform a specialized, commercial closed-source model that is more than double its size.

“We are very excited that we can allow these small models to punch way above their weight,” Loula says.

Moving forward, the researchers want to use their technique to control larger chunks of generated text, rather than working one small piece at a time. They also want to combine their method with learning, so that as they control the outputs a model generates, it learns to be more accurate.

In the long run, this project could have broader applications for non-technical users. For instance, it could be combined with systems for automated data modeling, and querying generative models of databases.

The approach could also enable machine-assisted data analysis systems, where the user can converse with software that accurately models the meaning of the data and the questions asked by the user, adds Mansinghka.

“One of the fundamental questions of linguistics is how the meaning of words, phrases, and sentences can be grounded in models of the world, accounting for uncertainty and vagueness in meaning and reference. LLMs, predicting likely token sequences, don’t address this problem. Our paper shows that, in narrow symbolic domains, it is technically possible to map from words to distributions on grounded meanings. It’s a small step towards deeper questions in cognitive science, linguistics, and artificial intelligence needed to understand how machines can communicate about the world like we do,” says O’Donnell.

This research is funded and supported, in part, by the Canada CIFAR AI Chairs Program, the MIT Quest for Intelligence, and Convergent Research. 


Adam Berinsky awarded Carnegie fellowship

MIT political science professor among cohort of fellows who will focus on building a body of research on political polarization.


MIT political scientist Adam Berinsky has been named to the 2025 class of Andrew Carnegie Fellows, a high-profile honor for scholars pursuing research in the social sciences and humanities.

The fellowship is provided by the Carnegie Corporation of New York. Berinsky, the Mitsui Professor of Political Science, and 25 other fellows were selected from more than 300 applicants. Each will receive a stipend of $200,000 for research that seeks to understand how and why our society has become so polarized, and how we can strengthen the forces of cohesion to fortify our democracy.

“Through these fellowships Carnegie is harnessing the unrivaled brainpower of our universities to help us to understand how our society has become so polarized,” says Carnegie President Louise Richardson. “Our future grant-making will be informed by what we learn from these scholars as we seek to mitigate the pernicious effects of political polarization.”

Berinsky said he is “incredibly honored to be named an Andrew Carnegie Fellow for the coming year. This fellowship will allow me to work on critical issues in the current political moment.”

During his year as a Carnegie Fellow, Berinsky will be working on a project, “Fostering an Accurate Information Ecosystem to Mitigate Polarization in the United States.”

“For a functioning democracy, it is essential that citizens share a baseline of common facts,” says Berinsky. “However, in today’s politically polarized climate, ‘alternative facts,’ and other forms of misinformation — from political rumors to conspiracy theories — distort how people see reality, and damage our social fabric.”

“I’ve spent the last 15 years investigating why individuals accept misinformation and how to counter misperceptions. But there is still a lot of work to be done. My project aims to tackle the serious problem of misinformation in the United States by bringing together existing approaches in new, more powerful combinations. I’m hoping that the whole can be more than the sum of its parts.”

Berinsky has been a member of the MIT faculty since 2003. He is the author of “Political Rumors: Why We Accept Misinformation and How to Fight It” (Princeton University Press, 2023).

Other MIT faculty who have received the Carnegie Fellowship in recent years include economists David Autor and Daron Acemoglu and political scientists Fotini Christia, Taylor Fravel, Richard Nielsen, and Charles Stewart.


New study reveals how cleft lip and cleft palate can arise

MIT biologists have found that defects in some transfer RNA molecules can lead to the formation of these common conditions.


Cleft lip and cleft palate are among the most common birth defects, occurring in about one in 1,050 births in the United States. These defects, which appear when the tissues that form the lip or the roof of the mouth do not join completely, are believed to be caused by a mix of genetic and environmental factors.

In a new study, MIT biologists have discovered how a genetic variant often found in people with these facial malformations leads to the development of cleft lip and cleft palate.

Their findings suggest that the variant diminishes cells’ supply of transfer RNA, a molecule that is critical for assembling proteins. When this happens, embryonic face cells are unable to fuse to form the lip and roof of the mouth.

“Until now, no one had made the connection that we made. This particular gene was known to be part of the complex involved in the splicing of transfer RNA, but it wasn’t clear that it played such a crucial role for this process and for facial development. Without the gene, known as DDX1, certain transfer RNA can no longer bring amino acids to the ribosome to make new proteins. If the cells can’t process these tRNAs properly, then the ribosomes can’t make protein anymore,” says Michaela Bartusel, an MIT research scientist and the lead author of the study.

Eliezer Calo, an associate professor of biology at MIT, is the senior author of the paper, which appears today in the American Journal of Human Genetics.

Genetic variants

Cleft lip and cleft palate, also known as orofacial clefts, can be caused by genetic mutations, but in many cases, there is no known genetic cause.

“The mechanism for the development of these orofacial clefts is unclear, mostly because they are known to be impacted by both genetic and environmental factors,” Calo says. “Trying to pinpoint what might be affected has been very challenging in this context.”

To discover genetic factors that influence a particular disease, scientists often perform genome-wide association studies (GWAS), which can reveal variants that are found more often in people who have a particular disease than in people who don’t.

For orofacial clefts, some of the genetic variants that have regularly turned up in GWAS appeared to be in a region of DNA that doesn’t code for proteins. In this study, the MIT team set out to figure out how variants in this region might influence the development of facial malformations.

Their studies revealed that these variants are located in an enhancer region called e2p24.2. Enhancers are segments of DNA that interact with protein-coding genes, helping to activate them by binding to transcription factors that turn on gene expression.

The researchers found that this region is in close proximity to three genes, suggesting that it may control the expression of those genes. One of those genes had already been ruled out as contributing to facial malformations, and another had already been shown to have a connection. In this study, the researchers focused on the third gene, which is known as DDX1.

DDX1, it turned out, is necessary for splicing transfer RNA (tRNA) molecules, which play a critical role in protein synthesis. Each transfer RNA molecule transports a specific amino acid to the ribosome — a cell structure that strings amino acids together to form proteins, based on the instructions carried by messenger RNA.

While there are about 400 different tRNAs encoded in the human genome, only a fraction of them require splicing, and those are the tRNAs most affected by the loss of DDX1. These tRNAs transport four different amino acids, and the researchers hypothesize that these four amino acids may be particularly abundant in proteins that the embryonic cells forming the face need in order to develop properly.

When the ribosomes need one of those four amino acids, but none of them are available, the ribosome can stall, and the protein doesn’t get made.

The researchers are now exploring which proteins might be most affected by the loss of those amino acids. They also plan to investigate what happens inside cells when the ribosomes stall, in hopes of identifying a stress signal that could potentially be blocked and help cells survive.

Malfunctioning tRNA

While this is the first study to link tRNA to craniofacial malformations, previous studies have shown that mutations that impair ribosome formation can also lead to similar defects. Studies have also shown that disruptions of tRNA synthesis — caused by mutations in the enzymes that attach amino acids to tRNA, or in proteins involved in an earlier step in tRNA splicing — can lead to neurodevelopmental disorders.

“Defects in other components of the tRNA pathway have been shown to be associated with neurodevelopmental disease,” Calo says. “One interesting parallel between these two is that the cells that form the face are coming from the same place as the cells that form the neurons, so it seems that these particular cells are very susceptible to tRNA defects.”

The researchers now hope to explore whether environmental factors linked to orofacial birth defects also influence tRNA function. Some of their preliminary work has found that oxidative stress — a buildup of harmful free radicals — can lead to fragmentation of tRNA molecules. Oxidative stress can occur in embryonic cells upon exposure to ethanol, as in fetal alcohol syndrome, or if the mother develops gestational diabetes.

“I think it is worth looking for mutations that might be causing this on the genetic side of things, but then also in the future, we would expand this into which environmental factors have the same effects on tRNA function, and then see which precautions might be able to prevent any effects on tRNAs,” Bartusel says.

The research was funded by the National Science Foundation Graduate Research Program, the National Cancer Institute, the National Institute of General Medical Sciences, and the Pew Charitable Trusts.


How should we prioritize patients waiting for kidney transplants?

A comprehensive study of the U.S. system could help policymakers analyze methods of matching donated kidneys and their recipients.


At any given time, about 100,000 people in the U.S. are waiting to become kidney transplant recipients. Roughly one-fifth of those get a new kidney each year, but others die while waiting. In short, the demand for kidneys makes it important to think about how we use the limited supply.

A study co-authored by an MIT economist brings new data to this issue, providing nuanced estimates of the lifespan-lengthening effect of kidney transplants. That can be hard to measure well, but the study is the first to account for some of the complexities involved, including the decisions patients make when accepting kidney transplants, and some of their pre-existing health factors.

The research concludes that the system currently in use produces an average of 9.29 additional life-years from transplantation (LYFT) per kidney recipient. (LYFT is the difference in median survival for those with and without transplants.) If the organs were assigned randomly to patients, the study finds, that LYFT average would be only 7.54. From that perspective, the current transplant system is a net positive for patients. However, the study also finds that the LYFT figure could potentially be raised as high as 14.08, depending on how the matching system is structured.
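LYFT as defined here is simply a difference of medians, which is easy to make concrete (the survival figures below are invented for illustration and are not the study's data):

```python
import statistics

def lyft(survival_with_transplant, survival_without_transplant):
    """Life-years from transplantation: the difference in median
    survival (in years) between transplanted and non-transplanted
    patients. All numbers below are hypothetical."""
    return (statistics.median(survival_with_transplant)
            - statistics.median(survival_without_transplant))

with_tx = [9.5, 12.0, 15.5, 18.0, 20.0]   # hypothetical years survived
without_tx = [4.0, 5.5, 6.5, 8.0, 10.0]   # hypothetical years survived
print(lyft(with_tx, without_tx))  # 15.5 - 6.5 = 9.0
```

The study's contribution is estimating these survival distributions credibly, accounting for which patients accept which offers, rather than the arithmetic itself.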

In any case, more precise estimates about the benefits of kidney transplants can help inform policymakers about the dynamics of the matching system in use.

“There’s always this question about how to take the scarce number of organs being donated and place them efficiently, and place them well,” says MIT economist Nikhil Agarwal, co-author of a newly published paper detailing the study’s results. As he emphasizes, the point of the paper is to inform the ongoing refinement of the matching system, rather than advocate one viewpoint or another.

The paper, “Choices and Outcomes in Assignment Mechanisms: The Allocation of Deceased Donor Kidneys,” is published in the latest issue of Econometrica. The authors are Agarwal, who is a professor in MIT’s Department of Economics; Charles Hodgson, an assistant professor of economics at Yale University; and Paulo Somaini, an associate professor of economics in Stanford University’s Graduate School of Business.

After people die, there is a period lasting up to 48 hours when they could be viable organ donors. Potential kidney recipients are prioritized by time spent on wait-lists as well as tissue-type similarity, and can accept or reject any given transplant offer.

Over the last decade-plus, Agarwal has conducted significant empirical research on matching systems for organ donations, especially kidney transplants. To conduct this study, the researchers used comprehensive data about patients on the kidney wait-list from 2000 to 2010, made available by the Organ Procurement and Transplantation Network, the national U.S. registry. This allowed the scholars to analyze both the matching system and the health effects of transplants, tracking patient survival until February 2020.

The work is the first quasi-experimental study of kidney transplants; by carefully examining the decision-making tendencies of kidney recipients, along with many other health factors, the scholars are able to evaluate the effects of a transplant, other things being equal. Recipients are more likely to accept kidney offers from donors who were younger, lacked hypertension, died of head trauma (suggesting their internal organs were healthy), or with whom they have a perfect tissue-type match.

“The [previous] methodology of estimating what are the life-years benefits was not incorporating this selection issue,” Agarwal says.

Additionally, a key empirical feature of kidney transplants is that recipients who are healthier overall tend to realize the largest life-years benefits from a transplant, meaning that the greatest increase in LYFT is not found among the patients in the worst health.

“You might think people who are the sickest and who are most likely to die without an organ are going to benefit the most from it [in added life-years],” Agarwal says. “But there might be some other comorbidity or factor that made them sick, and their body’s going to take a toll on the new organ, so the benefits might not be as large.”

With this in mind, the study's maximal LYFT figure of 14.08 comes, broadly, from a hypothetical scenario in which more otherwise-healthy people receive transplants. Again, the current system tends to prioritize time spent on a wait-list, and some observers might advocate for a system that prioritizes those who are sickest. The policymaking process for kidney transplants may therefore involve recognizing that the biggest gains in patient life-years are not necessarily aligned with other prioritization factors.

“Our results indicate … a dilemma rooted in the tension between these two goals,” the authors write in the paper.

To be clear, Agarwal is not advocating for any one system over another, but conducting data-driven research so that policy officials can make more fully informed decisions in the ongoing, long-term process of trying to refine valuable transplant networks.

“I don’t necessarily think it’s my comparative advantage to make the ethical decisions, but we can at least think about and quantify what some of the tradeoffs are,” Agarwal adds.

Support for the research was provided in part by the National Science Foundation and by the Alfred P. Sloan Foundation. 


A chemist who tinkers with molecules’ structures

By changing how atoms in a molecule are arranged relative to each other, Associate Professor Alison Wendlandt aims to create compounds with new chemical properties.


Many biological molecules exist as “diastereomers” — molecules that have the same chemical structure but different spatial arrangements of their atoms. In some cases, these slight structural differences can lead to significant changes in the molecules’ functions or chemical properties.

As one example, the cancer drug doxorubicin can have heart-damaging side effects in a small percentage of patients. However, a diastereomer of the drug, known as epirubicin, which has a single alcohol group that points in a different direction, is much less toxic to heart cells.

“There are a lot of examples like that in medicinal chemistry where something that seems small, such as the position of a single atom in space, may actually be really profound,” says Alison Wendlandt, an associate professor of chemistry at MIT.

Wendlandt’s lab is focused on designing new tools that can convert these molecules into different forms. Her group is also working on similar tools that can change a molecule into a different constitutional isomer — a molecule that has an atom or chemical group located in a different spot, even though it has the same chemical formula as the original.

“If you have a target molecule and you needed to make it without such a tool, you would have to go back to the beginning and make the whole molecule again to get to the final structure that you wanted,” Wendlandt says.

These tools can also lend themselves to creating entirely new molecules that might be difficult or even impossible to build using traditional chemical synthesis techniques.

“We’re focused on a broad suite of selective transformations, the goal being to make the biggest impact on how you might envision making a molecule,” she says. “If you are able to open up access to the interconversion of molecular structures, you can then think completely differently about how you would make a molecule.”

From math to chemistry

As the daughter of two geologists, Wendlandt found herself immersed in science from a young age. Both of her parents worked at the Colorado School of Mines, and family vacations often involved trips to interesting geological formations.

In high school, she found math more appealing than chemistry, and she headed to the University of Chicago with plans to major in mathematics. However, she soon had second thoughts, after encountering abstract math.

“I was good at calculus and the kind of math you need for engineering, but when I got to college and I encountered topology and N-dimensional geometry, I realized I don’t actually have the skills for abstract math. At that point I became a little bit more open-minded about what I wanted to study,” she says.

Though she didn’t think she liked chemistry, an organic chemistry course in her sophomore year changed her mind.

“I loved the problem-solving aspect of it. I have a very, very bad memory, and I couldn’t memorize my way through the class, so I had to just learn it, and that was just so fun,” she says.

As a chemistry major, she began working in a lab focused on “total synthesis,” a research area that involves developing strategies to synthesize a complex molecule, often a natural compound, from scratch.

Although she loved organic chemistry, a lab accident — an explosion that injured a student in her lab and led to temporary hearing loss for Wendlandt — made her hesitant to pursue it further. When she applied to graduate schools, she decided to go into a different branch of chemistry — chemical biology. She studied at Yale University for a couple of years, but she realized that she didn’t enjoy that type of chemistry and left after receiving a master’s degree.

She worked in a lab at the University of Kentucky for a few years, then applied to graduate school again, this time at the University of Wisconsin. There, she worked in an organic chemistry lab, studying oxidation reactions that could be used to generate pharmaceuticals or other useful compounds from petrochemicals.

After finishing her PhD in 2015, Wendlandt went to Harvard University for a postdoc, working with chemistry professor Eric Jacobsen. There, she became interested in selective chemical reactions that generate a particular isomer, and began studying catalysts that could perform glycosylation — the addition of sugar molecules to other molecules — at specific sites.

Editing molecules

Since joining the MIT faculty in 2018, Wendlandt has worked on developing catalysts that can convert a molecule into its mirror image or an isomer of the original.

In 2022, she and her students developed a tool called a stereo-editor, which can alter the arrangement of chemical groups around a central atom known as a stereocenter. This editor consists of two catalysts that work together to first add enough energy to remove an atom from a stereocenter, then replace it with an atom that has the opposite orientation. That energy input comes from a photocatalyst, which converts captured light into energy.

“If you have a molecule with an existing stereocenter, and you need the other enantiomer, typically you would have to start over and make the other enantiomer. But this new method tries to interconvert them directly, so it gives you a way of thinking about molecules as dynamic,” Wendlandt says. “You could generate any sort of three-dimensional structure of that molecule, and then in an independent step later, you could completely reorganize the 3D structure.”

She has also developed tools that can convert common sugars such as glucose into other isomers, including allose and other sugars that are difficult to isolate from natural sources, and tools that can create new isomers of steroids and alcohols. She is now working on ways to convert six-membered carbon rings to seven- or eight-membered rings, and to add, subtract, or replace some of the chemical groups attached to the rings.

“I’m interested in creating general tools that will allow us to interconvert static structures. So, that may be taking a certain functional group and moving it to another part of the molecule entirely, or taking large rings and making them small rings,” she says. “Instead of thinking of molecules that we assemble as static, we’re thinking about them now as potentially dynamic structures, which could change how we think about making organic molecules.”

This approach also opens up the possibility of creating brand new molecules that haven’t been seen before, Wendlandt says. This could be useful, for example, to create drug molecules that interact with a target enzyme in just the right way.

“There’s a huge amount of chemical space that’s still unknown, bizarre chemical space that just has not been made. That’s in part because maybe no one has been interested in it, or because it’s just too hard to make that specific thing,” she says. “These kinds of tools give you access to isomers that are maybe not easily made.”


Restoring healthy gene expression with programmable therapeutics

CAMP4 Therapeutics is targeting regulatory RNA, whose role in gene expression was first described by co-founder and MIT Professor Richard Young.


Many diseases are caused by dysfunctional gene expression that leads to too much or too little of a given protein. Efforts to cure those diseases include everything from editing genes to inserting new genetic snippets into cells to injecting the missing proteins directly into patients.

CAMP4 is taking a different approach. The company is targeting a lesser-known player in the regulation of gene expression known as regulatory RNA. CAMP4 co-founder and MIT Professor Richard Young has shown that by interacting with molecules called transcription factors, regulatory RNA plays an important role in controlling how genes are expressed. CAMP4’s therapeutics target regulatory RNA to increase the production of proteins and put patients’ levels back into healthy ranges.

The company’s approach holds promise for treating diseases caused by defects in gene expression, such as metabolic diseases, heart conditions, and neurological disorders. Targeting regulatory RNAs as opposed to genes could also offer more precise treatments than existing approaches.

“If I just want to fix a single gene’s defective protein output, I don’t want to introduce something that makes that protein at high, uncontrolled amounts,” says Young, who is also a core member of the Whitehead Institute. “That’s a huge advantage of our approach: It’s more like a correction than a sledgehammer.”

CAMP4’s lead drug candidate targets urea cycle disorders (UCDs), a class of chronic conditions caused by a genetic defect that limits the body’s ability to metabolize and excrete ammonia. A phase 1 clinical trial has shown CAMP4’s treatment is safe and tolerable for humans, and in preclinical studies the company has shown its approach can be used to target specific regulatory RNA in the cells of humans with UCDs to restore gene expression to healthy levels.

“This has the potential to treat very severe symptoms associated with UCDs,” says Young, who co-founded CAMP4 with cancer genetics expert Leonard Zon, a professor at Harvard Medical School. “These diseases can be very damaging to tissues and cause a lot of pain and distress. Even a small effect in gene expression could have a huge benefit to patients, who are generally young.”

Mapping out new therapeutics

Young, who has been a professor at MIT since 1984, has spent decades studying how genes are regulated. It’s long been known that molecules called transcription factors, which orchestrate gene expression, bind to DNA and proteins. Research published in Young’s lab uncovered a previously unknown way in which transcription factors can also bind to RNA. The finding indicated RNA plays an underappreciated role in controlling gene expression.

CAMP4 was founded in 2016 with the initial idea of mapping out the signaling pathways that govern the expression of genes linked to various diseases. But as Young’s lab discovered and then began to characterize the role of regulatory RNA in gene expression around 2020, the company pivoted to focus on targeting regulatory RNA using therapeutic molecules known as antisense oligonucleotides (ASOs), which have been used for years to target specific messenger RNA sequences.

CAMP4 began mapping the active regulatory RNAs associated with the expression of every protein-coding gene and built a database, which it calls its RAP Platform, that helps it quickly identify regulatory RNAs to target specific diseases and select ASOs that will most effectively bind to those RNAs.

Today, CAMP4 is using its platform to develop therapeutic candidates it believes can restore healthy protein levels to patients.

“The company has always been focused on modulating gene expression,” says CAMP4 Chief Financial Officer Kelly Gold MBA ’09. “At the simplest level, the foundation of many diseases is too much or too little of something being produced by the body. That is what our approach aims to correct.”

Accelerating impact

CAMP4 is starting by going after diseases of the liver and the central nervous system, where the safety and efficacy of ASOs has already been proven. Young believes correcting genetic expression without modulating the genes themselves will be a powerful approach to treating a range of complex diseases.

“Genetics is a powerful indicator of where a deficiency lies and how you might reverse that problem,” Young says. “There are many syndromes where we don’t have a complete understanding of the underlying mechanism of disease. But when a mutation clearly affects the output of a gene, you can now make a drug that can treat the disease without that complete understanding.”

As the company continues mapping the regulatory RNAs associated with every gene, Gold hopes CAMP4 can eventually minimize its reliance on wet-lab work and lean more heavily on machine learning to leverage its growing database and quickly identify regRNA targets for every disease it wants to treat.

In addition to its trials in urea cycle disorders, the company plans to launch key preclinical safety studies this year for a candidate targeting seizure disorders with a genetic basis. And as the company continues exploring drug development efforts around the thousands of genetic diseases where increasing protein levels can have a meaningful impact, it’s also considering collaborating with others to accelerate its impact.

“I can conceive of companies using a platform like this to go after many targets, where partners fund the clinical trials and use CAMP4 as an engine to target any disease where there’s a suspicion that gene upregulation or downregulation is the way to go,” Young says.


Beneath the biotech boom

MIT historian Robin Scheffler’s research shows how local regulations helped create certainty and safety principles that enabled an industry’s massive growth.


It’s considered a scientific landmark: A 1975 meeting at the Asilomar Conference Center in Pacific Grove, California, shaped a new safety regime for recombinant DNA, ensuring that researchers would apply caution to gene splicing. Those ideas have been so useful that in the decades since, when new topics in scientific safety arise, there are still calls for Asilomar-type conferences to craft good ground rules.

There’s something missing from this narrative, though: It took more than the Asilomar conference to set today’s standards. The Asilomar concepts were created with academic research in mind — but the biotechnology industry also makes products, and standards for that were formulated after Asilomar.

“The Asilomar meeting and Asilomar principles did not settle the question of the safety of genetic engineering,” says MIT scholar Robin Scheffler, author of a newly published research paper on the subject.

Instead, as Scheffler documents in the paper, Asilomar helped generate further debate, but those industry principles were set down later in the 1970s — first in Cambridge, Massachusetts, where politicians and concerned citizens wanted local biotech firms to be good neighbors. In response, the city passed safety laws for the emerging industry. And rather than heading off to places with zero regulations, local firms — including a fledgling Biogen — stayed put. Over the decades, the Boston area became the world leader in biotech.

Why stay? In essence, regulations gave biotech firms the certainty they needed to grow — and build. Lenders and real-estate developers needed signals that long-term investment in labs and facilities made sense. Generally, as Scheffler notes, even though “the idea that regulations can be anchoring for business does not have a lot of pull” in economic theory, in this case, regulations did matter.

“The trajectory of the industry in Cambridge, including biotechnology companies deciding to accommodate regulation, is remarkable,” says Scheffler. “It’s hard to imagine the American biotechnology industry without this dense cluster in Boston and Cambridge. These things that happened on a very local scale had huge echoes.”

Scheffler’s article, “Asilomar Goes Underground: The Long Legacy of Recombinant DNA Hazard Debates for the Greater Boston Area Biotechnology Industry,” appears in the latest issue of the Journal of the History of Biology. Scheffler is an associate professor in MIT’s Program in Science, Technology, and Society.

Business: Banking on certainty

To be clear, the Asilomar conference of 1975 did produce real results. Asilomar led to a system that helped evaluate projects’ potential risk and determine appropriate safety measures. The U.S. federal government subsequently adopted Asilomar-like principles for research it funded.

But in 1976, debate over the subject arose again in Cambridge, especially following a cover story in a local newspaper, the Boston Phoenix. Residents became concerned that recombinant DNA projects would lead to, hypothetically, new microorganisms that could damage public health.

“Scientists had not considered urban public health,” Scheffler says. “The Cambridge recombinant DNA debate in the 1970s made it a matter of what your neighbors think.”

After several months of hearings, research, and public debate (sometimes involving MIT faculty) stretching into early 1977, Cambridge adopted a somewhat stricter framework than the federal government had proposed for the handling of materials used in recombinant DNA work.

“Asilomar took on a new life in local regulations,” says Scheffler, whose research included government archives, news accounts, industry records, and more.

But a funny thing happened after Cambridge passed its recombinant DNA rules: The nascent biotech industry took root, and other area towns passed their own versions of the Cambridge rules.

“Not only did cities create more safety regulations,” Scheffler observes, “but the people asking for them switched from being left-wing activists or populist mayors to the Massachusetts Biotechnology Council and real estate development concerns.”

Indeed, he adds, “What’s interesting is how quickly safety concerns about recombinant DNA evaporated. Many people against recombinant DNA came to change their thinking.” And while some local residents continued to express concerns about the environmental impact of labs, “those are questions people ask when they no longer worry about the safety of the core work itself.”

Unlike federal regulations, these local laws applied to not only lab research but also products, and as such they let firms know they could work in a stable business environment with regulatory certainty. That mattered financially, and in a specific way: It helped companies build the buildings they needed to produce the products they had invented.

“The venture capital cycle for biotechnology companies was very focused on the research and exciting intellectual ideas, but then you have the bricks and mortar,” Scheffler says, referring to biotech production facilities. “The bricks and mortar is actually the harder problem for a lot of startup biotechnology companies.”

After all, he notes, “Venture capital will throw money after big discoveries, but a banker issuing a construction loan has very different priorities and is much more sensitive to things like factory permits and access to sewers 10 years from now. That’s why all these towns around Massachusetts passed regulations, as a way of assuring that.”

To grow globally, act locally

Of course, one additional reason biotech firms decided to land in the Boston area was the intellectual capital: With so many local universities, there was a lot of industry talent in the region. Local faculty co-founded some of the high-flying firms.

“The defining trait of the Cambridge-Boston biotechnology cluster is its density, right around the universities,” Scheffler says. “That’s a unique feature local regulations encouraged.”

It’s also the case, Scheffler notes, that some biotech firms did engage in venue-shopping to avoid regulations at first, although that was more the case in California, another state where the industry emerged. Still, the Boston-area regulations seemed to assuage both industry and public worries about the subject.

The foundations of biotechnology regulation in Massachusetts contain some additional historical quirks, including the time in the late 1970s when the city of Cambridge mistakenly omitted the recombinant DNA safety rules from its annually published bylaws, meaning the regulations were inactive. Officials at Biogen sent them a reminder to restore the laws to the books.

Half a century on from Asilomar, its broad downstream effects are not just a set of research principles — but also, refracted through the Cambridge episode, key ideas about public discussion and input; reducing uncertainty for business; the particular financing needs of industries; the impact of local and regional regulation; and the openness of startups to recognizing what might help them thrive.

“It’s a different way to think about the legacy of Asilomar,” Scheffler says. “And it’s a real contrast with what some people might expect from following scientists alone.” 


A faster way to solve complex planning problems

By eliminating redundant computations, a new data-driven method can streamline processes like scheduling trains, routing delivery drivers, or assigning airline crews.


When some commuter trains arrive at the end of the line, they must travel to a switching platform to be turned around so they can depart the station later, often from a different platform than the one at which they arrived.

Engineers use software programs called algorithmic solvers to plan these movements, but at a station with thousands of weekly arrivals and departures, the problem becomes too complex for a traditional solver to unravel all at once.

Using machine learning, MIT researchers have developed an improved planning system that reduces the solve time by up to 50 percent and produces a solution that better meets a user’s objective, such as on-time train departures. The new method could also be used for efficiently solving other complex logistical problems, such as scheduling hospital staff, assigning airline crews, or allotting tasks to factory machines.

Engineers often break these kinds of problems down into a sequence of overlapping subproblems that can each be solved in a feasible amount of time. But the overlaps cause many decisions to be needlessly recomputed, so it takes the solver much longer to reach an optimal solution.

The new, artificial intelligence-enhanced approach learns which parts of each subproblem should remain unchanged, freezing those variables to avoid redundant computations. Then a traditional algorithmic solver tackles the remaining variables.

“Often, a dedicated team could spend months or even years designing an algorithm to solve just one of these combinatorial problems. Modern deep learning gives us an opportunity to use new advances to help streamline the design of these algorithms. We can take what we know works well, and use AI to accelerate it,” says Cathy Wu, the Thomas D. and Virginia W. Cabot Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of the Laboratory for Information and Decision Systems (LIDS).

She is joined on the paper by lead author Sirui Li, an IDSS graduate student; Wenbin Ouyang, a CEE graduate student; and Yining Ma, a LIDS postdoc. The research will be presented at the International Conference on Learning Representations.

Eliminating redundancy

One motivation for this research is a practical problem identified by Devin Camille Wilkins, a master’s student in Wu’s entry-level transportation course, who wanted to apply reinforcement learning to a real train-dispatch problem at Boston’s North Station. The transit organization needs to assign many trains to a limited number of platforms where they can be turned around well in advance of their arrival at the station.

This turns out to be a very complex combinatorial scheduling problem — the exact type of problem Wu’s lab has spent the past few years working on.

When faced with a long-term problem that involves assigning a limited set of resources, like factory tasks, to a group of machines, planners often frame the problem as Flexible Job Shop Scheduling.

In Flexible Job Shop Scheduling, each task needs a different amount of time to complete, but tasks can be assigned to any machine. At the same time, each task is composed of operations that must be performed in the correct order.
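The structure of such an instance can be sketched with toy data and a naive greedy rule (hypothetical jobs and machine names; the actual research uses a full algorithmic solver, not this heuristic):

```python
def greedy_schedule(jobs):
    """Assign each operation, in job order, to the machine that can finish
    it earliest. A toy heuristic for illustration only; real solvers search
    the space of assignments far more thoroughly."""
    machines = {m for ops in jobs.values() for op in ops for m in op}
    machine_free = {m: 0 for m in machines}   # when each machine next frees up
    job_ready = {j: 0 for j in jobs}          # when each job's next op may start
    schedule = []
    for job, ops in jobs.items():
        for idx, durations in enumerate(ops):
            # Earliest finish time respects both machine availability and
            # the required operation order within the job.
            best = min(durations,
                       key=lambda m: max(machine_free[m], job_ready[job]) + durations[m])
            start = max(machine_free[best], job_ready[job])
            end = start + durations[best]
            machine_free[best] = end
            job_ready[job] = end
            schedule.append((job, idx, best, start, end))
    return schedule

# Two jobs on two machines; each operation lists its duration on each machine.
jobs = {
    "J1": [{"M1": 3, "M2": 5}, {"M1": 2, "M2": 2}],  # two ordered operations
    "J2": [{"M1": 4, "M2": 3}],
}
print(greedy_schedule(jobs))
```

Even this tiny instance hints at why the problem explodes: every operation multiplies the number of machine-and-ordering combinations a solver must consider.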

Such problems quickly become too large and unwieldy for traditional solvers, so users can employ rolling horizon optimization (RHO) to break the problem into manageable chunks that can be solved faster.

With RHO, a user assigns an initial few tasks to machines in a fixed planning horizon, perhaps a four-hour time window. Then, they execute the first task in that sequence and shift the four-hour planning horizon forward to add the next task, repeating the process until the entire problem is solved and the final schedule of task-machine assignments is created.
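The rolling loop just described can be sketched in a few lines (a minimal illustration; `solve_window` is a stand-in for whatever subproblem solver the planner uses):

```python
def rolling_horizon(tasks, horizon, solve_window):
    """Plan a long task sequence by repeatedly solving a short window,
    committing only the first decision, then sliding the window forward."""
    committed = []
    for start in range(len(tasks)):
        window = tasks[start:start + horizon]
        plan = solve_window(window)   # preliminary assignments for the window
        committed.append(plan[0])     # execute only the window's first decision
    return committed

# Toy usage: a "solver" that simply tags each task with a machine id.
plan = rolling_horizon(["t1", "t2", "t3"], horizon=2,
                       solve_window=lambda w: [(t, "machine-A") for t in w])
print(plan)
```

Note that each window after the first re-solves tasks the previous window already planned; that overlap is exactly the redundancy the researchers target.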

A planning horizon should be longer than any one task’s duration, since the solution will be better if the algorithm also considers tasks that will be coming up.

But when the planning horizon advances, this creates some overlap with operations in the previous planning horizon. The algorithm already came up with preliminary solutions to these overlapping operations.

“Maybe these preliminary solutions are good and don’t need to be computed again, but maybe they aren’t good. This is where machine learning comes in,” Wu explains.

For their technique, which they call learning-guided rolling horizon optimization (L-RHO), the researchers teach a machine-learning model to predict which operations, or variables, should be recomputed when the planning horizon rolls forward.

L-RHO requires data to train the model, so the researchers solved a set of subproblems using a classical algorithmic solver. They took the best solutions — the ones with the most operations that don’t need to be recomputed — and used these as training data.

Once trained, the machine-learning model receives a new subproblem it hasn’t seen before and predicts which operations should not be recomputed. The remaining operations are fed back into the algorithmic solver, which executes the task, recomputes these operations, and moves the planning horizon forward. Then the loop starts all over again.
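One step of that loop might be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: `keep_model` and `solve` are hypothetical stand-ins for the trained classifier and the algorithmic solver.

```python
def l_rho_step(prev_assignments, window_ops, keep_model, solve):
    """One learning-guided rolling-horizon step (sketch): a learned predictor
    decides which overlapping operations to freeze at their previous values,
    and only the remaining operations are re-optimized."""
    # Freeze operations that overlap the previous window and that the
    # model predicts do not need recomputation.
    frozen = {op: prev_assignments[op]
              for op in window_ops
              if op in prev_assignments and keep_model(op)}
    # Hand only the unfrozen operations to the solver; frozen ones act as
    # fixed constants, shrinking the subproblem.
    to_solve = [op for op in window_ops if op not in frozen]
    fresh = solve(to_solve, fixed=frozen)
    return {**frozen, **fresh}

# Toy usage: keep every overlapping op; assign any new op to machine 0.
prev = {"op1": 2, "op2": 1}
result = l_rho_step(prev, ["op1", "op2", "op3"],
                    keep_model=lambda op: True,
                    solve=lambda ops, fixed: {op: 0 for op in ops})
print(result)
```

The payoff is in the size of `to_solve`: every correctly frozen variable is one the exponentially scaling solver never has to branch on.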

“If, in hindsight, we didn’t need to reoptimize them, then we can remove those variables from the problem. Because these problems grow exponentially in size, it can be quite advantageous if we can drop some of those variables,” she adds.

An adaptable, scalable approach

To test their approach, the researchers compared L-RHO to several base algorithmic solvers, specialized solvers, and approaches that only use machine learning. It outperformed them all, reducing solve time by 54 percent and improving solution quality by up to 21 percent.

In addition, their method continued to outperform all baselines when they tested it on more complex variants of the problem, such as when factory machines break down or when there is extra train congestion. It even outperformed additional baselines the researchers created to challenge their solver.

“Our approach can be applied without modification to all these different variants, which is really what we set out to do with this line of research,” she says.

L-RHO can also adapt if the objectives change, automatically generating a new algorithm to solve the problem — all it needs is a new training dataset.

In the future, the researchers want to better understand the logic behind their model’s decision to freeze some variables, but not others. They also want to integrate their approach into other types of complex optimization problems like inventory management or vehicle routing.

This work was supported, in part, by the National Science Foundation, MIT’s Research Support Committee, an Amazon Robotics PhD Fellowship, and MathWorks.


MIT Lincoln Laboratory is a workhorse for national security

The U.S. Air Force and MIT renew contract for operating the federally funded R&D center, a long-standing asset for defense innovation and prototyping.


In 1949, the U.S. Air Force called upon MIT with an urgent need. Soviet aircraft carrying atomic bombs were capable of reaching the U.S. homeland, and the nation was defenseless. A dedicated center — MIT Lincoln Laboratory — was established. The brightest minds from MIT came together in service to the nation, making scientific and engineering leaps to prototype the first real-time air defense system. The commercial sector and the U.S. Department of Defense (DoD) then produced and deployed the system, called SAGE, continent-wide.

The SAGE story still describes MIT Lincoln Laboratory’s approach to national security innovation today. The laboratory works with DoD agencies to identify challenging national security gaps, determines if technology can contribute to a solution, and then executes an R&D program to advance critical technologies. The principal products of these programs are advanced technology prototypes, which are often rapidly fabricated and demonstrated through test and evaluation.

Throughout this process, the laboratory closely coordinates with the DoD and other federal agency sponsors, and then transfers the technology in many forms to industry for manufacturing at scale to meet national needs. For nearly 75 years, these technologies have saved lives, responded to emergencies, fueled the nation’s economy, and impacted the daily life of Americans and our allies. 

"Lincoln Laboratory accelerates the pace of national security technology development, in partnership with the government, private industry, and the broader national security ecosystem," says Melissa Choi, director of MIT Lincoln Laboratory. "We integrate high-performance teams with advanced facilities and the best technology available to bring novel prototypes to life, providing lasting benefits to the United States."

The Air Force and MIT recently renewed their contract for the continued operation of Lincoln Laboratory. The contract was awarded by the Air Force Lifecycle Management Center Strategic Services Division on Hanscom Air Force Base for a term of five years, with an option for an additional five years. Since Lincoln Laboratory’s founding, MIT has operated the laboratory in the national interest for no fee and strictly on a cost-reimbursement basis. The contract award is indicative of the DoD’s continuing recognition of the long-term value of, and necessity for, cutting-edge R&D in service of national security.

Critical contributions to national security

MIT Lincoln Laboratory is the DoD’s largest federally funded research and development center. Sponsored by the under secretary of defense for research and engineering, it contributes to a broad range of national security missions and domains.

Among the most critical domains are air and missile defense. Laboratory researchers pioneer advanced radar systems and algorithms crucial for detecting, tracking, and targeting ballistic missiles and aircraft, and serve as scientific advisors to the Reagan Test Site. They also conduct comprehensive studies on missile defense needs, such as the recent National Defense Authorization Act–directed study on the defense of Guam, and provide actionable insights to Congress.  

MIT Lincoln Laboratory is also at the forefront of space systems and technologies, enabling the military to monitor space activities and communicate at very high bandwidths. Laboratory engineers developed the innovatively curved detector within the Space Surveillance Telescope that allows the U.S. Space Force to track tiny space objects. The laboratory also operates the world's highest-resolution long-range radar for imaging satellites. Recently, the laboratory worked closely with NASA to demonstrate laser communications systems in space, setting a record for the fastest satellite downlink and farthest lasercom link ever achieved. These breakthroughs are heralding a new era in satellite communications for defense and civil missions.

Perhaps most importantly, MIT Lincoln Laboratory is asked to rapidly prototype solutions to urgent and emerging threats. These solutions are both transferred to industry for production and fielded directly to war-fighters, saving lives. To combat improvised explosive devices in Iraq and Afghanistan, the laboratory quickly and iteratively developed several novel systems to detect and defeat explosive devices and insurgent networks. When insurgents were attacking forward-operating bases at night, the laboratory developed an advanced infrared camera system to prevent the attacks. Like other multi-use technologies developed at the laboratory, that system led to a successful commercial startup, which was recently acquired by Anduril.

Responding to domestic crises is also a key part of the laboratory’s mission. After the attacks of 9/11/2001, the laboratory quickly integrated a system to defend the airspace around critical locations in the capital region. More recently, the laboratory’s application of AI to video forensics and physical screening has resulted in commercialized systems deployed in airports and mass transit settings. Over the last decade, the laboratory has adapted its technology for many other homeland security needs, including responses to natural disasters. As one example, researchers repurposed a world-class lidar system first used by the military for terrain mapping to quickly quantify damage after hurricanes.

For all of these efforts, the laboratory exercises responsible stewardship of taxpayer funds, identifying multiple uses for the technologies it develops and introducing disruptive approaches to reduce costs for the government. Sometimes, the system architecture or design results in cost savings, as is the case with the U.S. Air Force's SensorSat; the laboratory’s unique sensor design enabled a satellite 10 times smaller and cheaper than those typically used for space surveillance. Another approach is by creating novel systems from low-cost components. For instance, laboratory researchers discovered a way to make phased-array radars using cell phone electronics instead of traditional expensive components, greatly reducing the cost of deploying the radars for weather and aircraft surveillance.

The laboratory also pursues emerging technology to bring about transformative solutions. In the 1960s, such vision brought semiconductor lasers into the world, and in the 1990s shrunk transistors more than industry imagined possible. Today, laboratory staff are pursuing other new realms: making imagers reconfigurable at the pixel level, designing quantum sensors to transform navigation technology, and developing superconducting electronics to improve computing efficiency.

A long, beneficial relationship between MIT and the DoD

"Lincoln Laboratory has created a deep understanding and knowledge base in core national security missions and associated technologies. We look forward to continuing to work closely with government sponsors, industry, and academia through our trusted, collaborative relationships to address current and future national security challenges and ensure technological superiority," says Scott Anderson, assistant director for operations at MIT Lincoln Laboratory.

"MIT has always been proud to support the nation through its operation of Lincoln Laboratory. The long-standing relationship between MIT and the Department of Defense through this storied laboratory has been a difference-maker for the safety, economy, and industrial power of the United States, and we look forward to seeing the innovations ahead of us," notes Ian Waitz, MIT vice president for research.

Under the terms of the renewed contract, MIT will ensure that Lincoln Laboratory remains ready to meet R&D challenges that are critical to national security.


A visual pathway in the brain may do more than recognize objects

New research using computational vision models suggests the brain’s “ventral stream” might be more versatile than previously thought.


When visual information enters the brain, it travels through two pathways that process different aspects of the input. For decades, scientists have hypothesized that one of these pathways, the ventral visual stream, is responsible for recognizing objects, and that it might have been optimized by evolution to do just that.

Consistent with this, in the past decade, MIT scientists have found that when computational models of the anatomy of the ventral stream are optimized to solve the task of object recognition, they are remarkably good predictors of the neural activities in the ventral stream.

However, in a new study, MIT researchers have shown that when they train these types of models on spatial tasks instead, the resulting models are also quite good predictors of the ventral stream’s neural activities. This suggests that the ventral stream may not be exclusively optimized for object recognition.

“This leaves wide open the question about what the ventral stream is being optimized for. I think the dominant perspective a lot of people in our field believe is that the ventral stream is optimized for object recognition, but this study provides a new perspective that the ventral stream could be optimized for spatial tasks as well,” says MIT graduate student Yudi Xie.

Xie is the lead author of the study, which will be presented at the International Conference on Learning Representations. Other authors of the paper include Weichen Huang, a visiting student through MIT’s Research Science Institute program; Esther Alter, a software engineer at the MIT Quest for Intelligence; Jeremy Schwartz, a sponsored research technical staff member; Joshua Tenenbaum, a professor of brain and cognitive sciences; and James DiCarlo, the Peter de Florez Professor of Brain and Cognitive Sciences, director of the Quest for Intelligence, and a member of the McGovern Institute for Brain Research at MIT.

Beyond object recognition

When we look at an object, our visual system can not only identify the object, but also determine other features such as its location, its distance from us, and its orientation in space. Since the early 1980s, neuroscientists have hypothesized that the primate visual system is divided into two pathways: the ventral stream, which performs object-recognition tasks, and the dorsal stream, which processes features related to spatial location.

Over the past decade, researchers have worked to model the ventral stream using a type of deep-learning model known as a convolutional neural network (CNN). Researchers can train these models to perform object-recognition tasks by feeding them datasets containing thousands of images along with category labels describing the images.

The state-of-the-art versions of these CNNs have high success rates at categorizing images. Additionally, researchers have found that the internal activations of the models are very similar to the activities of neurons that process visual information in the ventral stream. Furthermore, the more similar these models are to the ventral stream, the better they perform at object-recognition tasks. This has led many researchers to hypothesize that the dominant function of the ventral stream is recognizing objects.

However, experimental studies, especially a study from the DiCarlo lab in 2016, have found that the ventral stream appears to encode spatial features as well. These features include the object’s size, its orientation (how much it is rotated), and its location within the field of view. Based on these studies, the MIT team aimed to investigate whether the ventral stream might serve additional functions beyond object recognition.

“Our central question in this project was, is it possible that we can think about the ventral stream as being optimized for doing these spatial tasks instead of just categorization tasks?” Xie says.

To test this hypothesis, the researchers set out to train a CNN to identify one or more spatial features of an object, including rotation, location, and distance. To train the models, they created a new dataset of synthetic images. These images show objects such as tea kettles or calculators superimposed on different backgrounds, in locations and orientations that are labeled to help the model learn them.

The researchers found that CNNs that were trained on just one of these spatial tasks showed a high level of “neuro-alignment” with the ventral stream — very similar to the levels seen in CNN models trained on object recognition.

The researchers measure neuro-alignment using a technique that DiCarlo’s lab has developed, which involves asking the models, once trained, to predict the neural activity that a particular image would generate in the brain. The researchers found that the better the models performed on the spatial task they had been trained on, the more neuro-alignment they showed.
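The prediction step described above can be sketched as a regularized linear map from model activations to recorded neural responses, scored on held-out images. This is a minimal illustration under stated assumptions, not the DiCarlo lab's actual pipeline; the function name, the ridge penalty, and the train/test split are all hypothetical.

```python
import numpy as np

def neuro_alignment(model_features, neural_responses, train_frac=0.8, ridge=1.0):
    """Fit a ridge regression from model-layer activations (rows = images)
    to recorded neural responses, then score held-out predictions by the
    average per-neuron correlation between predicted and actual activity."""
    n = model_features.shape[0]
    idx = np.random.permutation(n)
    split = int(train_frac * n)
    tr, te = idx[:split], idx[split:]
    X_tr, X_te = model_features[tr], model_features[te]
    Y_tr, Y_te = neural_responses[tr], neural_responses[te]
    # closed-form ridge solution: W = (X'X + aI)^-1 X'Y
    d = X_tr.shape[1]
    W = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(d), X_tr.T @ Y_tr)
    Y_hat = X_te @ W
    # Pearson correlation for each recorded neuron, then average
    rs = [np.corrcoef(Y_hat[:, i], Y_te[:, i])[0, 1] for i in range(Y_te.shape[1])]
    return float(np.mean(rs))
```

A model whose activations linearly explain the neural data scores near 1; an unrelated model scores near 0, matching the intuition that better task performance tracks better predictivity.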

“I think we cannot assume that the ventral stream is just doing object categorization, because many of these other functions, such as spatial tasks, also can lead to this strong correlation between models’ neuro-alignment and their performance,” Xie says. “Our conclusion is that you can optimize either through categorization or doing these spatial tasks, and they both give you a ventral-stream-like model, based on our current metrics to evaluate neuro-alignment.”

Comparing models

The researchers then investigated why these two approaches — training for object recognition and training for spatial features — led to similar degrees of neuro-alignment. To do that, they performed an analysis known as centered kernel alignment (CKA), which allows them to measure the degree of similarity between representations in different CNNs. This analysis showed that in the early to middle layers of the models, the representations that the models learn are nearly indistinguishable.
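The linear form of CKA has a compact closed form. A minimal sketch, assuming plain linear CKA on two activation matrices; the study's exact kernel choice and preprocessing may differ.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices (rows = stimuli, columns = units). Returns a value in [0, 1];
    1 means the representations match up to rotation and scale."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # HSIC-style numerator and normalizers, via Frobenius norms
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)
```

Because the score is invariant to orthogonal transformations, two layers can be judged "nearly indistinguishable" even if their individual units differ, which is exactly the kind of comparison the early-to-middle-layer finding relies on.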

“In these early layers, essentially you cannot tell these models apart by just looking at their representations,” Xie says. “It seems like they learn some very similar or unified representation in the early to middle layers, and in the later stages they diverge to support different tasks.”

The researchers hypothesize that even when models are trained to analyze just one feature, they also take into account “non-target” features — those that they are not trained on. When objects have greater variability in non-target features, the models tend to learn representations more similar to those learned by models trained on other tasks. This suggests that the models are using all of the information available to them, which may result in different models coming up with similar representations, the researchers say.

“More non-target variability actually helps the model learn a better representation, instead of learning a representation that’s ignorant of them,” Xie says. “It’s possible that the models, although they’re trained on one target, are simultaneously learning other things due to the variability of these non-target features.”

In future work, the researchers hope to develop new ways to compare different models, in hopes of learning more about how each one develops internal representations of objects based on differences in training tasks and training data.

“There could be still slight differences between these models, even though our current way of measuring how similar these models are to the brain tells us they’re on a very similar level. That suggests maybe there’s still some work to be done to improve upon how we can compare the model to the brain, so that we can better understand what exactly the ventral stream is optimized for,” Xie says.

The research was funded by the Semiconductor Research Corporation and the U.S. Defense Advanced Research Projects Agency.


Bringing manufacturing back to America, one fab lab at a time

A collaborative network of makerspaces has spread from MIT across the country, helping communities make their own products.


Reindustrializing America will require action from not only businesses but also a new wave of people who have the skills, experience, and drive to make things. While many efforts in this area have focused on top-down education and manufacturing initiatives, an organic, grassroots movement has been inspiring a new generation of makers across America for the last 20 years.

The first fab lab was started in 2002 by MIT’s Center for Bits and Atoms (CBA). To teach students to use the digital fabrication research facility, CBA’s leaders began teaching a rapid-prototyping class called MAS.863 (How To Make (almost) Anything). In response to overwhelming demand, CBA collaborated with civil rights activist and MIT adjunct professor Mel King to create a community-scale version of the lab, integrating tools for 3D printing and scanning, laser cutting, precision and large-format machining, molding and casting, and surface-mount electronics, as well as design software.

That was supposed to be the end of the story; they didn’t expect a maker movement. Then another community reached out to get help building their own fab lab. Then another. Today there are hundreds of U.S. fab labs, in nearly every state, in locations ranging from community college campuses to Main Street. The fab labs offer open access to tools and software, as well as education, training, and community to people from all backgrounds.

“In the fab labs you can make almost anything,” says Professor and CBA Director Neil Gershenfeld. “That doesn’t mean everybody will make everything, but they can make things for themselves and their communities. The success of the fab labs suggests the real way to bring manufacturing back to America is not as it was. This is a different notion of agile, just-in-time manufacturing that’s personalized, distributed, and doesn’t have a sharp boundary between producer and consumer.”

Communities of makers

A fab lab opened at Florida A&M University about a year ago, but it didn’t take long for faculty and staff to notice its impact on their students. Denaria Pringley, an elementary education teacher with no experience in STEM, first came to the lab as part of a class requirement. That’s when she realized she could build her own guitar. In a pattern that has repeated itself across the country, Pringley began coming to the lab on nights and weekends, 3D-printing the body of the guitar, drilling together the neck, sanding and polishing the finish, laser engraving pick guards, and stringing everything together. Today, she works in the fab lab and knows how to run every machine in the space.

“Her entire disposition transformed through the fab lab,” says FAMU Dean of Education Sarah Price. “Every day, students make something new. There’s so much creativity going on in the lab it astounds me.”

Gershenfeld says describing how the fab labs work is a bit like describing how the internet works. At a high level, fab labs are spaces to play, create, learn, mentor, and invent. As they started replicating, Gershenfeld and his colleague Sherry Lassiter started the Fab Foundation, a nonprofit that provides operational, technical, and logistical assistance to labs. Last year, The Boston Globe called the global network of thousands of fab labs one of MIT’s most influential contributions of the last 25 years.

Some fab labs are housed in colleges. Others are funded by local governments, businesses, or through donations. Even fab labs operated in part by colleges can be open to anyone, and many of those fab labs partner with surrounding K-12 schools and continuing education programs.

Increasingly, corporate social responsibility programs are investing in fab labs, giving their communities spaces for STEM education, workforce development, and economic development. For instance, Chevron supported the startup of the fab lab at FAMU. Lassiter, the president of the Fab Foundation, notes, “Fab labs have evolved to become community anchor organizations, building strong social connections and resilience in addition to developing technical skills and providing public access to manufacturing capabilities.”

“We’re a community resource,” says Eric Saliim, who serves as a program manager at the fab lab housed in North Carolina Central University. “We have no restrictions for how you can use our fab lab. People make everything from art to car parts, products for their home, fashion accessories, you name it.”

Many fab lab instructors say the labs are a powerful way to make abstract concepts real and spark student interest in STEM subjects.

“More schools should be using fab labs to get kids interested in computer science and coding,” says Scott Simenson, former director of the fab lab at Century College in Minnesota. “This world is going to get a lot more digitally sophisticated, and we need a workforce that’s not only highly trained but also educated around subjects like computer science and artificial intelligence.”

Century College opened its fab lab in 2004 amid years of declining enrollment in its engineering and design programs.

“It’s a great bridge between the theoretical and the applied,” Simenson explains. “Frankly, it helped a lot of engineering students who were disgruntled because they felt like they didn’t get to make enough things with their hands.”

The fab lab has since helped support the creation of Century College programs in digital and additive manufacturing, welding, and bioprinting.

"Working in fab labs establishes a growth mindset for our community as well as our students,” says Kelly Zelesnik, the dean of Lorain County Community College in Ohio. “Students are so under-the-gun to get it right and the grade that they lose sight of the learning. But when they’re in the fab lab, they’re iterating, because nothing ever works the first time."

In addition to offering access to equipment, fab labs foster education, mentorship, and innovation. Businesses often use local fab labs to make prototypes or test new products. Students have started businesses around their art and fashion creations.

Rick Pollack was a software entrepreneur and frequent visitor to the fab lab at Lorain County Community College. Pollack became fascinated with 3D printers and eventually started the additive manufacturing company MakerGear after months of tinkering with the machines in the lab in 2009. MakerGear quickly became one of the most popular producers of 3D printers in the country.

“Everyone wants to talk about innovation with STEM education and business incubation,” Gershenfeld says. “This is delivering on that by filling in the missing scaffolding: the means of production.”

Manufacturing reimagined

Many fab labs begin with tiny spaces in forgotten corners of buildings and campuses. Over time, they attract a motley crew of people who have often struggled in structured, hierarchical classroom settings. Eventually, they become hubs for people of all backgrounds driven by making.

“Fab labs provide access to tools, but what’s really driving their success is the culture of peer-to-peer, project-based learning and production,” Gershenfeld says. “Fab labs don’t separate basic and applied work, short- and long-term goals, play and problem solving. The labs are a very bottom-up distribution of the culture at MIT.”

While the local maker movement won’t replace mass manufacturing, Gershenfeld says that mass manufacturing produces goods for consumers who all want the same thing, while local production can make more interesting things that differ for individuals.

Moreover, Gershenfeld doesn’t believe you can measure the impact of fab labs by looking only at the things produced.

“A significant part of the benefit of these labs is the act of making itself,” he says. “For instance, a fab lab in Detroit led by Blair Evans worked with at-risk youth, delivering better life outcomes than conventional social services. These labs attract interest and then build skills and communities, and so along with the things that get made, the community-building, the knowledge, the connecting, is all as important as the immediate economic impact.”


Hundred-year storm tides will occur every few decades in Bangladesh, scientists report

With projected global warming, the frequency of extreme storms will ramp up by the end of the century, according to a new study.


Tropical cyclones are hurricanes that brew over the tropical ocean and can travel over land, inundating coastal regions. The most extreme cyclones can generate devastating storm tides — seawater that is heightened by the tides and swells onto land, causing catastrophic flood events in coastal regions. A new study by MIT scientists finds that, as the planet warms, the recurrence of destructive storm tides will increase tenfold for one of the hardest-hit regions of the world.

In a study appearing today in One Earth, the scientists report that, for the highly populated coastal country of Bangladesh, what was once a 100-year event could now strike every 10 years — or more often — by the end of the century. 

In a future where fossil fuels continue to burn as they do today, what was once considered a catastrophic, once-in-a-century storm tide will hit Bangladesh, on average, once per decade. And the kind of storm tides that have occurred every decade or so will likely batter the country’s coast more frequently, every few years.

Bangladesh is one of the most densely populated countries in the world, with more than 171 million people living in a region roughly the size of New York state. The country has been historically vulnerable to tropical cyclones, as it is a low-lying delta that is easily flooded by storms and experiences a seasonal monsoon. Some of the most destructive floods in the world have occurred in Bangladesh, where it’s been increasingly difficult for agricultural economies to recover.

The study also finds that Bangladesh will likely experience tropical cyclones that overlap with the months-long monsoon season. Until now, cyclones and the monsoon have occurred at separate times during the year. But as the planet warms, the scientists’ modeling shows that cyclones will push into the monsoon season, causing back-to-back flooding events across the country.

“Bangladesh is very active in preparing for climate hazards and risks, but the problem is, everything they’re doing is more or less based on what they’re seeing in the present climate,” says study co-author Sai Ravela, principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “We are now seeing an almost tenfold rise in the recurrence of destructive storm tides almost anywhere you look in Bangladesh. This cannot be ignored. So, we think this is timely, to say they have to pause and revisit how they protect against these storms.”

Ravela’s co-authors are Jiangchao Qiu, a postdoc in EAPS, and Kerry Emanuel, professor emeritus of atmospheric science at MIT.

Height of tides

In recent years, Bangladesh has invested significantly in storm preparedness, for instance in improving its early-warning system, fortifying village embankments, and increasing access to community shelters. But such preparations have generally been based on the current frequency of storms.

In this new study, the MIT team aimed to provide detailed projections of extreme storm tide hazards, which are flooding events where tidal effects amplify cyclone-induced storm surge, in Bangladesh under various climate-warming scenarios and sea-level rise projections.

“A lot of these events happen at night, so tides play a really strong role in how much additional water you might get, depending on what the tide is,” Ravela explains.

To evaluate the risk of storm tide, the team first applied a method of physics-based downscaling, which Emanuel’s group first developed over 20 years ago and has been using since to study hurricane activity in different parts of the world. The technique involves a low-resolution model of the global ocean and atmosphere that is embedded with a finer-resolution model that simulates weather patterns as detailed as a single hurricane. The researchers then scatter hurricane “seeds” in a region of interest and run the model forward to observe which seeds grow and make landfall over time.

To the downscaled model, the researchers incorporated a hydrodynamical model, which simulates the height of a storm surge, given the pattern and strength of winds at the time of a given storm. For any given simulated storm, the team also tracked the tides, as well as effects of sea level rise, and incorporated this information into a numerical model that calculated the storm tide, or the height of the water, with tidal effects as a storm makes landfall.
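The role of tidal phase in the final water height, which Ravela highlights above, can be illustrated with a toy calculation that adds a sinusoidal tide and a sea-level offset to an hourly surge series. This is a deliberate simplification: the study uses a full hydrodynamical model and real tidal predictions, and every name and parameter below is hypothetical.

```python
import math

def peak_storm_tide(surge_series, tide_amplitude, tide_period_h, high_tide_hour, slr=0.0):
    """Combine an hourly surge time series (meters) with an idealized
    sinusoidal tide and a sea-level-rise offset, returning the peak total
    water level. The sinusoid stands in for a real tidal prediction."""
    levels = []
    for hour, surge in enumerate(surge_series):
        tide = tide_amplitude * math.cos(
            2 * math.pi * (hour - high_tide_hour) / tide_period_h
        )
        levels.append(surge + tide + slr)
    return max(levels)
```

The same surge produces a much higher peak when it lands at high tide than at low tide, which is why the team tracks tides for every simulated storm rather than treating surge alone as the hazard.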

Extreme overlap

With this framework, the scientists simulated tens of thousands of potential tropical cyclones near Bangladesh, under several future climate scenarios, ranging from one that resembles the current day to one in which the world experiences further warming as a result of continued fossil fuel burning. For each simulation, they recorded the maximum storm tides along the coast of Bangladesh and noted the frequency of storm tides of various heights in a given climate scenario.

“We can look at the entire bucket of simulations and see, for this storm tide of say, 3 meters, we saw this many storms, and from that you can figure out the relative frequency of that kind of storm,” Qiu says. “You can then invert that number to a return period.”

A return period is the average time between storms of a given severity. A storm considered a “100-year event” is typically more powerful and destructive than a 10-year event; in this case, it creates more extreme storm tides, and therefore more catastrophic flooding.
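Inverting a relative frequency into a return period, as Qiu describes, reduces to counting exceedances in the simulated catalog. A minimal sketch with hypothetical names; the study's statistical treatment is more involved.

```python
def return_period(tide_heights, threshold, simulated_years):
    """Annual-exceedance return period from a bucket of simulated storms:
    count storm tides at or above the threshold, convert the count to an
    annual frequency, then invert it (period = 1 / frequency)."""
    exceedances = sum(1 for h in tide_heights if h >= threshold)
    if exceedances == 0:
        return float("inf")  # threshold never reached in the simulations
    annual_frequency = exceedances / simulated_years
    return 1.0 / annual_frequency
```

Running the same calculation on catalogs simulated under warmer climates is what reveals the shift: a tide height that recurs every 100 years today recurs every 10 years or less by late century.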

From their modeling, Ravela and his colleagues found that under a scenario of increased global warming, the storms that previously were considered 100-year events, producing the highest storm tide values, can recur every decade or less by late-century. They also observed that, toward the end of this century, tropical cyclones in Bangladesh will occur across a broader seasonal window, potentially overlapping in certain years with the seasonal monsoon season.

“If the monsoon rain has come in and saturated the soil, a cyclone then comes in and it makes the problem much worse,” Ravela says. “People won’t have any reprieve between the extreme storm and the monsoon. There are so many compound and cascading effects between the two. And this only emerges because warming happens.”

Ravela and his colleagues are using their modeling to help experts in Bangladesh better evaluate and prepare for a future of increasing storm risk. And he says that the climate future for Bangladesh is in some ways not unique to this part of the world.

“This climate change story that is playing out in Bangladesh in a certain way will be playing out in a different way elsewhere,” Ravela notes. “Maybe where you are, the story is about heat stress, or amplifying droughts, or wildfires. The peril is different. But the underlying catastrophe story is not that different.”

This research is supported in part by the MIT Climate Resilience Early Warning Systems Climate Grand Challenges project; the Jameel Observatory JO-CREWSNet project; the MIT Weather and Climate Extremes Climate Grand Challenges project; and Schmidt Sciences, LLC.


Engineered bacteria emit signals that can be spotted from a distance

These bacteria, which could be designed to detect pollution or nutrients, could act as sensors to help farmers monitor their crops.


Bacteria can be engineered to sense a variety of molecules, such as pollutants or soil nutrients. In most cases, however, these signals can only be detected by looking at the cells under a microscope, making them impractical for large-scale use.

Using a new method that triggers cells to produce molecules that generate unique combinations of color, MIT engineers have shown that they can read out these bacterial signals from as far as 90 meters away. Their work could lead to the development of bacterial sensors for agricultural and other applications, which could be monitored by drones or satellites.

“It’s a new way of getting information out of the cell. If you’re standing next to it, you can’t see anything by eye, but from hundreds of meters away, using specific cameras, you can get the information when it turns on,” says Christopher Voigt, head of MIT’s Department of Biological Engineering and the senior author of the new study.

In a paper appearing today in Nature Biotechnology, the researchers showed that they could engineer two different types of bacteria to produce molecules that give off distinctive wavelengths of light across the visible and infrared spectra of light, which can be imaged with hyperspectral cameras. These reporting molecules were linked to genetic circuits that detect nearby bacteria, but this approach could also be combined with any existing sensor, such as those for arsenic or other contaminants, the researchers say.

“The nice thing about this technology is that you can plug and play whichever sensor you want,” says Yonatan Chemla, an MIT postdoc who is one of the lead authors of the paper. “There is no reason that any sensor would not be compatible with this technology.”

Itai Levin PhD ’24 is also a lead author of the paper. Other authors include former undergraduate students Yueyang Fan ’23 and Anna Johnson ’22, and Connor Coley, an associate professor of chemical engineering at MIT.

Hyperspectral imaging

There are many ways to engineer bacterial cells so that they can sense a particular chemical. Most of these work by connecting detection of a molecule to an output such as green fluorescent protein (GFP). Such sensors work well for lab studies, but they can't be measured from long distances.

For long-distance sensing, the MIT team came up with the idea to engineer cells to produce hyperspectral reporter molecules, which can be detected using hyperspectral cameras. These cameras, which were first invented in the 1970s, can determine how much of each color wavelength is present in any given pixel. Instead of showing up as simply red or green, each pixel contains information on hundreds of different wavelengths of light.
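The per-pixel spectrum is what makes detection possible: software can compare each pixel's spectrum against a reporter molecule's known signature. Below is a minimal sketch of that idea, using a hypothetical correlation threshold; the `detect_reporter` function and its parameters are illustrative, not the researchers' actual analysis pipeline.

```python
import numpy as np

def detect_reporter(cube, signature, threshold=0.9):
    """Flag pixels whose spectrum correlates strongly with a known
    reporter signature.

    cube: (H, W, B) array -- hyperspectral image with B wavelength bands
    signature: (B,) array -- reference spectrum of the reporter molecule
    """
    H, W, B = cube.shape
    pixels = cube.reshape(-1, B).astype(float)
    # Center each pixel spectrum and the reference spectrum
    pixels -= pixels.mean(axis=1, keepdims=True)
    sig = signature.astype(float) - signature.mean()
    # Normalized correlation between each pixel and the signature
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(sig)
    norms[norms == 0] = 1.0  # flat (featureless) pixels score zero
    corr = pixels @ sig / norms
    return (corr >= threshold).reshape(H, W)
```

A signature with peaks at several wavelengths, as the researchers sought, makes this kind of matching far more reliable than a single-color readout, because random background spectra rarely correlate with a multi-peak pattern.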

Currently, hyperspectral cameras are used for applications such as detecting the presence of radiation. In the areas around Chernobyl, these cameras have been used to measure slight color changes that radioactive metals produce in the chlorophyll of plant cells. Hyperspectral cameras are also used to look for signs of malnutrition or pathogen invasion in plants.

That work inspired the MIT team to explore whether they could engineer bacterial cells to produce hyperspectral reporters when they detect a target molecule.

For a hyperspectral reporter to be most useful, it should have a spectral signature with peaks in multiple wavelengths of light, making it easier to detect. The researchers performed quantum calculations to predict the hyperspectral signatures of about 20,000 naturally occurring cell molecules, allowing them to identify those with the most unique patterns of light emission. Another key feature is the number of enzymes that would need to be engineered into a cell to get it to produce the reporter — a trait that will vary for different types of cells.
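The selection criteria described above, spectral distinctiveness versus enzyme burden, can be pictured as a simple scoring rule. The sketch below is purely illustrative: the weighting and the nearest-neighbor distance measure are hypothetical stand-ins for the study's actual quantum-calculation-based screening.

```python
import numpy as np

def score_candidates(spectra, enzyme_counts,
                     distinct_weight=1.0, enzyme_weight=0.5):
    """Score candidate reporter molecules: reward spectra that are far
    from every other candidate (distinctive), penalize molecules that
    need many engineered enzymes to produce.

    spectra: (n, B) array of predicted spectral signatures
    enzyme_counts: length-n sequence of required enzymes per molecule
    """
    spectra = np.asarray(spectra, dtype=float)
    # Pairwise distances between all candidate spectra
    dists = np.linalg.norm(spectra[:, None, :] - spectra[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)
    # Distinctiveness = distance to the most similar other candidate
    distinctiveness = dists.min(axis=1)
    return distinct_weight * distinctiveness \
        - enzyme_weight * np.asarray(enzyme_counts, dtype=float)
```

Under a rule like this, a molecule whose spectrum sits far from the other roughly 20,000 natural candidates, and which requires few new enzymes, would rank highest, matching the "really different from everything else" criterion Voigt describes.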

“The ideal molecule is one that’s really different from everything else, making it detectable, and requires the fewest number of enzymes to produce it in the cell,” Voigt says.

In this study, the researchers identified two different molecules that were best suited for two types of bacteria. For a soil bacterium called Pseudomonas putida, they used a reporter called biliverdin — a pigment that results from the breakdown of heme. For an aquatic bacterium called Rubrivivax gelatinosus, they used a type of bacteriochlorophyll. For each bacterium, the researchers engineered the enzymes necessary to produce the reporter into the host cell, then linked them to genetically engineered sensor circuits.

“You could add one of these reporters to a bacterium or any cell that has a genetically encoded sensor in its genome. So, it might respond to metals or radiation or toxins in the soil, or nutrients in the soil, or whatever it is you want it to respond to. Then the output of that would be the production of this molecule that can then be sensed from far away,” Voigt says.

Long-distance sensing

In this study, the researchers linked the hyperspectral reporters to circuits designed for quorum sensing, which allow cells to detect other nearby bacteria. They have also shown, in work done after this paper, that these reporting molecules can be linked to sensors for chemicals including arsenic.

When testing their sensors, the researchers deployed them in boxes so they would remain contained. The boxes were placed in fields, deserts, or on the roofs of buildings, and the cells produced signals that could be detected using hyperspectral cameras mounted on drones. The cameras take about 20 to 30 seconds to scan the field of view, and computer algorithms then analyze the signals to reveal whether the hyperspectral reporters are present.

In this paper, the researchers reported imaging from a maximum distance of 90 meters, but they are now working on extending those distances.

They envision that these sensors could be deployed for agricultural purposes such as sensing nitrogen or nutrient levels in soil. For those applications, the sensors could also be designed to work in plant cells. Detecting landmines is another potential application for this type of sensing.

Before being deployed, the sensors would need to undergo regulatory approval by the U.S. Environmental Protection Agency, as well as the U.S. Department of Agriculture if used for agriculture. Voigt and Chemla have been working with both agencies, the scientific community, and other stakeholders to determine what kinds of questions need to be answered before these technologies could be approved.

“We’ve been very busy in the past three years working to understand what are the regulatory landscapes and what are the safety concerns, what are the risks, what are the benefits of this kind of technology?” Chemla says.

The research was funded by the U.S. Department of Defense; the Army Research Office, a directorate of the U.S. Army Combat Capabilities Development Command Army Research Laboratory (the funding supported engineering of environmental strains and optimization of genetically-encoded sensors and hyperspectral reporter biosynthetic pathways); and the Ministry of Defense of Israel.


New method efficiently safeguards sensitive AI training data

The approach maintains an AI model’s accuracy while ensuring attackers can’t extract secret information.


Data privacy comes with a cost. There are security techniques that protect sensitive user data, like customer addresses, from attackers who may attempt to extract them from AI models — but they often make those models less accurate.

MIT researchers recently developed a framework, based on a new privacy metric called PAC Privacy, that could maintain the performance of an AI model while ensuring sensitive data, such as medical images or financial records, remain safe from attackers. Now, they’ve taken this work a step further by making their technique more computationally efficient, improving the tradeoff between accuracy and privacy, and creating a formal template that can be used to privatize virtually any algorithm without needing access to that algorithm’s inner workings.

The team utilized their new version of PAC Privacy to privatize several classic algorithms for data analysis and machine-learning tasks.

They also demonstrated that more “stable” algorithms are easier to privatize with their method. A stable algorithm’s predictions remain consistent even when its training data are slightly modified. Greater stability helps an algorithm make more accurate predictions on previously unseen data.

The researchers say the increased efficiency of the new PAC Privacy framework, and the four-step template one can follow to implement it, would make the technique easier to deploy in real-world situations.

“We tend to consider robustness and privacy as unrelated to, or perhaps even in conflict with, constructing a high-performance algorithm. First, we make a working algorithm, then we make it robust, and then private. We’ve shown that is not always the right framing. If you make your algorithm perform better in a variety of settings, you can essentially get privacy for free,” says Mayuri Sridhar, an MIT graduate student and lead author of a paper on this privacy framework.

She is joined in the paper by Hanshen Xiao PhD ’24, who will start as an assistant professor at Purdue University in the fall; and senior author Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering at MIT. The research will be presented at the IEEE Symposium on Security and Privacy.

Estimating noise

To protect sensitive data that were used to train an AI model, engineers often add noise, or generic randomness, to the model so it becomes harder for an adversary to guess the original training data. This noise reduces a model’s accuracy, so the less noise one can add, the better.

PAC Privacy automatically estimates the smallest amount of noise one needs to add to an algorithm to achieve a desired level of privacy.

The original PAC Privacy algorithm runs a user’s AI model many times on different samples of a dataset. It measures the variance as well as correlations among these many outputs and uses this information to estimate how much noise needs to be added to protect the data.

This new variant of PAC Privacy works the same way but does not need to represent the entire matrix of data correlations across the outputs; it just needs the output variances.

“Because the thing you are estimating is much, much smaller than the entire covariance matrix, you can do it much, much faster,” Sridhar explains. This means that one can scale up to much larger datasets.

Adding noise can hurt the utility of the results, and it is important to minimize utility loss. Due to computational cost, the original PAC Privacy algorithm was limited to adding isotropic noise, which is added uniformly in all directions. Because the new variant estimates anisotropic noise, which is tailored to specific characteristics of the training data, a user could add less overall noise to achieve the same level of privacy, boosting the accuracy of the privatized algorithm.
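The variance-based calibration described above can be sketched in a few lines. This is a toy illustration in the spirit of PAC Privacy, not the authors' actual estimator: it runs an algorithm on random subsamples, measures the per-coordinate spread of the outputs, and adds anisotropic Gaussian noise scaled to that spread. The function name and `noise_scale` parameter are assumptions for illustration.

```python
import numpy as np

def privatize(algorithm, dataset, n_trials=50, noise_scale=1.0, rng=None):
    """Toy variance-based noise calibration: run the algorithm on many
    subsamples, estimate per-coordinate output variability, then release
    one output with anisotropic Gaussian noise added."""
    rng = np.random.default_rng(rng)
    n = len(dataset)
    outputs = []
    for _ in range(n_trials):
        # Run the algorithm on a random half of the dataset
        idx = rng.choice(n, size=n // 2, replace=False)
        outputs.append(algorithm(dataset[idx]))
    outputs = np.asarray(outputs, dtype=float)
    # Anisotropic calibration: one noise scale per output coordinate,
    # rather than a single uniform (isotropic) scale for all of them
    std = outputs.std(axis=0)
    noisy = algorithm(dataset) + rng.normal(0.0, noise_scale * std)
    return noisy, std
```

The sketch also makes the stability connection concrete: a stable algorithm (such as a simple mean) produces nearly identical outputs across subsamples, so `std` is small and little noise is needed.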

Privacy and stability

As she studied PAC Privacy, Sridhar hypothesized that more stable algorithms would be easier to privatize with this technique. She used the more efficient variant of PAC Privacy to test this theory on several classical algorithms.

Algorithms that are more stable have less variance in their outputs when their training data change slightly. PAC Privacy breaks a dataset into chunks, runs the algorithm on each chunk of data, and measures the variance among outputs. The greater the variance, the more noise must be added to privatize the algorithm.

Employing stability techniques to decrease the variance in an algorithm’s outputs would also reduce the amount of noise that needs to be added to privatize it, she explains.

“In the best cases, we can get these win-win scenarios,” she says.

The team showed that these privacy guarantees remained strong regardless of which algorithm they tested, and that the new variant of PAC Privacy required an order of magnitude fewer trials to estimate the noise. They also tested the method in attack simulations, demonstrating that its privacy guarantees could withstand state-of-the-art attacks.

“We want to explore how algorithms could be co-designed with PAC Privacy, so the algorithm is more stable, secure, and robust from the beginning,” Devadas says. The researchers also want to test their method with more complex algorithms and further explore the privacy-utility tradeoff.

“The question now is: When do these win-win situations happen, and how can we make them happen more often?” Sridhar says.

“I think the key advantage PAC Privacy has in this setting over other privacy definitions is that it is a black box — you don’t need to manually analyze each individual query to privatize the results. It can be done completely automatically. We are actively building a PAC-enabled database by extending existing SQL engines to support practical, automated, and efficient private data analytics,” says Xiangyao Yu, an assistant professor in the computer sciences department at the University of Wisconsin at Madison, who was not involved with this study.

This research is supported, in part, by Cisco Systems, Capital One, the U.S. Department of Defense, and a MathWorks Fellowship.