Startup aims to flush away the problem of icky toilet seats

Cleana, founded by a team including Richard Li SM ’24, has developed a self-lifting toilet seat to improve bathroom sanitation.


We’ve all had the unpleasant experience of walking into a bathroom to discover a messy toilet seat. Now, an MIT alumni-founded startup is working to flush away that problem forever.

Cleana, co-founded by CTO Richard Li SM ’24, has developed an antibacterial, self-lifting toilet seat that promises a cleaner, more hygienic bathroom experience for all. Developing a new toilet seat is not quite as sexy as creating a fusion reactor, but Li believes in the importance of the company’s mission.

“A lot of people find it odd at first — a lot of our investors certainly did,” Li says. “This is meaningful to me and how I spent my time the past four years at MIT, and we now have the best solution available for solving this big problem.”

About 1,000 of Cleana’s seats have already been installed in schools, airports, gyms, and stadiums. Customers include Gillette Stadium, the YMCA, and even MIT, which has purchased several of the self-raising sensations for use on campus.

“Everyone who’s had to use a dirty toilet before knows how big a problem they are,” Li says. “Everyone is aware of it, but nobody has been able to address it in a simple, elegant way.”

Li’s foray into the toilet revolution began at the height of the Covid-19 pandemic, when he was a master’s student in MIT’s Department of Mechanical Engineering and germs were top-of-mind for everyone.

In 2020, Li connected with Cleana co-founders Kevin Tang, Max Pounanov, and Andy Chang, who were students at Boston University, and the quest to give the toilet seat autonomy began. Li began by prototyping devices in MIT’s Sidney-Pacific dormitory and MIT D-Lab, working with hand tools, heavy machinery, and 3D printers to test different designs.

There were a number of moments that tested the founders’ commitment to the toilet revolution. Li spent many nights — when public toilets weren’t in use — touring bathrooms around Boston and disassembling hundreds of seats to test the fit of Cleana’s product on different toilet bowls. In a testament to the importance of market research, the founders stood outside the bathroom of a local bowling alley with an installed unit, and attempted to interview users about their experience.

Early feedback, while perhaps awkward, was also encouraging: Cleana’s toilet seats were consistently reported as far cleaner and drier than their standard counterparts. In fact, a months-long study across several sites found that the seat prevented nearly 95 percent of common toilet seat messes in bathrooms where it was deployed.

“It wasn’t a pleasant experience, but it did get us the data we needed,” Li laughs.

Cleana’s smart seat looks a lot like a regular toilet seat with a special handle — but don’t let the standard design fool you; creating Cleana’s seat was a more complex challenge than it may first appear. The company couldn’t just use a set of springs to lift the seat up immediately (which could cause wiping interference, among other issues we won’t detail here). Cleana ultimately went through three major design pivots before settling on today’s product.

Cleana’s current seat is battery-free and in fact uses no electronics. It lifts mechanically after a predetermined amount of time, removing it from areas prone to common messes. Importantly, the seat detects when it is in use and relies on a clever system to adjust when it should lift itself.

Cleana’s seat especially shines in public men’s and all-gender restrooms, where negligent behavior results in a considerable amount of splatter. The seat also incorporates antimicrobial agents to prevent the spread of germs, and its special handle spares users from having to touch the rest of the seat.

Customers have reported fewer toilet seat messes and less maintenance with Cleana’s seats.

“It saves the cleaning staff a lot of time,” Li says. “Sometimes, businesses had to send cleaning staff into their bathrooms multiple times a day to check on the toilet seat to make sure it’s clean. Now they’re finding that every time they go in, it’s already clean.”

The team is also creating a premium version of the seat, geared toward the home, that automatically lowers the toilet seat and lid instead of lifting it. The product uses the same technology as its commercial counterpart, simply flipped around. The invention aims to end the age-old debate over lowering the toilet seat, while also protecting young children, pets, and dropped items from the risks of an open bowl.

“It’s funny developing a second product which is essentially the opposite of our first, but we’ve been absolutely blown away by the interest in it, especially amongst homeowners and developers,” says Li. “Several large plumbing companies with interest in the product have also conducted independent surveys, finding that more than half of consumers may adopt them in the coming years.”

Ultimately, Li wants to get to a point where he can walk into any random gas station or restaurant and, when nature calls, find his company’s smart seat waiting for him. That dream became real on a recent trip to the Roche Brothers in Watertown, Massachusetts, when Li was delighted to discover his sparkling seat in the restroom by chance.

But Li knows Cleana’s team still has a long way to go before the toilet revolution is complete. That’s why this past spring, when Li finally stood to collect his diploma at MIT’s Commencement, he wore not a sash around his neck, but a toilet seat.

Others kept their distance, but Li knew it was clean.


China-based emissions of three potent climate-warming greenhouse gases spiked in past decade

Two studies pinpoint their likely industrial sources and mitigation opportunities.


When it comes to heating up the planet, not all greenhouse gases are created equal. They vary widely in their global warming potential (GWP), a measure of how much infrared thermal radiation a greenhouse gas would absorb over a given time frame once it enters the atmosphere. For example, measured over a 100-year period, the GWP of methane is about 28 times that of carbon dioxide (CO2), and the GWPs of a class of greenhouse gases known as perfluorocarbons (PFCs) are thousands of times that of CO2. The lifespans of greenhouse gases in the atmosphere also vary widely: methane persists for around 10 years; CO2 for over 100 years; and PFCs for up to tens of thousands of years.
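As a back-of-envelope illustration of what these numbers mean, the sketch below (in Python) converts emissions of a gas into CO2-equivalent tonnes using 100-year GWPs. The methane value is the one quoted above; the PFC-14 figure is an illustrative round number, not a value from the studies.

```python
# 100-year global warming potentials: CH4 as quoted in the text; the PFC-14
# value is an illustrative round number, not a figure from the studies
GWP_100 = {"CO2": 1, "CH4": 28, "PFC-14": 7_000}

def co2_equivalent(tonnes: float, gas: str) -> float:
    """Convert tonnes of a greenhouse gas into tonnes of CO2-equivalent."""
    return tonnes * GWP_100[gas]

# One tonne of PFC-14 traps as much heat over a century as ~7,000 tonnes of CO2
print(co2_equivalent(1.0, "PFC-14"))  # 7000.0
```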

Given the high GWPs and lifespans of PFCs, their emissions could pose a major roadblock to achieving the aspirational goal of the Paris Agreement on climate change — to limit the increase in global average surface temperature to 1.5 degrees Celsius above preindustrial levels. Now, two new studies based on atmospheric observations inside China and high-resolution atmospheric models show a rapid rise in Chinese emissions over the last decade (2011 to 2020 or 2021) of three PFCs: tetrafluoromethane (PFC-14) and hexafluoroethane (PFC-116) (results in PNAS), and perfluorocyclobutane (PFC-318) (results in Environmental Science & Technology).

Both studies find that Chinese emissions have played a dominant role in driving up global emission levels for all three PFCs.

The PNAS study identifies substantial PFC-14 and PFC-116 emission sources in the less-populated western regions of China from 2011 to 2021, likely due to the concentration of the aluminum industry in these regions. The semiconductor industry also contributes to some of the emissions detected in the more economically developed eastern regions. These emissions are byproducts of aluminum smelting, or occur during the use of the two PFCs in the production of semiconductors and flat panel displays. During the observation period, emissions of both gases in China rose by 78 percent, accounting for most of the increase in global emissions of these gases.

The ES&T study finds that Chinese PFC-318 emissions rose by 70 percent during 2011-20 — contributing more than half of the global increase in emissions of this gas — and originated primarily in eastern China. The regions with high PFC-318 emissions overlap with geographical areas densely populated with factories that produce polytetrafluoroethylene (PTFE, commonly used for nonstick cookware coatings), implying that PTFE factories are major sources of PFC-318 emissions in China. In these factories, PFC-318 is formed as a byproduct.

“Using atmospheric observations from multiple monitoring sites, we not only determined the magnitudes of PFC emissions, but also pinpointed the possible locations of their sources,” says Minde An, a postdoc at the MIT Center for Global Change Science (CGCS), and corresponding author of both studies. “Identifying the actual source industries contributing to these PFC emissions, and understanding the reasons for these largely byproduct emissions, can provide guidance for developing region- or industry-specific mitigation strategies.”

“These three PFCs are largely produced as unwanted byproducts during the manufacture of otherwise widely used industrial products,” says MIT professor of atmospheric sciences Ronald Prinn, director of both the MIT Joint Program on the Science and Policy of Global Change and CGCS, and a co-author of both studies. “Phasing out emissions of PFCs as early as possible is highly beneficial for achieving global climate mitigation targets and is likely achievable by recycling programs and targeted technological improvements in these industries.”

Findings in both studies were obtained, in part, from atmospheric observations collected from nine stations within a Chinese network, including one station from the Advanced Global Atmospheric Gases Experiment (AGAGE) network. For comparison, global total emissions were determined from five globally distributed, relatively unpolluted “background” AGAGE stations, as reported in the latest United Nations Environment Program and World Meteorological Organization Ozone Assessment report.


“The dance between autonomy and affinity creates morality”

Philosophy doctoral student Abe Mathew is both studying philosophy and questioning some of its deeply held ideas.


MIT philosophy doctoral student Abe Mathew believes individual rights play an important role in protecting the autonomy we value. But he also thinks we risk serious dysfunction if we ignore the importance of supporting and helping others.

“We should also acknowledge another feature of our moral lives,” he says, “namely, our need for affinity or closeness with other human beings, and our continued reliance on them to live flourishing lives in the world.”

Philosophy can be an important tool in understanding how humans interact with one another, he says. “I study moral obligation and rights, how the two relate, and the role they have to play in how we relate to one another,” Mathew adds.

Mathew asks that we think of autonomy and affinity as opposing forces — an idea he attributes to MIT philosopher, professor, and mentor Kieran Setiya. Autonomy pushes people farther from us, and affinity pulls people closer, Mathew says.

“The dance between autonomy and affinity creates morality,” Mathew adds.

Mathew is investigating one of moral philosophy’s foundational ideas — that every obligation we owe to another person correlates to a right that they have against us. The “Correlativity Thesis” is widely taken for granted, he says.

“A common example that’s used to motivate the Correlativity Thesis is a case of a promise,” Mathew explains. “If I promise to meet you for coffee at 11, then I have a moral obligation to meet you for coffee at 11, and you have a right to meet me at 11.” While Mathew believes this is how promising works, he doesn’t think the Correlativity Thesis is true across the board.

“There isn’t necessarily a one-to-one relationship between rights and obligations,” he says.

“We need folks’ help to do things”

Before coming to MIT, Mathew majored in philosophy and minored in ethics, law, and society as an undergraduate at the University of Toronto. Upon graduating in 2020, he was awarded the prestigious John Black Aird Scholarship, given each year to the university’s top undergraduate.

Now at MIT, Mathew says his research is based on the value of shared responsibility.

“We need folks’ help to do things,” he says.  

When we lose sight of moral values, our societal connections can fall away, he argues.

“Mutual cooperation makes our lives possible,” Mathew says.

His research suggests alternatives to the idea that rights demand obligations.

“Morality puts a certain kind of pressure on us to ‘pay it forward’ — it requires us to do for others what was once done for us,” Mathew says. “If we don’t, we’re making an exception of ourselves; in essence, we're saying, ‘I was worthy of that help from others, but no one else is worthy of being helped by me.’”

Mathew also values the notion of paying it forward because he’s seen its value in his life. “I’ve encountered so many people who’ve gone above and beyond that I owe them,” he says. 

A valuable social compact

Mathew has been extensively involved in “public philosophy.” For example, he’s organized public events at MIT, like the successful “Ask a Philosopher Anything” panel in the Stata Center lobby.

Mathew’s work leading the local chapter of Corrupt the Youth, a philosophy outreach program focused on bringing philosophy to high school students from historically marginalized groups, is an extension of his belief in our shared responsibility for one another — of “paying it forward.”

“The reason I discovered philosophy was because of my instructors in college who not only introduced me to the subject, but also cultivated my enthusiasm for it and mentored me,” he says. “Our moral theorizing should take into account the kinds of creatures we are: vulnerable human beings who are constantly in need of each other to get by in the world.”

Morality, Mathew says, gives us a tool — the social practice of forgiving — through which we can coexist, repair relationships we damage, and lead our lives together.

Mathew wants moral philosophers to consider their ideas’ practical, real-world applications. His views derive, in part, from notions of moral responsibility: those who’ve been given a lot, he believes, have a greater responsibility toward others. These kinds of social systems can be continually improved by paying good deeds forward, he says.

“Moral philosophy should help build a world that allows for our mutual benefit,” Mathew says.


Machine learning unlocks secrets to advanced alloys

An MIT team uses computer models to measure atomic patterns in metals, essential for designing custom materials for use in aerospace, biomedicine, electronics, and more.


The concept of short-range order (SRO) — the arrangement of atoms over small distances — in metallic alloys has been underexplored in materials science and engineering. But the past decade has seen renewed interest in quantifying it, since decoding SRO is a crucial step toward developing tailored high-performing alloys, such as stronger or heat-resistant materials.

Understanding how atoms arrange themselves is no easy task: any description must be verified using intensive lab experiments or computer simulations based on imperfect models. These hurdles have made it difficult to fully explore SRO in metallic alloys.

But Killian Sheriff and Yifan Cao, graduate students in MIT’s Department of Materials Science and Engineering (DMSE), are using machine learning to quantify, atom-by-atom, the complex chemical arrangements that make up SRO. Under the supervision of Assistant Professor Rodrigo Freitas, and with the help of Assistant Professor Tess Smidt in the Department of Electrical Engineering and Computer Science, their work was recently published in Proceedings of the National Academy of Sciences.

Interest in understanding SRO is linked to the excitement around advanced materials called high-entropy alloys, whose complex compositions give them superior properties.

Typically, materials scientists develop alloys by using one element as a base and adding small quantities of other elements to enhance specific properties. The addition of chromium to nickel, for example, makes the resulting metal more resistant to corrosion.

Unlike most traditional alloys, high-entropy alloys have several elements, from three up to 20, in nearly equal proportions. This offers a vast design space. “It’s like you’re making a recipe with a lot more ingredients,” says Cao.

The goal is to use SRO as a “knob” to tailor material properties by mixing chemical elements in high-entropy alloys in unique ways. This approach has potential applications in industries such as aerospace, biomedicine, and electronics, driving the need to explore permutations and combinations of elements, Cao says.

Capturing short-range order

Short-range order refers to the tendency of atoms to form chemical arrangements with specific neighboring atoms. While a superficial look at an alloy’s elemental distribution might indicate that its constituent elements are randomly arranged, it is often not so. “Atoms have a preference for having specific neighboring atoms arranged in particular patterns,” Freitas says. “How often these patterns arise and how they are distributed in space is what defines SRO.”

Understanding SRO unlocks the keys to the kingdom of high-entropy materials. Unfortunately, not much is known about SRO in high-entropy alloys. “It’s like we’re trying to build a huge Lego model without knowing what’s the smallest piece of Lego that you can have,” says Sheriff.

Traditional methods for understanding SRO involve small computational models, or simulations with a limited number of atoms, providing an incomplete picture of complex material systems. “High-entropy materials are chemically complex — you can’t simulate them well with just a few atoms; you really need to go a few length scales above that to capture the material accurately,” Sheriff says. “Otherwise, it’s like trying to understand your family tree without knowing one of the parents.”

SRO has also been calculated using basic mathematics: counting the immediate neighbors of a few atoms and computing what that distribution might look like on average. Despite its popularity, the approach has limitations, as it offers an incomplete picture of SRO.
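To make the neighbor-counting idea concrete, here is a minimal sketch in the spirit of the classic Warren-Cowley order parameter (an assumption; the article does not name the specific metric). For each pair of species, it compares how often they actually neighbor each other against what random mixing would predict: a value of zero means random mixing, and negative values mean the two species prefer each other as neighbors.

```python
from collections import defaultdict

def warren_cowley(types, neighbors):
    """Pairwise Warren-Cowley parameters: alpha_ij = 1 - P(j | neighbor of i) / c_j.

    types: species label for each atom; neighbors: atom index -> neighbor indices.
    """
    species = sorted(set(types))
    conc = {s: types.count(s) / len(types) for s in species}

    pair_counts = defaultdict(int)   # how often species a has a neighbor of species b
    total_from = defaultdict(int)    # total neighbor slots seen from species a
    for i, nbrs in neighbors.items():
        for j in nbrs:
            pair_counts[(types[i], types[j])] += 1
            total_from[types[i]] += 1

    return {(a, b): 1.0 - (pair_counts[(a, b)] / total_from[a]) / conc[b]
            for a in species for b in species}

# Toy example: a 4-atom ring of perfectly alternating Cu and Ni
types = ["Cu", "Ni", "Cu", "Ni"]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(warren_cowley(types, neighbors))  # alpha(Cu, Ni) = -1: strong ordering
```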

Fortunately, researchers are leveraging machine learning to overcome the shortcomings of traditional approaches for capturing and quantifying SRO.

Hyunseok Oh, assistant professor in the Department of Materials Science and Engineering at the University of Wisconsin at Madison and a former DMSE postdoc, is excited about investigating SRO more fully. Oh, who was not involved in this study, explores how to leverage alloy composition, processing methods, and their relationship to SRO to design better alloys. “The physics of alloys and the atomistic origin of their properties depend on short-range ordering, but the accurate calculation of short-range ordering has been almost impossible,” says Oh. 

A two-pronged machine learning solution

To study SRO using machine learning, it helps to picture the crystal structure in high-entropy alloys as a connect-the-dots game in a coloring book, Cao says.

“You need to know the rules for connecting the dots to see the pattern.” And you need to capture the atomic interactions with a simulation that is big enough to fit the entire pattern. 

First, understanding the rules meant reproducing the chemical bonds in high-entropy alloys. “There are small energy differences in chemical patterns that lead to differences in short-range order, and we didn’t have a good model to do that,” Freitas says. The model the team developed is the first building block in accurately quantifying SRO.

The second part of the challenge, ensuring that researchers get the whole picture, was more complex. High-entropy alloys can exhibit billions of chemical “motifs” — combinations of atomic arrangements. Identifying these motifs from simulation data is difficult because they can appear in symmetrically equivalent forms — rotated, mirrored, or inverted. At first glance, they may look different but still contain the same chemical bonds.

The team solved this problem by employing 3D Euclidean neural networks. These advanced computational models allowed the researchers to identify chemical motifs from simulations of high-entropy materials with unprecedented detail, examining them atom-by-atom.
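The team’s Euclidean networks are beyond a short sketch, but the core requirement — that rotated, mirrored, or inverted copies of a motif map to the same representation — can be illustrated with a toy invariant fingerprint built from interatomic distances (a simplification for intuition, not the team’s method):

```python
from collections import defaultdict
import numpy as np

def motif_fingerprint(positions, species):
    """A rotation/mirror-invariant fingerprint of a local atomic motif:
    for each species pair, the sorted list of interatomic distances."""
    fp = defaultdict(list)
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            key = tuple(sorted((species[i], species[j])))
            fp[key].append(np.linalg.norm(positions[i] - positions[j]))
    return {k: np.sort(v) for k, v in fp.items()}

rng = np.random.default_rng(0)
pos = rng.normal(size=(5, 3))                 # a random 5-atom motif
spec = ["Cr", "Co", "Ni", "Ni", "Cr"]

q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # a random rotation/mirror
fp1, fp2 = motif_fingerprint(pos, spec), motif_fingerprint(pos @ q.T, spec)
assert all(np.allclose(fp1[k], fp2[k]) for k in fp1)  # same motif, same fingerprint
```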

The final task was to quantify the SRO. Freitas used machine learning to evaluate the different chemical motifs and tag each with a number. When researchers want to quantify the SRO for a new material, they run it by the model, which sorts it in its database and spits out an answer.

The team also invested additional effort in making their motif identification framework more accessible. “We have this sheet of all possible permutations of [SRO] already set up, and we know what number each of them got through this machine learning process,” Freitas says. “So later, as we run into simulations, we can sort them out to tell us what that new SRO will look like.” The neural network easily recognizes symmetry operations and tags equivalent structures with the same number.

“If you had to compile all the symmetries yourself, it’s a lot of work. Machine learning organized this for us really quickly and in a way that was cheap enough that we could apply it in practice,” Freitas says.

Enter the world’s fastest supercomputer

This summer, Cao, Sheriff, and the team will have a chance to explore how SRO can change under routine metal processing conditions, like casting and cold-rolling, through the U.S. Department of Energy’s INCITE program, which allows access to Frontier, the world’s fastest supercomputer.

“If you want to know how short-range order changes during the actual manufacturing of metals, you need to have a very good model and a very large simulation,” Freitas says. The team already has a strong model; it will now leverage INCITE’s computing facilities for the robust simulations required.

“With that we expect to uncover the sort of mechanisms that metallurgists could employ to engineer alloys with pre-determined SRO,” Freitas adds.

Sheriff is excited about the research’s many promises. One is the 3D information that can be obtained about chemical SRO. Whereas traditional transmission electron microscopes and other methods are limited to two-dimensional data, physical simulations can fill in the dots and give full access to 3D information, Sheriff says.

“We have introduced a framework to start talking about chemical complexity,” Sheriff explains. “Now that we can understand this, there’s a whole body of materials science on classical alloys to develop predictive tools for high-entropy materials.”

That could lead to the purposeful design of new classes of materials instead of simply shooting in the dark.

The research was funded by the MathWorks Ignition Fund, MathWorks Engineering Fellowship Fund, and the Portuguese Foundation for International Cooperation in Science, Technology and Higher Education in the MIT–Portugal Program.


Math program promotes global community for at-risk Ukrainian high schoolers

“Our hope is that our students grow and mature as scholars and help rebuild the intellectual potential of Ukraine after the devastating war.”


When Sophia Breslavets first heard about Yulia’s Dream, the MIT Department of Mathematics’ Program for Research in Mathematics, Engineering, and Science (PRIMES) for Ukrainian students, Russia had just invaded her country, and she and her family lived in a town 20 miles from the Russian border.

Breslavets had attended a school that emphasized mathematics and physics, took math classes on weekends and during summer breaks, and competed in math Olympiads. “Math was really present in our lives,” she says. 

But the war shifted her studies online. “It still wasn’t like a fully functioning online school,” she recalls. “You can’t socialize.”

So she was grateful to be accepted to the MIT program in 2022. “Yulia’s Dream was a great thing to happen to me personally, because in the beginning, when the war was just starting, I didn’t know what to do. This was just a great thing to take your mind off of what’s going on outside your window, and you can just kind of get yourself into that and know that you have some work to do, and that was huge.”

Second time around

Breslavets just finished her second year in the online enrichment program, which offers Ukrainian high schoolers small-group math instruction in their native language and in English, led by mentors from around the world. Students wrap up the program by presenting their papers at a conference; several of those papers are published on arXiv.org. This year’s conference featured a guest talk by Professor Pavlo Pylyavskyy of the University of Minnesota Twin Cities, who discussed “Incidences and Tilings,” a joint work with Professor Sergey Fomin of the University of Michigan.

The PRIMES program first organized Yulia’s Dream in 2022, named in memory of Yulia Zdanovska, a talented mathematician and computer scientist who was a teacher with Teach for Ukraine. She was 21 when she was killed in 2022 during Russian shelling in her home city of Kharkiv.

The program fulfills one of PRIMES’s goals, to expose students to the world community of research mathematics by connecting them with early-career mentors. Students are referred by Ukrainian math teachers and by leaders at math competitions and math camps, and must then solve a challenging entrance problem set.

Yulia’s Dream is coordinated by Dmytro Matvieievskyi, a postdoc at the Kavli Institute in Tokyo who graduated from School #27 of Kharkiv and won a bronze medal at the 2012 International Mathematical Olympiad (IMO) as part of the Ukrainian team.

In its first year, from 2022 to 2023, the program drew 48 students in Phase I (reading) and 33 students in Phase II (reading and research). “Our expectation for 2022-23 was that each of six research groups would produce a research paper, and they all did, and one group continued working and produced an extra paper a few months after, for a total of seven papers. Three papers are now on arXiv.org, which is a mark of quality. This went beyond our expectations.”

This past year, the program provided guided reading and research supervision to 32 students. “We conduct thorough selection and provide opportunities to all Ukrainian students capable of doing advanced reading and/or research at the requisite level,” says PRIMES’s director Slava Gerovitch PhD ’99.

MIT pipeline

Several students participated in both years, and at least two have been accepted to MIT.

One of those students is two-time Yulia’s Dream participant Nazar Korniichuk, who had attended a high school in Kyiv that specialized in mathematics and physics when his education was disrupted by the war. 

“I was confused and did not know which way I should go,” he recalls. “But then I saw the program Yulia’s Dream, and the desire to try real mathematical research ignited.”

In his first year in the program, participation was a challenge. “On the one hand, it was very difficult, because in certain periods there was no electricity and no water. There was always stress and uncertainty about tomorrow. But on the other hand, because there was a war, it motivated me to do mathematics even more, especially during periods when there was no electricity or water.”

He did complete his paper, with Kostiantyn Molokanov and Severyn Khomych, and with mentor Darij Grinberg PhD ’16, a professor of mathematics at Drexel University: “The Pak–Postnikov and Naruse skew hook length formulas: A new proof” (2 Oct 2023; arXiv.org, 27 Oct 2023).

Korniichuk completed his second round from his new home in Newton, Massachusetts, to which his family had moved last summer. At the recent conference, he presented his paper, with co-authors Kostiantyn Molokanov and Severyn Khomych, “Affine root systems via Lyndon words,” that they worked on with mentor Professor Oleksandr Tsymbaliuk of Purdue University.

“Yulia’s Dream was a very unique experience for me,” says Korniichuk, who plans to study math and computer science at MIT. “I had the opportunity to work on a difficult topic for a long time and then take part in writing an article. Although these years have been difficult, this program encouraged me to go forward.”

Real research

What makes the program work is providing a university level of instruction in mathematics research, to prepare high school students for top mathematics programs. In this case, it provides Ukrainian students an alternative route to reach their educational goals.

The core philosophy of the Yulia’s Dream experience is to provide “the best possible approximation to real mathematical research,” math professor and PRIMES chief research advisor Pavel Etingof told attendees at the 2024 conference. Etingof was born in Ukraine.

“In particular, all projects have to be real — i.e., of interest to professional research mathematicians — and the reading groups should be a bridge towards real mathematics as well. Also, the time frame of Yulia’s Dream is closer to that of real mathematical research than it is in any other high school research program: the students work on their projects for a whole year!”

Other principles include an emphasis on writing and collaboration, with students working on teams with undergraduates, graduate students, postdocs, and faculty. There is also an emphasis on computer-assisted math, which “not only allows participation of high school students as equal members of our research teams, but also allows them to grasp abstract mathematical notions more easily,” says Etingof. “If such notions (such as group, ring, module, etc.) have an incarnation in the familiar digital world, they are less scary.”

Breslavets says that she especially appreciates the collaborative part of the program. Now 16, Breslavets just finished her second year with Yulia’s Dream and, with Andrii Smutchak, presented “Double groupoids,” mentored by University of Alberta professor Harshit Yadav. She says that they began working on the paper in October, and it took about three months to write.

This year’s session was easier for her to participate in, because in summer 2022, her parents found her a host family in Connecticut so that she could transfer to St. Bernard’s School. Even with her new school’s great curriculum, she is grateful for the Yulia’s Dream program.

“Our high school program is considered to be advanced, and we have a class that’s called math research, but it’s definitely not the same, because [with Yulia’s Dream] you’re working with people who actually do that for a living,” she says. “I learned a lot from both of my mentors. It’s so collaborative. They can give you feedback, and they can be honest about it.”

She says she misses her Ukrainian math community, which drifted apart after the Covid-19 pandemic and because of the war, but reports finding a new one with Yulia’s Dream. “I actually met a lot of new people,” she says.

Group collaboration is a huge goal for PRIMES director Slava Gerovitch.

“Yulia’s Dream reflects the international nature of the mathematical community, with the mentors coming from different countries and working together with the students to advance knowledge for the whole of humanity. Our hope is that our students grow and mature as scholars and help rebuild the intellectual potential of Ukraine after the devastating war,” says Gerovitch.

Applications for next year’s program are now open. Math graduate students and postdocs are also invited to apply to be a mentor. Weekly meetings begin in October, and culminate in a June 2025 conference to present papers.


Creating and verifying stable AI-controlled systems in a rigorous and flexible way

Neural network controllers provide complex robots with stability guarantees, paving the way for the safer deployment of autonomous vehicles and industrial machines.


Neural networks have made a seismic impact on how engineers design controllers for robots, catalyzing more adaptive and efficient machines. Still, these brain-like machine-learning systems are a double-edged sword: Their complexity makes them powerful, but it also makes it difficult to guarantee that a robot powered by a neural network will safely accomplish its task.

The traditional way to verify safety and stability is through techniques called Lyapunov functions. If you can find a Lyapunov function whose value consistently decreases, then you can know that unsafe or unstable situations associated with higher values will never happen. For robots controlled by neural networks, though, prior approaches for verifying Lyapunov conditions didn’t scale well to complex machines.
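For intuition, here is a minimal numerical sketch of Lyapunov verification for a simple linear system — a textbook setting, not the paper’s neural-network controllers — using the standard continuous Lyapunov equation from SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A stable linear system x' = A x (a damped oscillator, chosen for illustration)
A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])

# Solve A^T P + P A = -Q; then V(x) = x^T P x is a Lyapunov function
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

V = lambda x: x @ P @ x                       # candidate Lyapunov function
Vdot = lambda x: x @ (A.T @ P + P @ A) @ x    # its derivative along trajectories

# Spot-check the two conditions on random states: V > 0 and dV/dt < 0 away from 0
xs = np.random.default_rng(1).normal(size=(1000, 2))
assert all(V(x) > 0 and Vdot(x) < 0 for x in xs)
print("V(x) = x^T P x certifies stability of x' = A x")
```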

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and elsewhere have now developed new techniques that rigorously certify Lyapunov calculations in more elaborate systems. Their algorithm efficiently searches for and verifies a Lyapunov function, providing a stability guarantee for the system. This approach could potentially enable safer deployment of robots and autonomous vehicles, including aircraft and spacecraft.

To outperform previous algorithms, the researchers found a frugal shortcut to the training and verification process. They generated cheaper counterexamples — for example, adversarial data from sensors that could’ve thrown off the controller — and then optimized the robotic system to account for them. Understanding these edge cases helped machines learn how to handle challenging circumstances, which enabled them to operate safely in a wider range of conditions than previously possible. Then, they developed a novel verification formulation that enables the use of a scalable neural network verifier, α,β-CROWN, to provide rigorous worst-case scenario guarantees beyond the counterexamples.
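In spirit, that training loop alternates between fitting a Lyapunov candidate and hunting for states where its conditions fail. Below is a toy one-dimensional sketch of the pattern (an assumption about the general scheme, not the paper’s algorithm, which trains neural networks and certifies them with α,β-CROWN):

```python
import math
import random

# Toy counterexample-guided loop: fit V(x) = p * x^2 for the scalar system
# x' = -x + 0.1*sin(x), adding violating states to the dataset as we find them.
def vdot(p, x):                 # d/dt of V along trajectories: 2*p*x*f(x)
    return 2 * p * x * (-x + 0.1 * math.sin(x))

p = -1.0                        # deliberately bad initial candidate
dataset = [1.0]
for _ in range(50):
    # "Training": nudge p until V decreases on every state in the dataset
    while any(vdot(p, x) >= 0 for x in dataset):
        p += 0.1
    # Cheap adversarial search for a state that still violates the condition
    cex = next((x for x in (random.uniform(-5, 5) for _ in range(1000))
                if vdot(p, x) >= 0 and abs(x) > 1e-6), None)
    if cex is None:             # nothing found: hand off to a formal verifier
        print(f"candidate survives search: V(x) = {p:.1f} * x^2")
        break
    dataset.append(cex)         # found a counterexample: train on it too
```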

“We’ve seen some impressive empirical performances in AI-controlled machines like humanoids and robotic dogs, but these AI controllers lack the formal guarantees that are crucial for safety-critical systems,” says Lujie Yang, MIT electrical engineering and computer science (EECS) PhD student and CSAIL affiliate who is a co-lead author of a new paper on the project alongside Toyota Research Institute researcher Hongkai Dai SM ’12, PhD ’16. “Our work bridges the gap between that level of performance from neural network controllers and the safety guarantees needed to deploy more complex neural network controllers in the real world,” notes Yang.

For a digital demonstration, the team simulated how a quadrotor drone with lidar sensors would stabilize in a two-dimensional environment. Their algorithm successfully guided the drone to a stable hover position, using only the limited environmental information provided by the lidar sensors. In two other experiments, their approach enabled the stable operation of two simulated robotic systems over a wider range of conditions: an inverted pendulum and a path-tracking vehicle. These experiments, though modest, are more complex than what the neural network verification community could handle before, especially because they included sensor models.

“Unlike common machine learning problems, the rigorous use of neural networks as Lyapunov functions requires solving hard global optimization problems, and thus scalability is the key bottleneck,” says Sicun Gao, associate professor of computer science and engineering at the University of California at San Diego, who wasn’t involved in this work. “The current work makes an important contribution by developing algorithmic approaches that are much better tailored to the particular use of neural networks as Lyapunov functions in control problems. It achieves impressive improvement in scalability and the quality of solutions over existing approaches. The work opens up exciting directions for further development of optimization algorithms for neural Lyapunov methods and the rigorous use of deep learning in control and robotics in general.”

Yang and her colleagues’ stability approach has potential wide-ranging applications where guaranteeing safety is crucial. It could help ensure a smoother ride for autonomous vehicles, like aircraft and spacecraft. Likewise, a drone delivering items or mapping out different terrains could benefit from such safety guarantees.

The techniques developed here are general rather than specific to robotics; they could potentially assist with other applications, such as biomedicine and industrial processing, in the future.

While the technique is an upgrade from prior works in terms of scalability, the researchers are exploring how it can perform better in systems with higher dimensions. They’d also like to account for data beyond lidar readings, like images and point clouds.

As a future research direction, the team would like to provide the same stability guarantees for systems that are in uncertain environments and subject to disturbances. For instance, if a drone faces a strong gust of wind, Yang and her colleagues want to ensure it’ll still fly steadily and complete the desired task. 

Also, they intend to apply their method to optimization problems, where the goal would be to minimize the time and distance a robot needs to complete a task while remaining steady. They plan to extend their technique to humanoids and other real-world machines, where a robot needs to stay stable while making contact with its surroundings.

Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering at MIT, vice president of robotics research at TRI, and CSAIL member, is a senior author of this research. The paper also credits University of California at Los Angeles PhD student Zhouxing Shi and associate professor Cho-Jui Hsieh, as well as University of Illinois Urbana-Champaign assistant professor Huan Zhang. Their work was supported, in part, by Amazon, the National Science Foundation, the Office of Naval Research, and the AI2050 program at Schmidt Sciences. The researchers’ paper will be presented at the 2024 International Conference on Machine Learning.


Collaborative effort supports an MIT resilient to the impacts of extreme heat

Increasing severity and duration of heat drives data collection and resiliency planning for the forthcoming Climate Resiliency and Adaptation Roadmap.


Warmer weather can be a welcome change for many across the MIT community. But as climate impacts intensify, warm days are often becoming hot days with increased severity and frequency. Already this summer, heat waves in June and July brought daily highs of over 90 degrees Fahrenheit. According to the Resilient Cambridge report published in 2021, from the 1970s to 2000 the Boston Logan International Airport weather station recorded an average of 10 days of 90-plus temperatures each year. Simulations now predict that, in the current time frame of 2015-44, the number of days above 90 F could be triple the 1970-2000 average.

While the increasing heat is all but certain, how institutions like MIT will be affected and how they respond continues to evolve. “We know what the science is showing, but how will this heat impact the ability of MIT to fulfill its mission and support its community?” asks Brian Goldberg, assistant director of the MIT Office of Sustainability. “What will be the real feel of these temperatures on campus?” These questions and more are guiding staff, researchers, faculty, and students working collaboratively to understand these impacts to MIT and inform decisions and action plans in response.

This work is part of developing MIT’s forthcoming Climate Resiliency and Adaptation Roadmap, which is called for in MIT’s climate action plan, and is co-led by Goldberg; Laura Tenny, senior campus planner; and William Colehower, senior advisor to the vice president for campus services and stewardship. This effort is also supported by researchers in the departments of Urban Studies and Planning, Architecture, and Electrical Engineering and Computer Science (EECS), in the Urban Risk Lab and the Senseable City Lab, as well as by staff in MIT Emergency Management and Housing and Residential Services. The roadmap — which builds upon years of resiliency planning and research at MIT — will include an assessment of current and future conditions on campus as well as strategies and proposed interventions to support MIT’s community and campus in the face of increasing climate impacts.

A key piece of the resiliency puzzle

When the City of Cambridge released its Climate Change Vulnerability Assessment in 2015, the report identified flooding and heat as primary resiliency risks to the city. In response, Institute staff worked together with the city to create a full picture of potential flood risks to both Cambridge and the campus, with the latter becoming the MIT Climate Resiliency Dashboard. The dashboard, published in the MIT Sustainability DataPool, has played an important role in campus planning and resiliency efforts since its debut in 2021, but heat has been a missing piece of the tool. This is largely because for heat, unlike flooding, few data exist on building-level impacts. The original assessment from Cambridge showed a model of temperature averages that could be expected in portions of the city, but understanding the measured heat impacts down to the building level is essential because impacts of heat can vary so greatly. “Heat also doesn’t conform to topography like flooding, making it harder to map it with localized specificity,” notes Tenny. “Microclimates, humidity levels, shade or sun aspect, and other factors contribute to heat risk.”

Collection efforts have been underway for the past three years to fill in this gap in baseline data. Members of the Climate and Resiliency Adaptation Roadmap team and partners have helped build and place heat sensors to record and analyze data. The current heat sensors, which are shoebox-shaped devices on tripods, can be found at multiple outdoor locations on campus during the summer, capturing and recording temperatures multiple times each hour. “Urban environmental phenomena are hyperlocal. While National Weather Service readouts at locations like Logan Airport are extremely valuable, this gives us a more high-resolution understanding of the urban microclimate on our campus,” notes Sanjana Paul, past technical associate with Senseable City and current graduate student in the Department of Urban Studies and Planning who helps oversee data collection and analysis.

After collection, temperature data are analyzed and mapped. The data will soon be published in the updated Climate Resiliency Dashboard and will help inform actions through the Climate Resiliency and Adaptation Roadmap, but in the meantime, the information has already provided some important insights. “There were some parts of campus that were much hotter than I expected,” explains Paul. “Some of the temperature readings across campus were regularly going over 100 degrees during heat waves. It’s a bit surprising to see three digits on a temperature reading in Cambridge.” Some strategies are also already being put into action, including planting more trees to support the urban campus forest and launching cooling locations around campus to open during days of extreme heat.

As data gathering enters its fourth summer, partners continue to expand. Senseable City first began capturing data in 2021 using sensors placed on MIT Recycling trucks, and the Urban Risk Lab has offered community-centered temperature data collection with the help of its director and associate professor of architecture, Miho Mazereeuw. More recently, students in course 6.900 (Engineering for Impact) worked to design heat sensors to aid in the data collection and grow the fleet of sensors on campus. Co-instructed by EECS senior lecturer Joe Steinmeyer and EECS professor Joel Voldman, students in the course were tasked with developing technology to solve challenges close at hand. “One of the goals of the class is to tackle real-world problems so students emerge with confidence as an engineer,” explains Voldman. “Having them work on a challenge that is outside their comfort zone and impacts them really helps to engage and inspire them.” 

Centering on people

While the temperature data offer one piece of the resiliency planning puzzle, knowing how these temperatures will affect community members is another. “When we look at impacts to our campus from heat, people are the focus,” explains Goldberg. “While stress on campus infrastructure is one factor we are evaluating, our primary focus is the vulnerability of people to extreme heat.” Impacts to community members can range from disrupted nights of sleep to heat-related illnesses.

As the team looked at the data and spoke with individuals across campus, it became clear that some community members might be more vulnerable than others to the impact of extreme heat days, including ground, janitorial, and maintenance crews who work outside; kitchen staff who work close to hot equipment; and student athletes exerting themselves on hot days. “We know that people on our campus are already experiencing these extreme heat days differently,” explains Susy Jones, senior sustainability project manager in the Office of Sustainability who focuses on environmental and climate justice. “We need to design strategies and augment existing interventions with equity in mind, ensuring everyone on campus can fulfill their role at MIT.”

To support those strategy decisions, the resiliency team is seeking additional input from the MIT community. One hoped-for outcome of the roadmap and dashboard is for community members to review them and offer their own insight and experiences of heat conditions on campus. “These plans need to work at the campus level and the individual level,” says Goldberg. “The data tells an important story, but individuals help us complete the picture.”

A model for others

As the dashboard update nears completion and the broader resiliency and adaptation roadmap of strategies launches, their purpose is twofold: help MIT develop and inform plans and procedures for mitigating and addressing heat on campus, and serve as a model for other universities and communities grappling with the same challenges. “This approach is the center of how we operate at MIT,” explains Director of Sustainability Julie Newman. “We seek to identify solutions for our own campus in a manner that others can learn from and potentially adapt for their own resiliency and climate planning purposes. We’re also looking to align with efforts at the city and state level.” By publishing the roadmap broadly, universities and municipalities can apply lessons and processes to their own spaces.

When the updated Climate Resiliency Dashboard and Climate Resiliency and Adaptation Roadmap go live, it will mark the beginning of the next phase of work, rather than an end. “The dashboard is designed to present these impacts in a way everyone can understand so people across campus can respond and help us understand what is needed for them to continue to fulfill their role at MIT,” says Goldberg. Uncertainty plays a big role in resiliency planning, and the dashboard will reflect that. “This work is not something you ever say is done,” says Goldberg. “As information and data evolve, so does our work.”


Astronomers spot a highly “eccentric” planet on its way to becoming a hot Jupiter

The planet’s wild orbit offers clues to how such large, hot planets take shape.


Hot Jupiters are some of the most extreme planets in the galaxy. These scorching worlds are as massive as Jupiter, and they swing wildly close to their star, whirling around in a few days compared to our own gas giant’s leisurely 4,000-day orbit around the sun.

Scientists suspect, though, that hot Jupiters weren’t always so hot and in fact may have formed as “cold Jupiters,” in more frigid, distant environs. But how they evolved to be the star-hugging gas giants that astronomers observe today is a big unknown.

Now, astronomers at MIT, Penn State University, and elsewhere have discovered a hot Jupiter “progenitor” — a sort of juvenile planet that is in the midst of becoming a hot Jupiter. And its orbit is providing some answers to how hot Jupiters evolve.

The new planet, which astronomers labeled TIC 241249530 b, orbits a star that is about 1,100 light-years from Earth. The planet circles its star in a highly “eccentric” orbit, meaning that it comes extremely close to the star before slinging far out, then doubling back, in a narrow, elliptical circuit. If the planet were part of our solar system, it would come 10 times closer to the sun than Mercury before hurtling out, just past Earth, then back around. By the scientists’ estimates, the planet’s stretched-out orbit has the highest eccentricity of any planet detected to date.
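For a rough sense of what that implies, the eccentricity of an ellipse follows from its closest and farthest points. The numbers below are assumptions read off the article’s solar system analogy, not the paper’s measured values:

```python
# Perihelion ~1/10 of Mercury's distance from the sun; apoapsis just past Earth
mercury_au = 0.39
r_peri = mercury_au / 10          # ~0.039 AU
r_apo = 1.0                       # ~1 AU

# Eccentricity of an ellipse from its extremes: e = (r_apo - r_peri) / (r_apo + r_peri)
e = (r_apo - r_peri) / (r_apo + r_peri)
print(f"eccentricity ≈ {e:.2f}")  # ≈ 0.92 — far beyond any solar system planet
```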

The new planet’s orbit is also unique in its “retrograde” orientation. Unlike the Earth and other planets in the solar system, which orbit in the same direction as the sun spins, the new planet travels in a direction that is counter to its star’s rotation.

The team ran simulations of orbital dynamics and found that the planet’s highly eccentric and retrograde orbit are signs that it is likely evolving into a hot Jupiter, through “high-eccentricity migration” — a process by which a planet’s orbit wobbles and progressively shrinks as it interacts with another star or planet on a much wider orbit.

In the case of TIC 241249530 b, the researchers determined that the planet orbits around a primary star that itself orbits around a secondary star, as part of a stellar binary system. The interactions between the two orbits — of the planet and its star — have caused the planet to gradually migrate closer to its star over time.

The planet’s orbit is currently elliptical in shape, and the planet takes about 167 days to complete a lap around its star. The researchers predict that in 1 billion years, the planet will migrate into a much tighter, circular orbit, in which it will circle its star every few days. At that point, the planet will have fully evolved into a hot Jupiter.

“This new planet supports the theory that high eccentricity migration should account for some fraction of hot Jupiters,” says Sarah Millholland, assistant professor of physics in MIT’s Kavli Institute for Astrophysics and Space Research. “We think that when this planet formed, it would have been a frigid world. And because of the dramatic orbital dynamics, it will become a hot Jupiter in about a billion years, with temperatures of several thousand kelvin. So it’s a huge shift from where it started.”

Millholland and her colleagues have published their findings today in the journal Nature. Her co-authors are MIT undergraduate Haedam Im, lead author Arvind Gupta of Penn State University and NSF NOIRLab, and collaborators at multiple other universities, institutions, and observatories.

“Radical seasons”

The new planet was first spotted in data taken by NASA’s Transiting Exoplanet Survey Satellite (TESS), an MIT-led mission that monitors the brightness of nearby stars for “transits,” or brief dips in starlight that could signal the presence of a planet passing in front of, and temporarily blocking, a star’s light.
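For scale, a Jupiter-sized planet crossing a sun-like star blocks roughly 1 percent of its light, since the transit depth is about the squared ratio of the two radii (illustrative values, not the TESS measurement):

```python
# Fractional transit depth ≈ (planet radius / star radius)^2
r_planet = 69_911    # Jupiter's radius, km
r_star = 696_340     # the sun's radius, km

depth = (r_planet / r_star) ** 2
print(f"fractional dip in starlight ≈ {depth:.4f}")  # ≈ 0.0101, about 1 percent
```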

On Jan. 12, 2020, TESS picked up a possible transit of the star TIC 241249530. Gupta and his colleagues at Penn State determined that the transit was consistent with a Jupiter-sized planet crossing in front of the star. They then acquired measurements from other observatories of the star’s radial velocity, which estimates a star’s wobble, or the degree to which it moves back and forth, in response to other nearby objects that might gravitationally tug on the star.

Those measurements confirmed that a Jupiter-sized planet was orbiting the star and that its orbit was highly eccentric, bringing the planet extremely close to the star before flinging it far out.

Prior to this detection, astronomers had known of only one other planet, HD 80606 b, that was thought to be an early hot Jupiter. That planet, discovered in 2001, held the record for having the highest eccentricity, until now.

“This new planet experiences really dramatic changes in starlight throughout its orbit,” Millholland says. “There must be really radical seasons and an absolutely scorched atmosphere every time it passes close to the star.”

“Dance of orbits”

How could a planet have fallen into such an extreme orbit? And how might its eccentricity evolve over time? For answers, Im and Millholland ran simulations of planetary orbital dynamics to model how the planet may have evolved throughout its history and how it might carry on over hundreds of millions of years.

The team modeled the gravitational interactions between the planet, its star, and the second nearby star. Gupta and his colleagues had observed that the two stars orbit each other in a binary system, while the planet is simultaneously orbiting the closer star. The configuration of the two orbits is somewhat like a circus performer twirling a hula hoop around her waist, while spinning a second hula hoop around her wrist.

Millholland and Im ran multiple simulations, each with a different set of starting conditions, to see which set, when run forward over several billion years, produced the configuration of planetary and stellar orbits that Gupta’s team observed in the present day. They then ran the best match even further into the future to predict how the system will evolve over the next several billion years.

These simulations revealed that the new planet is likely in the midst of evolving into a hot Jupiter: Several billion years ago, it formed as a cold Jupiter, far from its star, in a region cold enough for it to condense and take shape. Newly formed, the planet likely orbited the star in a circular path. This conventional orbit, however, gradually stretched and grew eccentric as the planet experienced gravitational forces from the misaligned orbit of its star around the binary companion.

“It’s a pretty extreme process in that the changes to the planet’s orbit are massive,” Millholland says. “It’s a big dance of orbits that’s happening over billions of years, and the planet’s just going along for the ride.”

In another billion years, the simulations show that the planet’s orbit will stabilize in a close-in, circular path around its star.

“Then, the planet will fully become a hot Jupiter,” Millholland says.

The team’s observations, along with their simulations of the planet’s evolution, support the theory that hot Jupiters can form through high eccentricity migration, a process by which a planet gradually moves into place via extreme changes to its orbit over time.

“It’s clear not only from this, but other statistical studies too, that high eccentricity migration should account for some fraction of hot Jupiters,” Millholland notes. “This system highlights how incredibly diverse exoplanets can be. They are mysterious other worlds that can have wild orbits that tell a story of how they got that way and where they’re going. For this planet, it’s not quite finished its journey yet.”

“It is really hard to catch these hot Jupiter progenitors ‘in the act’ as they undergo their super eccentric episodes, so it is very exciting to find a system that undergoes this process,” says Smadar Naoz, a professor of physics and astronomy at the University of California at Los Angeles, who was not involved with the study. “I believe that this discovery opens the door to a deeper understanding of the birth configuration of the exoplanetary system.”


AI method radically speeds predictions of materials’ thermal properties

The approach could help engineers design more efficient energy-conversion systems and faster microelectronic devices, reducing waste heat.


It is estimated that about 70 percent of the energy generated worldwide ends up as waste heat.

If scientists could better predict how heat moves through semiconductors and insulators, they could design more efficient power generation systems. However, the thermal properties of materials can be exceedingly difficult to model.

The trouble comes from phonons — quasiparticles, or collective vibrations of a crystal’s atoms, that carry heat. Some of a material’s thermal properties depend on a measurement called the phonon dispersion relation, which can be incredibly hard to obtain, let alone utilize in the design of a system.

A team of researchers from MIT and elsewhere tackled this challenge by rethinking the problem from the ground up. The result of their work is a new machine-learning framework that can predict phonon dispersion relations up to 1,000 times faster than other AI-based techniques, with comparable or even better accuracy. Compared to more traditional, non-AI-based approaches, it could be 1 million times faster.

This method could help engineers design energy generation systems that produce more power, more efficiently. It could also be used to develop more efficient microelectronics, since managing heat remains a major bottleneck to speeding up electronics.

“Phonons are the culprit for the thermal loss, yet obtaining their properties is notoriously challenging, either computationally or experimentally,” says Mingda Li, associate professor of nuclear science and engineering and senior author of a paper on this technique.

Li is joined on the paper by co-lead authors Ryotaro Okabe, a chemistry graduate student; and Abhijatmedhi Chotrattanapituk, an electrical engineering and computer science graduate student; Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT; as well as others at MIT, Argonne National Laboratory, Harvard University, the University of South Carolina, Emory University, the University of California at Santa Barbara, and Oak Ridge National Laboratory. The research appears in Nature Computational Science.

Predicting phonons

Heat-carrying phonons are tricky to predict because they have an extremely wide frequency range, and the particles interact and travel at different speeds.

A material’s phonon dispersion relation is the relationship between the energy and momentum of phonons in its crystal structure. For years, researchers have tried to predict phonon dispersion relations using machine learning, but there are so many high-precision calculations involved that models get bogged down.
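For intuition (an aside, not from the paper): the simplest possible crystal, a one-dimensional chain of identical atoms connected by springs, has a dispersion relation with a closed-form answer, while real three-dimensional materials do not, which is part of what makes the calculation so expensive. A quick numerical sketch:

```python
# Toy illustration (not from the paper): a 1-D monatomic chain, atoms of
# mass m joined by springs of stiffness K at spacing a, has the
# closed-form dispersion relation
#   omega(k) = 2 * sqrt(K / m) * |sin(k * a / 2)|.
# Parameter values are arbitrary.
import numpy as np

K, m, a = 10.0, 1.0, 1.0                     # stiffness, mass, spacing
k = np.linspace(-np.pi / a, np.pi / a, 201)  # wave vectors across the Brillouin zone
omega = 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))
print(float(omega.max()))                    # maximum phonon frequency of the chain
```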

“If you have 100 CPUs and a few weeks, you could probably calculate the phonon dispersion relation for one material. The whole community really wants a more efficient way to do this,” says Okabe.

The machine-learning models scientists often use for these calculations are known as graph neural networks (GNNs). A GNN converts a material’s atomic structure into a crystal graph comprising multiple nodes, which represent atoms, connected by edges, which represent the bonds between atoms.

While GNNs work well for calculating many quantities, like magnetization or electrical polarization, they are not flexible enough to efficiently predict an extremely high-dimensional quantity like the phonon dispersion relation. Because phonons propagate through a crystal in all three dimensions, their momentum space is hard to model with a fixed graph structure.

To gain the flexibility they needed, Li and his collaborators devised virtual nodes.

They create what they call a virtual node graph neural network (VGNN) by adding a series of flexible virtual nodes to the fixed crystal structure to represent phonons. The virtual nodes enable the output of the neural network to vary in size, so it is not restricted by the fixed crystal structure.

Virtual nodes are connected to the graph in such a way that they can only receive messages from real nodes. While virtual nodes will be updated as the model updates real nodes during computation, they do not affect the accuracy of the model.

“The way we do this is very efficient in coding. You just generate a few more nodes in your GNN. The physical location doesn’t matter, and the real nodes don’t even know the virtual nodes are there,” says Chotrattanapituk.
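A minimal PyTorch sketch may make that idea concrete. The layer below is an illustrative construction, not the paper’s code: real atom nodes exchange messages as in an ordinary GNN, while virtual nodes only receive messages, so adding them leaves the real-node computation untouched.

```python
# Sketch of the virtual-node idea (names, dimensions, and message
# functions are illustrative, not taken from the paper's code). Real
# atom nodes pass messages among themselves as in a standard GNN;
# virtual nodes only *receive* messages, so the real-node updates are
# identical with or without them.
import torch
import torch.nn as nn

class VirtualNodeLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.real_update = nn.Linear(2 * dim, dim)     # real <- real messages
        self.virtual_update = nn.Linear(2 * dim, dim)  # virtual <- real only

    def forward(self, real, virtual, adj):
        # real: (n_atoms, dim); virtual: (n_virtual, dim)
        # adj: (n_atoms, n_atoms) crystal-graph adjacency (bonds)
        msg_to_real = adj @ real                       # aggregate bonded neighbors
        real_out = torch.relu(self.real_update(torch.cat([real, msg_to_real], -1)))
        msg_to_virtual = real.mean(0, keepdim=True).expand_as(virtual)
        virtual_out = torch.relu(self.virtual_update(torch.cat([virtual, msg_to_virtual], -1)))
        return real_out, virtual_out                   # real path never reads `virtual`

# Varying the number of virtual nodes varies the output size freely,
# independent of the fixed crystal graph:
layer = VirtualNodeLayer(dim=16)
real = torch.randn(8, 16)        # 8 atoms in the unit cell
virtual = torch.randn(12, 16)    # 12 virtual nodes -> 12 output slots
adj = (torch.rand(8, 8) > 0.5).float()
real_out, virtual_out = layer(real, virtual, adj)
```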

Cutting out complexity

Since it has virtual nodes to represent phonons, the VGNN can skip many complex calculations when estimating phonon dispersion relations, which makes the method more efficient than a standard GNN. 

The researchers proposed three different versions of VGNNs with increasing complexity. Each can be used to predict phonons directly from a material’s atomic coordinates.

Because their approach has the flexibility to rapidly model high-dimensional properties, they can use it to estimate phonon dispersion relations in alloy systems. These complex combinations of metals and nonmetals are especially challenging for traditional approaches to model.

The researchers also found that VGNNs offered slightly greater accuracy when predicting a material’s heat capacity. In some instances, prediction errors were two orders of magnitude lower with their technique.

A VGNN could be used to calculate phonon dispersion relations for a few thousand materials in just a few seconds with a personal computer, Li says.

This efficiency could enable scientists to search a larger space when seeking materials with certain thermal properties, such as superior thermal storage, energy conversion, or superconductivity.

Moreover, the virtual node technique is not exclusive to phonons, and could also be used to predict challenging optical and magnetic properties.

In the future, the researchers want to refine the technique so virtual nodes have greater sensitivity to capture small changes that can affect phonon structure.

“Researchers got too comfortable using graph nodes to represent atoms, but we can rethink that. Graph nodes can be anything. And virtual nodes are a very generic approach you could use to predict a lot of high-dimensional quantities,” Li says.

“The authors’ innovative approach significantly augments the graph neural network description of solids by incorporating key physics-informed elements through virtual nodes, for instance, informing wave-vector dependent band-structures and dynamical matrices,” says Olivier Delaire, associate professor in the Thomas Lord Department of Mechanical Engineering and Materials Science at Duke University, who was not involved with this work. “I find that the level of acceleration in predicting complex phonon properties is amazing, several orders of magnitude faster than a state-of-the-art universal machine-learning interatomic potential. Impressively, the advanced neural net captures fine features and obeys physical rules. There is great potential to expand the model to describe other important material properties: Electronic, optical, and magnetic spectra and band structures come to mind.”

This work is supported by the U.S. Department of Energy, National Science Foundation, a MathWorks Fellowship, a Sow-Hsin Chen Fellowship, the Harvard Quantum Initiative, and the Oak Ridge National Laboratory.


How to assess a general-purpose AI model’s reliability before it’s deployed

A new technique enables users to compare several large models and choose the one that works best for their task.


Foundation models are massive deep-learning models that have been pretrained on an enormous amount of general-purpose, unlabeled data. They can be applied to a variety of tasks, like generating images or answering customer questions.

But these models, which serve as the backbone for powerful artificial intelligence tools like ChatGPT and DALL-E, can offer up incorrect or misleading information. In a safety-critical situation, such as a pedestrian approaching a self-driving car, these mistakes could have serious consequences.

To help prevent such mistakes, researchers from MIT and the MIT-IBM Watson AI Lab developed a technique to estimate the reliability of foundation models before they are deployed for a specific task.

They do this by considering a set of foundation models that are slightly different from one another. Then they use their algorithm to assess the consistency of the representations each model learns about the same test data point. If the representations are consistent, it means the model is reliable.

When they compared their technique to state-of-the-art baseline methods, it was better at capturing the reliability of foundation models on a variety of downstream classification tasks.

Someone could use this technique to decide if a model should be applied in a certain setting, without the need to test it on a real-world dataset. This could be especially useful when datasets may not be accessible due to privacy concerns, like in health care settings. In addition, the technique could be used to rank models based on reliability scores, enabling a user to select the best one for their task.

“All models can be wrong, but models that know when they are wrong are more useful. The problem of quantifying uncertainty or reliability is more challenging for these foundation models because their abstract representations are difficult to compare. Our method allows one to quantify how reliable a representation model is for any given input data,” says senior author Navid Azizan, the Esther and Harold E. Edgerton Assistant Professor in the MIT Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), and a member of the Laboratory for Information and Decision Systems (LIDS).

He is joined on a paper about the work by lead author Young-Jin Park, a LIDS graduate student; Hao Wang, a research scientist at the MIT-IBM Watson AI Lab; and Shervin Ardeshir, a senior research scientist at Netflix. The paper will be presented at the Conference on Uncertainty in Artificial Intelligence.

Measuring consensus

Traditional machine-learning models are trained to perform a specific task. These models typically make a concrete prediction based on an input. For instance, the model might tell you whether a certain image contains a cat or a dog. In this case, assessing reliability could be a matter of looking at the final prediction to see if the model is right.

But foundation models are different. The model is pretrained using general data, in a setting where its creators don’t know all downstream tasks it will be applied to. Users adapt it to their specific tasks after it has already been trained.

Unlike traditional machine-learning models, foundation models don’t give concrete outputs like “cat” or “dog” labels. Instead, they generate an abstract representation based on an input data point.

To assess the reliability of a foundation model, the researchers used an ensemble approach, training several models that share many properties but are slightly different from one another.

“Our idea is like measuring the consensus. If all those foundation models are giving consistent representations for any data in our dataset, then we can say this model is reliable,” Park says.

But they ran into a problem: How could they compare abstract representations?

“These models just output a vector, comprised of some numbers, so we can’t compare them easily,” he adds.

They solved this problem using an idea called neighborhood consistency.

For their approach, the researchers prepare a set of reliable reference points to test on the ensemble of models. Then, for each model, they investigate the reference points located near that model’s representation of the test point.

By looking at the consistency of neighboring points, they can estimate the reliability of the models.

Aligning the representations

Foundation models map data points to what is known as a representation space. One way to think about this space is as a sphere. Each model maps similar data points to the same part of its sphere, so images of cats go in one place and images of dogs go in another.

But each model would map animals differently in its own sphere, so while cats may be grouped near the South Pole of one sphere, another model could map cats somewhere in the Northern Hemisphere.

The researchers use the neighboring points like anchors to align those spheres so they can make the representations comparable. If a data point’s neighbors are consistent across multiple representations, then one should be confident about the reliability of the model’s output for that point.
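In code, a simplified version of this idea might look like the following sketch. It uses a Jaccard-overlap score of nearest-neighbor sets, which is a simplification of mine; the paper’s exact metric and alignment step may differ.

```python
# Simplified neighborhood-consistency sketch (not the paper's exact
# score). Each model embeds the same reference set plus a test point;
# if the test point's nearest reference neighbors agree across models,
# its representation is judged reliable.
import numpy as np

def knn_ids(test_vec, reference_vecs, k=10):
    # Indices of the k reference points closest to the test embedding.
    dists = np.linalg.norm(reference_vecs - test_vec, axis=1)
    return set(np.argsort(dists)[:k])

def neighborhood_consistency(test_embs, ref_embs, k=10):
    # test_embs: one test-point embedding per model
    # ref_embs:  one (n_refs, dim) reference matrix per model
    neighbor_sets = [knn_ids(t, r, k) for t, r in zip(test_embs, ref_embs)]
    overlaps = []
    for i in range(len(neighbor_sets)):
        for j in range(i + 1, len(neighbor_sets)):
            inter = len(neighbor_sets[i] & neighbor_sets[j])
            union = len(neighbor_sets[i] | neighbor_sets[j])
            overlaps.append(inter / union)        # Jaccard overlap
    return float(np.mean(overlaps))               # 1.0 = perfect agreement

# Toy usage: three "models" whose embeddings are small perturbations of
# one shared space, so their neighborhoods largely agree.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 32))
refs = [base + 0.01 * rng.normal(size=(100, 32)) for _ in range(3)]
tests = [r[0] + 0.01 * rng.normal(size=32) for r in refs]
print(neighborhood_consistency(tests, refs))
```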

When they tested this approach on a wide range of classification tasks, they found that it was much more consistent than baselines. Plus, it wasn’t tripped up by challenging test points that caused other methods to fail.

Moreover, their approach can be used to assess reliability for any input data, so one could evaluate how well a model works for a particular type of individual, such as a patient with certain characteristics.

“Even if the models all have average performance overall, from an individual point of view, you’d prefer the one that works best for that individual,” Wang says.

However, one limitation is that they must train an ensemble of foundation models, which is computationally expensive. In the future, they plan to find more efficient ways to build multiple models, perhaps by using small perturbations of a single model.

“With the current trend of using foundational models for their embeddings to support various downstream tasks — from fine-tuning to retrieval augmented generation — the topic of quantifying uncertainty at the representation level is increasingly important, but challenging, as embeddings on their own have no grounding. What matters instead is how embeddings of different inputs are related to one another, an idea that this work neatly captures through the proposed neighborhood consistency score,” says Marco Pavone, an associate professor in the Department of Aeronautics and Astronautics at Stanford University, who was not involved with this work. “This is a promising step towards high quality uncertainty quantifications for embedding models, and I’m excited to see future extensions which can operate without requiring model-ensembling to really enable this approach to scale to foundation-size models.”

This work is funded, in part, by the MIT-IBM Watson AI Lab, MathWorks, and Amazon.


Professor Emeritus John Vander Sande, microscopist, entrepreneur, and admired mentor, dies at 80

A trailblazer in electron microscopy, Vander Sande is remembered for his dedication to teaching, service, and global collaboration.


MIT Professor Emeritus John B. Vander Sande, a pioneer in electron microscopy and beloved educator and advisor known for his warmth and empathetic instruction, died June 28 in Newbury, Massachusetts. He was 80.

The Cecil and Ida Green Distinguished Professor in the Department of Materials Science and Engineering (DMSE), Vander Sande was a physical metallurgist, studying the physical properties and structure of metals and alloys. His long career included a major entrepreneurial pursuit, launching American Superconductor; forming international academic partnerships; and serving in numerous administrative roles at MIT and, after his retirement, one in Iceland.

Vander Sande’s interests encompassed more than science and technology; a self-taught scholar on 17th- and 18th-century furniture, he earned a production credit in the 1996 film “The Crucible.”

He is perhaps best remembered for bringing the first scanning transmission electron microscope (STEM) into the United States. This powerful microscope uses a beam of electrons to scan material samples and investigate their structure and composition.

“John was the person who really built up what became MIT’s modern microscopy expertise,” says Samuel M. Allen, the POSCO Professor Emeritus of Physical Metallurgy. Vander Sande studied electron microscopy during a postdoctoral fellowship at Oxford University in England with luminaries Sir Peter Hirsch and Colin Humphreys. “The people who wrote the first book on transmission electron microscopy were all there at Oxford, and John basically brought that expertise to MIT in his teaching and mentoring.”

Born in Baltimore, Maryland, in 1944, Vander Sande grew up in Westwood, New Jersey. He studied mechanical engineering at Stevens Institute of Technology, earning a bachelor’s degree in 1966, and switched to materials science and engineering at Northwestern University, receiving a PhD in 1970. Following his time at Oxford, Vander Sande joined MIT as assistant professor in 1971.

A vision for advanced microscopy

At MIT, Vander Sande became known as a leading practitioner of weak-beam microscopy, a technique refined by Hirsch to improve images of dislocations, tiny imperfections in crystalline materials that help researchers determine why materials fail.

His procurement of the STEM instrument from the U.K. company Vacuum Generators in the mid-1970s was a major milestone, giving researchers the ability to visualize individual atoms and identify chemical elements in materials.

“He showed the capabilities of new techniques, like scanning transmission electron microscopy, in understanding the physics and chemistry of materials at the nanoscale,” says Yet-Ming Chiang, the Kyocera Professor of Ceramics at DMSE. Today, MIT.nano stands as one of the world’s foremost facilities for advanced microscopy techniques. “He paved the way, at MIT, certainly, and more broadly, to those state-of-the-art instruments that we have today.”

The director of a microscopy laboratory at MIT, Vander Sande used instruments like that early STEM and its successors to study how manufacturing processes affect material structure and properties.

One focus was rapid solidification, which involves cooling materials quickly to enhance their properties. Tom Kelly, a PhD student in the late 1970s, worked with Vander Sande to explore how rapidly cooling molten metal into a powder changes its internal structure. They discovered that “precipitates,” or small particles formed during the rapid cooling, made the metal stronger.

“It took me at least a year to finally get some success. But we did succeed,” says Kelly, CEO of STEAM Instruments, a startup that is developing mass spectrometry technology, which measures and analyzes atoms emitted by substances. “That was John who brought that project and the solution to the table.”

Using his deep expertise in metals and other materials, including superconducting oxides, which can conduct electricity when cooled to low temperatures, Vander Sande co-founded American Superconductor with fellow DMSE faculty member Greg Yurek in 1987. The company produced high-temperature superconducting wires now used in renewable energy technology.

“In the MIT entrepreneurial ecosystem, American Superconductor was a pioneer,” says Chiang, who was part of the startup’s co-founding membership. “It was one of the early companies that was formed on the basis of research at MIT, in which faculty spun out a company, as opposed to graduates starting companies.”

To teach them is to know them

While Yurek left MIT to lead American Superconductor full time as CEO, Vander Sande stayed on the faculty at DMSE, remaining a consultant to the company and a board member for many years.

That comes as no surprise to his students, who recall a passionate and devoted educator and mentor.

“He was a terrific teacher,” says Frank Gayle, a former PhD student of Vander Sande’s who recently retired from his job as director at the National Institute of Standards and Technology. “He would take the really complex subjects, super mathematical and complicated, and he would teach them in a way that you felt comfortable as a student learning them. He really had a terrific knack for that.”

Chiang says Vander Sande was an “exceptionally clear” lecturer who would use memorable imagery to get concepts across, like comparing heterogeneous nanoparticles, tiny particles that have a varied structure or composition, to a black-and-white Holstein cow. “Hard to forget,” Chiang says.

Powering Vander Sande’s teaching, Gayle says, was an aptitude for knowing the people he was teaching, for recognizing their backgrounds and what they knew and didn’t know. He likened Vander Sande to a dad on Take Your Kid to Work Day, demystifying an unfamiliar world. “He had some way of doing that, and then he figured out how to get the pieces together to make it comprehensible.”

He brought a similar talent to mentorship, with an emphasis on the individual rather than the project, Gayle says. “He really worked with people to encourage them to do creative things and encouraged their creativity.”

Kelly, who was a University of Wisconsin professor before becoming a repeat entrepreneur, says Vander Sande was an exceptional role model for young grad students.

“When you see these people who’ve accomplished a lot, you’re afraid to even talk to them,” he says. “But in reality, they’re regular people. One of the things I learned from John was that he’s just a regular person who does good work. I realized that, Hey, I can be a regular person and do good work, too.”

Another former grad student, Matt Libera, says he learned as much about life from Vander Sande as he did about materials science and engineering.

“Because he was not just a scientist-engineer, but really a well-rounded human being and shared a lot of experience and advice that went beyond just the science,” says Libera, a materials science and engineering professor at Stevens Institute of Technology, Vander Sande’s alma mater.

“A rare talent”

Vander Sande was equally dedicated to MIT and his department. In DMSE, he served on multiple committees, including those on undergraduate education and curriculum development, and in 1991 he was appointed associate dean of the School of Engineering. He served in the position until 1999, twice taking over as acting dean.

“I remember that that took up a huge amount of his time,” Chiang says. Vander Sande lived in Newbury, Massachusetts, and he and his wife, Marie-Teresa, who long worked for MIT’s Industrial Liaison Program, would travel together to Cambridge by car. “He once told me that he did a lot of the work related to his deanship during that long commute back and forth from Newbury.”

Gayle says Vander Sande’s remarkable communication and people skills are what made him a good fit for leadership roles. “He had a rare talent for those things.”

He also was a bridge from MIT to the rest of the world. Vander Sande played a leading role in establishing the Singapore-MIT Alliance for Research and Technology, a teaching partnership that set up Institute-modeled graduate programs at Singaporean universities. And he was the director of MIT’s half of the Cambridge-MIT Institute, a collaboration with the University of Cambridge in the U.K. that focused on student and faculty exchanges, integrated research, and professional development. After retiring from MIT in 2006, he pursued academic projects in Ecuador, Morocco, and Iceland, and served as acting provost of Reykjavik University from 2009 to 2010.

He had numerous interests outside work, including college football and sports cars, but his greatest passion was for antiques, mainly early American furniture.

A self-taught expert in antiquarian arts, he gave lectures on connoisseurship and attended auctions and antique shows. His interest extended to his home, built in 1697, which had low ceilings that were inconvenient for the 6-foot-1 Vander Sande.

So respected was he for his expertise that the production crew for 20th Century Fox’s “The Crucible” sought him out. The film, about the Salem, Massachusetts, witch trials, was set in 1692. The crew made copies of furniture from his collection, and Vander Sande consulted on set design and decoration to ensure historical accuracy.

His passion extended beyond just historical artifacts, says Professor Emeritus Allen. He was profoundly interested in learning about the people behind them.

“He liked to read firsthand accounts, letters and stuff,” he says. “His real interest was trying to understand how people two centuries ago or more thought, what their lives were like. It wasn’t just that he was an antiques collector.”

Vander Sande is survived by his wife, Marie-Teresa Vander Sande; his son, John Franklin VanderSande, and his wife, Melanie; his daughter, Rosse Marais VanderSande Ellis, and her husband, Zak Ellis; and grandchildren Gabriel Rhys Pelletier, Sophia Marais VanderSande, and John Christian VanderSande.


Polina Anikeeva named head of the Department of Materials Science and Engineering

Anikeeva, who conducts research at the intersection of materials science, electronics, and neurobiology, succeeds Caroline Ross.


Polina Anikeeva PhD ’09, the Matoula S. Salapatas Professor at MIT, has been named the new head of MIT's Department of Materials Science and Engineering (DMSE), effective July 1.

“Professor Anikeeva’s passion and dedication as both a researcher and educator, as well as her impressive network of connections across the wider Institute, make her incredibly well suited to lead DMSE,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science.

In addition to serving as a professor in DMSE, Anikeeva is a professor of brain and cognitive sciences, director of the K. Lisa Yang Brain-Body Center, a member of the McGovern Institute for Brain Research, and associate director of MIT’s Research Laboratory of Electronics.

Anikeeva leads the MIT Bioelectronics Group, which focuses on developing magnetic and optoelectronic tools to study neural communication in health and disease. Her team applies magnetic nanomaterials and fiber-based devices to reveal physiological processes underlying brain-organ communication, with particular focus on gut-brain circuits. Their goal is to develop minimally invasive treatments for a range of neurological, psychiatric, and metabolic conditions.

Anikeeva’s research sits at the intersection of materials chemistry, electronics, and neurobiology. By bridging these disciplines, Anikeeva and her team are deepening our understanding and treatment of complex neurological disorders. Her approach has led to the creation of optoelectronic and magnetic devices that can record neural activity and stimulate neurons during behavioral studies.

Throughout her career, Anikeeva has been recognized with numerous awards for her groundbreaking research. Her honors include receiving an NSF CAREER Award, DARPA Young Faculty Award, and the Pioneer Award from the NIH's High-Risk, High-Reward Research Program. MIT Technology Review named her one of the 35 Innovators Under 35 and the Vilcek Foundation awarded her the Prize for Creative Promise in Biomedical Science.

Her impact extends beyond the laboratory and into the classroom, where her dedication to education has earned her the Junior Bose Teaching Award, the MacVicar Faculty Fellowship, and an MITx Prize for Teaching and Learning in MOOCs. Her entrepreneurial spirit was acknowledged with a $100,000 prize in the inaugural MIT Faculty Founders Initiative Prize Competition, recognizing her pioneering work in neuroprosthetics.

In 2023, Anikeeva co-founded Neurobionics Inc., which develops flexible fibers that can interface with the brain — opening new opportunities for sensing and therapeutics. The team has presented their technologies at MIT delta v Demo Day and won $50,000 worth of lab space at the LabCentral Ignite Golden Ticket pitch competition. Anikeeva serves as the company’s scientific advisor.

Anikeeva earned her bachelor's degree in physics at St. Petersburg State Polytechnic University in Russia. She continued her education at MIT, where she received her PhD in materials science and engineering. Vladimir Bulović, director of MIT.nano and the Fariborz Maseeh Chair in Emerging Technology, served as Anikeeva’s doctoral advisor. After completing a postdoctoral fellowship at Stanford University, working on devices for optical stimulation and recording of neural activity, Anikeeva returned to MIT as a faculty member in 2011.

Anikeeva succeeds Caroline Ross, the Ford Professor of Engineering, who has served as interim department head since August 2023.

“Thanks to Professor Ross’s steadfast leadership, DMSE has continued to thrive during this period of transition. I’m incredibly grateful for her many contributions and long-standing commitment to strengthening the DMSE community,” adds Chandrakasan. 


MIT OpenCourseWare “changed how I think about teaching and what a university is”

Bernardo Picão, a graduate student in physics, has turned to MIT Open Learning’s resources throughout his educational journey.


Bernardo Picão has been interested in online learning since the early days of YouTube, when his father showed him a TED Talk. But it was with MIT Open Learning that he realized just how transformational digital resources can be. 

“YouTube was my first introduction to the idea that you can actually learn stuff via the internet,” Picão says. “So, when I became interested in mathematics and physics when I was 15 or 16, I turned to the internet and stumbled upon some playlists from MIT OpenCourseWare and went from there.”

OpenCourseWare, part of MIT Open Learning, offers free online educational resources from over 2,500 MIT undergraduate and graduate courses. Since discovering it, Picão has explored linear algebra with Gilbert Strang, professor emeritus of mathematics — whom Picão calls “a legend” — and courses on metaphysics, functional analysis, quantum field theory, and English. He has returned to OpenCourseWare throughout his educational journey, which includes undergraduate studies in France and Portugal. Some courses provided different perspectives on material he was learning in his classes, while others filled gaps in his knowledge or satisfied his curiosity. 

Overall, Picão says that MIT resources made him a more robust scientist. He is currently completing a master’s degree in physics at the Instituto Superior Técnico in Lisbon, Portugal, where he researches lattice quantum chromodynamics, an approach to the study of quarks that uses precise computer simulations. After completing his master’s degree, Picão says he will continue to a doctoral program in the field. 

At a recent symposium in Lisbon, Picão attended a lecture given by someone he had first seen in an OpenCourseWare video — Krishna Rajagopal, the William A. M. Burden Professor of Physics and former dean for digital learning at MIT Open Learning. There, he took the opportunity to thank Rajagopal for his support of OpenCourseWare, which Picão says is an important part of MIT’s mission as a leader in education.

In addition to the range of subjects covered by OpenCourseWare, Picão praises the variety of instructors. All the courses are well-constructed, he says, but sometimes learners will connect with certain instructors or benefit from a particular presentation style. Since OpenCourseWare and other Open Learning programs offer such a wide range of free educational resources from MIT, learners can explore similar courses from different instructors to get new perspectives and round out their knowledge. 

While he enjoys his research, Picão’s passion is teaching. OpenCourseWare has helped him with that too, by providing models for how to teach math and science and how to connect with learners of different abilities and backgrounds. 

“I’m a very philosophical person,” he says. “I used to think that knowledge was intrinsically secluded in the large bindings of books, beyond the classroom walls, or inside the idiosyncratic minds of professors. OpenCourseWare changed how I think about teaching and what a university is — the point is not to keep knowledge inside of it, but to spread it.”

Picão, now a teaching assistant at his institution, has been teaching since his days as a high school student tutoring his classmates or talking with members of his family. 

“I spent my youth sharing my knowledge with my grandmother and my extended family, including people who weren’t able to attend school past the fourth grade,” he says. “Seeing them get excited about knowledge is the coolest thing. Open Learning scales that up to the rest of the world and that can have an incredible impact.”

The ability to learn from MIT experts has benefited Picão, deepening his understanding of the complex subjects that interest him. But, he acknowledges, he is a person who has access to high-quality instruction even without Open Learning. For learners who do not have that access, Open Learning is invaluable. 

“It’s hard to overstate the importance of such a project. MIT’s OpenCourseWare and Open Learning profoundly shift how students all over the world can perceive their relationship with education: Besides an internet connection, the only requirement is the curiosity to explore the hundreds of expertly crafted courses and worksheets, perfect for self-studying,” says Picão. 

He continues, “People may find OpenCourseWare and think it is too good to be true. Why would such a prestigious institution break down the barriers to scientific education and commit to open-access, free resources?  I want people to know: There is no catch. Sharing is the point.” 


Study reveals how an anesthesia drug induces unconsciousness

Propofol, a drug commonly used for general anesthesia, derails the brain’s normal balance between stability and excitability.


There are many drugs that anesthesiologists can use to induce unconsciousness in patients. Exactly how these drugs cause the brain to lose consciousness has been a longstanding question, but MIT neuroscientists have now answered that question for one commonly used anesthesia drug.

Using a novel technique for analyzing neuron activity, the researchers discovered that the drug propofol induces unconsciousness by disrupting the brain’s normal balance between stability and excitability. The drug causes brain activity to become increasingly unstable, until the brain loses consciousness.

“The brain has to operate on this knife’s edge between excitability and chaos. It’s got to be excitable enough for its neurons to influence one another, but if it gets too excitable, it spins off into chaos. Propofol seems to disrupt the mechanisms that keep the brain in that narrow operating range,” says Earl K. Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.

The new findings, reported today in Neuron, could help researchers develop better tools for monitoring patients as they undergo general anesthesia.

Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study. MIT graduate student Adam Eisen and MIT postdoc Leo Kozachkov are the lead authors of the paper.

Losing consciousness

Propofol is a drug that binds to GABA receptors in the brain, inhibiting neurons that have those receptors. Other anesthesia drugs act on different types of receptors, and the mechanism for how all of these drugs produce unconsciousness is not fully understood.

Miller, Fiete, and their students hypothesized that propofol, and possibly other anesthesia drugs, interfere with a brain state known as “dynamic stability.” In this state, neurons have enough excitability to respond to new input, but the brain is able to quickly regain control and prevent them from becoming overly excited.

Previous studies of how anesthesia drugs affect this balance have found conflicting results: Some suggested that during anesthesia, the brain shifts toward becoming too stable and unresponsive, which leads to loss of consciousness. Others found that the brain becomes too excitable, leading to a chaotic state that results in unconsciousness.

Part of the reason for these conflicting results is that it has been difficult to accurately measure dynamic stability in the brain. Measuring dynamic stability as consciousness is lost would help researchers determine whether unconsciousness results from too much or too little stability.

In this study, the researchers analyzed electrical recordings made in the brains of animals that received propofol over an hour-long period, during which they gradually lost consciousness. The recordings were made in four areas of the brain that are involved in vision, sound processing, spatial awareness, and executive function.

These recordings covered only a tiny fraction of the brain’s overall activity. To overcome that limitation, the researchers used a technique called delay embedding, which characterizes a dynamical system from limited measurements by augmenting each measurement with measurements that were recorded previously.
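A minimal numpy sketch of delay embedding follows; all parameters are illustrative and the toy signal simply stands in for recorded neural activity.

```python
# Minimal sketch of delay embedding (parameters illustrative). Each
# measurement is stacked with lagged copies of the same signal, turning
# one recorded channel into a higher-dimensional state vector.
import numpy as np

def delay_embed(x, n_delays, lag):
    # x: 1-D array of measurements over time. Returns an
    # (n_samples, n_delays) matrix whose row t is
    # [x[t], x[t - lag], x[t - 2*lag], ...].
    start = (n_delays - 1) * lag
    columns = [x[start - d * lag : len(x) - d * lag] for d in range(n_delays)]
    return np.stack(columns, axis=1)

# Toy usage: a noisy damped oscillation standing in for neural activity
# relaxing back to baseline after an input.
t = np.arange(2000)
signal = np.exp(-t / 800.0) * np.sin(0.1 * t) + 0.05 * np.random.randn(t.size)
embedded = delay_embed(signal, n_delays=10, lag=5)
print(embedded.shape)  # (1955, 10)
```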

Using this method, the researchers were able to quantify how the brain responds to sensory inputs, such as sounds, or to spontaneous perturbations of neural activity.

In the normal, awake state, neural activity spikes after any input, then returns to its baseline activity level. However, once propofol dosing began, the brain started taking longer to return to its baseline after these inputs, remaining in an overly excited state. This effect became more and more pronounced until the animals lost consciousness.

This suggests that propofol’s inhibition of neuron activity leads to escalating instability, which causes the brain to lose consciousness, the researchers say.

Better anesthesia control

To see if they could replicate this effect in a computational model, the researchers created a simple neural network. When they increased the inhibition of certain nodes in the network, as propofol does in the brain, network activity became destabilized, similar to the unstable activity the researchers saw in the brains of animals that received propofol.

“We looked at a simple circuit model of interconnected neurons, and when we turned up inhibition in that, we saw a destabilization. So, one of the things we’re suggesting is that an increase in inhibition can generate instability, and that is subsequently tied to loss of consciousness,” Eisen says.

As Fiete explains, “This paradoxical effect, in which boosting inhibition destabilizes the network rather than silencing or stabilizing it, occurs because of disinhibition. When propofol boosts the inhibitory drive, this drive inhibits other inhibitory neurons, and the result is an overall increase in brain activity.”
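A toy linear rate model can illustrate this paradox. The three-population circuit below is my own construction, not the authors’ model: scaling every inhibitory weight by a gain g strengthens a disinhibitory loop, and the network’s leading eigenvalue, negative while activity decays back to baseline, eventually crosses zero.

```python
# Toy linear rate model of disinhibition (my own construction, not the
# authors' circuit). E excites two inhibitory populations; I2 inhibits
# I1, and I1 inhibits E. Scaling every inhibitory weight by a gain g
# (a stand-in for propofol boosting inhibitory drive) strengthens the
# disinhibitory loop E -> I2 -| I1 -| E, which feeds back positively.
import numpy as np

def leading_eigenvalue(g):
    # Jacobian of dx/dt = -x + W @ x around the quiescent state.
    J = np.array([
        [-0.5, -g,   0.0],   # E: weak self-excitation, inhibited by I1
        [ 1.0, -1.0, -g  ],  # I1: driven by E, inhibited by I2
        [ 1.0,  0.0, -1.0],  # I2: driven by E
    ])
    return max(np.linalg.eigvals(J).real)

for g in [0.5, 1.0, 1.5, 2.0]:
    print(f"inhibitory gain {g:.1f}: leading eigenvalue {leading_eigenvalue(g):+.3f}")
# As g grows, the leading eigenvalue rises and crosses zero: boosting
# inhibition makes this little network less stable, not more.
```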

The researchers suspect that other anesthetic drugs, which act on different types of neurons and receptors, may converge on the same effect through different mechanisms — a possibility that they are now exploring.

If this turns out to be true, it could be helpful to the researchers’ ongoing efforts to develop ways to more precisely control the level of anesthesia that a patient is experiencing. These systems, which Miller is working on with Emery Brown, the Edward Hood Taplin Professor of Medical Engineering at MIT, work by measuring the brain’s dynamics and then adjusting drug dosages accordingly, in real time.

“If you find common mechanisms at work across different anesthetics, you can make them all safer by tweaking a few knobs, instead of having to develop safety protocols for all the different anesthetics one at a time,” Miller says. “You don’t want a different system for every anesthetic they’re going to use in the operating room. You want one that’ll do it all.”

The researchers also plan to apply their technique for measuring dynamic stability to other brain states, including neuropsychiatric disorders.

“This method is pretty powerful, and I think it’s going to be very exciting to apply it to different brain states, different types of anesthetics, and also other neuropsychiatric conditions like depression and schizophrenia,” Fiete says.

The research was funded by the Office of Naval Research, the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke, the National Science Foundation Directorate for Computer and Information Science and Engineering, the Simons Center for the Social Brain, the Simons Collaboration on the Global Brain, the JPB Foundation, the McGovern Institute, and the Picower Institute. 


Q&A: What past environmental success can teach us about solving the climate crisis

In a new book, Professor Susan Solomon uses previous environmental successes as a source of hope and guidance for mitigating climate change.


Susan Solomon, MIT professor of Earth, atmospheric, and planetary sciences (EAPS) and of chemistry, played a critical role in understanding how a class of chemicals known as chlorofluorocarbons were creating a hole in the ozone layer. Her research was foundational to the creation of the Montreal Protocol, an international agreement established in the 1980s that phased out products releasing chlorofluorocarbons. Since then, scientists have documented signs that the ozone hole is recovering thanks to these measures.

Having witnessed this historical process first-hand, Solomon, the Lee and Geraldine Martin Professor of Environmental Studies, is aware of how people can come together to make successful environmental policy happen. Using her story, as well as other examples of success — including combating smog, getting rid of DDT, and more — Solomon draws parallels from then to now as the climate crisis comes into focus in her new book, “Solvable: How We Healed the Earth and How We Can Do It Again.”

Solomon took a moment to talk about why she picked the stories in her book, the students who inspired her, and why we need hope and optimism now more than ever.

Q: You have first-hand experience seeing how we’ve altered the Earth, as well as the process of creating international environmental policy. What prompted you to write a book about your experiences?

A: Lots of things, but one of the main ones is the things that I see in teaching. I have taught a class called Science, Politics and Environmental Policy for many years here at MIT. Because my emphasis is always on how we’ve actually fixed problems, students come away from that class feeling hopeful, like they really want to stay engaged with the problem.

It strikes me that students today have grown up in a very contentious and difficult era in which they feel like nothing ever gets done. But stuff does get done, even now. Looking at how we did things so far really helps you to see how we can do things in the future.

Q: In the book, you use five different stories as examples of successful environmental policy, and then end talking about how we can apply these lessons to climate change. Why did you pick these five stories?

A: I picked some of them because I’m closer to those problems in my own professional experience, like ozone depletion and smog. I did other issues partly because I wanted to show that even in the 21st century, we’ve actually got some stuff done — that’s the story of the Kigali Amendment to the Montreal Protocol, which is a binding international agreement on some greenhouse gases.

Another chapter is on DDT. One of the reasons I included that is because it had an enormous effect on the birth of the environmental movement in the United States. Plus, that story allows you to see how important the environmental groups can be.

Lead in gasoline and paint is the other one. I find it a very moving story because the idea that we were poisoning millions of children and not even realizing it is so very, very sad. But it’s so uplifting that we did figure out the problem, and it happened partly because of the civil rights movement, that made us aware that the problem was striking minority communities much more than non-minority communities.

Q: What surprised you the most during your research for the book?

A: One of the things that I didn’t realize, and should have, was the outsized role played by one single senator, Ed Muskie of Maine. He made pollution control his big issue and devoted incredible energy to it. He clearly had the passion and wanted to do it for many years, but until other factors helped him, he couldn’t. That’s where I began to understand the role of public opinion and the way in which policy is only possible when public opinion demands change.

Another thing about Muskie was the way in which his engagement with these issues demanded that science be strong. When I read what he put into congressional testimony I realized how highly he valued the science. Science alone is never enough, but it’s always necessary. Over the years, science got a lot stronger, and we developed ways of evaluating what the scientific wisdom across many different studies and many different views actually is. That’s what scientific assessment is all about, and it’s crucial to environmental progress.

Q: Throughout the book you argue that for environmental action to succeed, three conditions must be met, which you call the three Ps: a threat must be personal, perceptible, and practical. Where did this idea come from?

A: My observations. You have to perceive the threat: In the case of the ozone hole, you could perceive it because those false-color images of the ozone loss were so easy to understand, and it was personal because few things are scarier than cancer, and a reduced ozone layer leads to too much sun, increasing skin cancers. Science plays a role in communicating what can be readily understood by the public, and that’s important to them perceiving it as a serious problem.

Nowadays, we certainly perceive the reality of climate change. We also see that it’s personal. People are dying because of heat waves in much larger numbers than they used to; there are horrible problems in the Boston area, for example, with flooding and sea level rise. People perceive the reality of the problem and they feel personally threatened.

The third P is practical: People have to believe that there are practical solutions. It’s interesting to watch how the battle for hearts and minds has shifted. There was a time when the skeptics would just attack the whole idea that the climate was changing. Eventually, they decided ‘we better accept that because people perceive it, so let’s tell them that it’s not caused by human activity.’ But it’s clear enough now that human activity does play a role. So they’ve moved on to attacking that third P, that somehow it’s not practical to have any kind of solutions. This is progress! So what about that third P?

What I tried to do in the book is to point out some of the ways in which the problem has also become eminently practical to deal with in the last 10 years, and will continue to move in that direction. We’re right on the cusp of success, and we just have to keep going. People should not give in to eco despair; that’s the worst thing you could do, because then nothing will happen. If we continue to move at the rate we have, we will certainly get to where we need to be.

Q: That ties in very nicely with my next question. The book is very optimistic; what gives you hope?

A: I’m optimistic because I’ve seen so many examples of where we have succeeded, and because I see so many signs of movement right now that are going to push us in the same direction.

If we had kept conducting business as usual as we had been in the year 2000, we’d be looking at 4 degrees of future warming. Right now, I think we're looking at 3 degrees. I think we can get to 2 degrees. We have to really work on it, and we have to get going seriously in the next decade, but globally right now over 30 percent of our energy is from renewables. That's fantastic! Let’s just keep going.

Q: Throughout the book, you show that environmental problems won’t be solved by individual actions alone, but require policy and technology to drive change. What individual actions can people take to help push for those bigger changes?

A: A big one is to choose to eat more sustainably, and to choose alternative transportation methods like public transportation or reduce the number of trips that you make. Older people usually have retirement investments; you can shift them over to social choice funds and away from index funds that end up funding companies that you might not be interested in. You can use your money to put pressure: Amazon has been under a huge amount of pressure to cut down on their plastic packaging, mainly coming from consumers. They’ve just announced they’re not going to use those plastic pillows anymore. I think you can see lots of ways in which people really do matter, and we can matter more.

Q: What do you hope people take away from the book?

A: Hope for their future, and resolve to do the best they can by getting engaged with it.


Marking a milestone: Dedication ceremony celebrates the new MIT Schwarzman College of Computing building

Members of the MIT community, supporters, and guests commemorate the opening of the new college headquarters.


The MIT Stephen A. Schwarzman College of Computing recently marked a significant milestone as it celebrated the completion and inauguration of its new building on Vassar Street with a dedication ceremony.

Attended by members of the MIT community, distinguished guests, and supporters, the ceremony provided an opportunity to reflect on the transformative gift that initiated the biggest change to MIT’s institutional structure in over 70 years. The gift, made by Stephen A. Schwarzman, the chair, CEO, and co-founder of Blackstone, one of the world’s largest alternative investment firms, was the foundation for establishing the college.

MIT President Sally Kornbluth told the audience that the “success of the MIT Stephen A. Schwarzman College of Computing is a testament to Steve’s vision.” She pointed out that the new building — with capacity for 50 computing research groups — will foster a remarkable confluence of knowledge and cross-pollination of ideas. “The college will help MIT direct this expertise towards the biggest challenges humanity now faces,” she added, “from the health of our species and our planet to the social, economic, and ethical implications of new technologies.”

Expressing gratitude for the chance to engage with MIT, Schwarzman remarked, “You don’t get many opportunities in life to participate in some minor way to change the course of one of the great technologies that’s going to impact people.”

Schwarzman said that his motivation for supporting the college stemmed in part from trips he had taken to China, where he witnessed increased investment in artificial intelligence. He became concerned that he didn’t see the same level of development in the United States and wanted to ensure that the country would be at the leading edge of AI. He also spoke about the importance of advancing AI while prioritizing ethical considerations to mitigate potential risks.

He described his involvement with the college as “the most marvelous adventure” and shared how much he has enjoyed “meeting the fascinating people at MIT and learning about what you do here and the way you think.” He added: “You’re really making enormous changes for the benefit of society.”

Reflecting on the thought process during his tenure that culminated in the conceptualization of the college, MIT President Emeritus L. Rafael Reif recounted the conversations he had about the idea with Schwarzman, whom he called a “perfect partner.” He detailed their collaborative efforts to transform the vision into tangible reality and emphasized how Schwarzman has “an amazing ability to look at what appears to be a hopelessly complex situation and distill it to its essence quickly.”

After almost a year of engaging in discussions with Schwarzman as well as with members of MIT’s leadership and faculty, the Institute announced the formation of the MIT Stephen A. Schwarzman College of Computing in October 2018.

To honor Schwarzman’s pivotal role in envisioning the college, Reif presented him with two gifts: A sketch of the early building concept by the architects and a photograph of the building lobby captured shortly after it opened in late January. “Thank you, Steve, for making all of this possible,” Reif said.

Appointed the inaugural dean of the MIT Schwarzman College of Computing in 2019, Dan Huttenlocher, who is also the Henry Ellis Warren Professor of Electrical Engineering and Computer Science, opened the festivities and spoke about the building as a physical manifestation of the college’s three-fold mission: to advance the forefront of computing with fields across MIT; fortify core computer science and artificial intelligence leadership; and advance social, ethical, and policy dimensions of computing.

He also conveyed his appreciation to all those who spent countless hours on the planning, design, and construction of Building 45, including key partners in MIT Campus Construction and Campus Planning; Skidmore, Owings & Merrill; and Suffolk Construction.

“It fills me with immense satisfaction and pride to see the vibrant activity of the MIT students, researchers, faculty, and staff who spend time in this building,” said Huttenlocher. “It’s really amazing to see this building come to life and become a resource for so many across the MIT campus and beyond.”

In addition, Huttenlocher thanked Anantha Chandrakasan, MIT chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science, for his early involvement with the college, and Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science, for her leadership throughout the college’s development.


Machine learning and the microscope

PhD student Xinyi Zhang is developing computational tools for analyzing cells in the age of multimodal data.


With recent advances in imaging, genomics, and other technologies, the life sciences are awash in data. If a biologist is studying cells taken from the brain tissue of Alzheimer’s patients, for example, there could be any number of characteristics they want to investigate — a cell’s type, the genes it’s expressing, its location within the tissue, or more. However, while cells can now be probed experimentally using different kinds of measurements simultaneously, when it comes to analyzing the data, scientists usually can only work with one type of measurement at a time.

Working with “multimodal” data, as it’s called, requires new computational tools, which is where Xinyi Zhang comes in.

The fourth-year MIT PhD student is bridging machine learning and biology to understand fundamental biological principles, especially in areas where conventional methods have hit limitations. Working in the lab of MIT Professor Caroline Uhler in the Department of Electrical Engineering and Computer Science, the Laboratory for Information and Decision Systems, and the Institute for Data, Systems, and Society, and collaborating with researchers at the Eric and Wendy Schmidt Center at the Broad Institute and elsewhere, Zhang has led multiple efforts to build computational frameworks and principles for understanding the regulatory mechanisms of cells.

“All of these are small steps toward the end goal of trying to answer how cells work, how tissues and organs work, why they have disease, and why they can sometimes be cured and sometimes not,” Zhang says.

The activities Zhang pursues in her down time are no less ambitious. The list of hobbies she has taken up at the Institute includes sailing, skiing, ice skating, rock climbing, performing with MIT’s Concert Choir, and flying single-engine planes. (She earned her pilot’s license in November 2022.)

“I guess I like to go to places I’ve never been and do things I haven’t done before,” she says with signature understatement.

Uhler, her advisor, says that Zhang’s quiet humility leads to a surprise “in every conversation.”

“Every time, you learn something like, ‘Okay, so now she’s learning to fly,’” Uhler says. “It’s just amazing. Anything she does, she does for the right reasons. She wants to be good at the things she cares about, which I think is really exciting.”

Zhang first became interested in biology as a high school student in Hangzhou, China. She liked that her teachers couldn’t answer her questions in biology class, which led her to see it as the “most interesting” topic to study.

Her interest in biology eventually turned into an interest in bioengineering. After her parents, who were middle school teachers, suggested studying in the United States, she majored in bioengineering alongside electrical engineering and computer science as an undergraduate at the University of California at Berkeley.

Zhang was ready to dive straight into MIT’s EECS PhD program after graduating in 2020, but the Covid-19 pandemic delayed her first year. Despite that, in December 2022, she, Uhler, and two other co-authors published a paper in Nature Communications.

The groundwork for the paper was laid by Xiao Wang, one of the co-authors. She had previously done work with the Broad Institute in developing a form of spatial cell analysis that combined multiple forms of cell imaging and gene expression for the same cell while also mapping out the cell’s place in the tissue sample it came from — something that had never been done before.

This innovation had many potential applications, including enabling new ways of tracking the progression of various diseases, but there was no way to analyze all the multimodal data the method produced. In came Zhang, who became interested in designing a computational method that could.

The team focused on chromatin staining as their imaging method of choice, which is relatively cheap but still reveals a great deal of information about cells. The next step was integrating the spatial analysis techniques developed by Wang, and to do that, Zhang began designing an autoencoder.

Autoencoders are a type of neural network that typically encodes large amounts of high-dimensional data into a compressed representation, then expands the transformed data back to its original size. In this case, Zhang’s autoencoder did the reverse, taking the input data and making it higher-dimensional. This allowed the team to combine data from different animals and remove technical variations that were not due to meaningful biological differences.
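To make the idea concrete, here is a minimal sketch, in PyTorch, of an autoencoder whose encoder expands the data into a higher-dimensional space instead of compressing it. The architecture and layer sizes are illustrative assumptions, not the actual STACI model:

```python
# A minimal sketch of an "expanding" autoencoder. Layer sizes are
# hypothetical; the real STACI architecture is described in the paper.
import torch
import torch.nn as nn

class ExpandingAutoencoder(nn.Module):
    def __init__(self, input_dim=512, latent_dim=2048):
        super().__init__()
        # Encoder lifts the input into a *higher*-dimensional space,
        # where data from different animals can be jointly aligned.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Decoder maps the representation back to the original size.
        self.decoder = nn.Linear(latent_dim, input_dim)

    def forward(self, x):
        z = self.encoder(x)        # higher-dimensional embedding
        return self.decoder(z), z  # reconstruction and embedding

model = ExpandingAutoencoder()
x = torch.randn(8, 512)  # a toy batch of per-cell features
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction objective
```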

In the paper, they used this technology, abbreviated as STACI, to identify how cells and tissues reveal the progression of Alzheimer’s disease when observed under a number of spatial and imaging techniques. The model can also be used to analyze any number of diseases, Zhang says.

Given unlimited time and resources, her dream would be to build a fully complete model of human life. Unfortunately, both time and resources are limited. Her ambition isn’t, however, and she says she wants to keep applying her skills to solve the “most challenging questions that we don’t have the tools to answer.”

She’s currently working on wrapping up a couple of projects, one focused on studying neurodegeneration by analyzing frontal cortex imaging and another on predicting protein images from protein sequences and chromatin imaging.

“There are still many unanswered questions,” she says. “I want to pick questions that are biologically meaningful, that help us understand things we didn’t know before.”


Reasoning skills of large language models are often overestimated

New CSAIL research highlights how LLMs excel in familiar scenarios but struggle in novel ones, questioning their true reasoning abilities versus reliance on memorization.


When it comes to artificial intelligence, appearances can be deceiving. The mystery surrounding the inner workings of large language models (LLMs) stems from their vast size, complex training methods, hard-to-predict behaviors, and elusive interpretability.

MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers recently took a proverbial magnifying glass to how LLMs fare on variations of different tasks, revealing intriguing insights into the interplay between memorization and reasoning skills. It turns out that their reasoning abilities are often overestimated.

The study compared “default tasks,” the common tasks a model is trained and tested on, with “counterfactual scenarios,” hypothetical situations deviating from default conditions — which models like GPT-4 and Claude can usually be expected to cope with. The researchers developed some tests outside the models’ comfort zones by tweaking existing tasks instead of creating entirely new ones, using a variety of datasets and benchmarks tailored to different aspects of the models’ capabilities, such as arithmetic, chess, evaluating code, and answering logical questions.

When users interact with language models, any arithmetic is usually in base 10, the number base most familiar to the models. But observing that they do well on base-10 arithmetic could give us a false impression of their having strong competency in addition. Logically, if they truly possessed good addition skills, you’d expect reliably high performance across all number bases, similar to calculators or computers. Indeed, the research showed that these models are not as robust as many initially think: their high performance is limited to common task variants and suffers consistent, severe drops in unfamiliar counterfactual scenarios, indicating a lack of generalizable addition ability.
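To see what such a counterfactual probe looks like, here is a minimal sketch — our own illustration, not the study’s code — that poses the same addition problem in the familiar base 10 and in an unfamiliar base such as base 9:

```python
# Illustrative generator for "default" (base-10) and "counterfactual"
# (base-N) addition prompts; not the benchmark code from the paper.
def to_base(n: int, base: int) -> str:
    """Render a non-negative integer in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(str(r))
    return "".join(reversed(digits))

def addition_prompt(a: int, b: int, base: int) -> tuple[str, str]:
    """Return a (prompt, expected_answer) pair for base-`base` addition."""
    prompt = f"In base {base}, what is {to_base(a, base)} + {to_base(b, base)}?"
    return prompt, to_base(a + b, base)

# Default task vs. counterfactual variant of the *same* problem:
print(addition_prompt(27, 58, 10))  # ('In base 10, what is 27 + 58?', '85')
print(addition_prompt(27, 58, 9))   # ('In base 9, what is 30 + 64?', '104')
```

A model with a genuinely general notion of addition should answer both variants equally well.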

The pattern held true for many other tasks like musical chord fingering, spatial reasoning, and even chess problems where the starting positions of pieces were slightly altered. While human players are expected to still be able to determine the legality of moves in altered scenarios (given enough time), the models struggled and couldn’t perform better than random guessing, meaning they have limited ability to generalize to unfamiliar situations. And much of their performance on the standard tasks is likely not due to general task abilities, but overfitting to, or directly memorizing from, what they have seen in their training data.

“We’ve uncovered a fascinating aspect of large language models: they excel in familiar scenarios, almost like a well-worn path, but struggle when the terrain gets unfamiliar. This insight is crucial as we strive to enhance these models’ adaptability and broaden their application horizons,” says Zhaofeng Wu, an MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and the lead author on a new paper about the research. “As AI is becoming increasingly ubiquitous in our society, it must reliably handle diverse scenarios, whether familiar or not. We hope these insights will one day inform the design of future LLMs with improved robustness.”

Despite the insights gained, there are, of course, limitations. The study’s focus on specific tasks and settings didn’t capture the full range of challenges the models could potentially encounter in real-world applications, signaling the need for more diverse testing environments. Future work could involve expanding the range of tasks and counterfactual conditions to uncover more potential weaknesses. This could mean looking at more complex and less common scenarios. The team also wants to improve interpretability by creating methods to better comprehend the rationale behind the models’ decision-making processes.

“As language models scale up, understanding their training data becomes increasingly challenging even for open models, let alone proprietary ones,” says Hao Peng, assistant professor at the University of Illinois at Urbana-Champaign. “The community remains puzzled about whether these models genuinely generalize to unseen tasks, or seemingly succeed by memorizing the training data. This paper makes important strides in addressing this question. It constructs a suite of carefully designed counterfactual evaluations, providing fresh insights into the capabilities of state-of-the-art LLMs. It reveals that their ability to solve unseen tasks is perhaps far more limited than anticipated by many. It has the potential to inspire future research towards identifying the failure modes of today’s models and developing better ones.”

Additional authors include Najoung Kim, who is a Boston University assistant professor and Google visiting researcher, and seven CSAIL affiliates: MIT electrical engineering and computer science (EECS) PhD students Linlu Qiu, Alexis Ross, Ekin Akyürek SM ’21, and Boyuan Chen; former postdoc and Apple AI/ML researcher Bailin Wang; and EECS assistant professors Jacob Andreas and Yoon Kim. 

The team’s study was supported, in part, by the MIT–IBM Watson AI Lab, the MIT Quest for Intelligence, and the National Science Foundation. The team presented the work at the North American Chapter of the Association for Computational Linguistics (NAACL) last month.


MIT SHASS announces appointment of new heads for 2024-25

School of Humanities, Arts, and Social Sciences appoints new heads across multiple academic units.


The MIT School of Humanities, Arts, and Social Sciences (SHASS) has announced several changes to the leadership of its academic units for the 2024-25 academic year.

“I’m confident these outstanding members of the SHASS community will provide exceptional leadership. I’m excited to see each implement their vision for the future of their unit,” says Agustin Rayo, the Kenan Sahin Dean of MIT SHASS.


When to trust an AI model

More accurate uncertainty estimates could help users decide about how and when to use machine-learning models in the real world.


Because machine-learning models can give false predictions, researchers often equip them with the ability to tell a user how confident they are about a certain decision. This is especially important in high-stakes settings, such as when models are used to help identify disease in medical images or filter job applications.

But a model’s uncertainty quantifications are only useful if they are accurate. If a model says it is 49 percent confident that a medical image shows a pleural effusion, then 49 percent of the time, the model should be right.
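A standard way to check this property is to bucket a model’s predictions by stated confidence and compare each bucket’s average confidence with its observed accuracy. The sketch below illustrates that generic check; it is not the method introduced in the paper:

```python
# A generic calibration check: in a well-calibrated model, predictions
# made with ~49% confidence should be correct ~49% of the time.
import numpy as np

def calibration_table(confidences, correct, n_bins=10):
    """Per confidence bin, report mean confidence vs. observed accuracy."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        if mask.any():
            rows.append((lo, hi, confidences[mask].mean(), correct[mask].mean()))
    return rows

# Toy usage: simulate a perfectly calibrated model, where the chance of
# being correct equals the stated confidence.
conf = np.random.rand(10000)
outcome = np.random.rand(10000) < conf
for lo, hi, c, a in calibration_table(conf, outcome):
    print(f"[{lo:.1f}, {hi:.1f}): confidence {c:.2f}, accuracy {a:.2f}")
```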

MIT researchers have introduced a new approach that can improve uncertainty estimates in machine-learning models. Their method not only generates more accurate uncertainty estimates than other techniques, but does so more efficiently.

In addition, because the technique is scalable, it can be applied to huge deep-learning models that are increasingly being deployed in health care and other safety-critical situations.

This technique could give end users, many of whom lack machine-learning expertise, better information they can use to determine whether to trust a model’s predictions or if the model should be deployed for a particular task.

“It is easy to see these models perform really well in scenarios where they are very good, and then assume they will be just as good in other scenarios. This makes it especially important to push this kind of work that seeks to better calibrate the uncertainty of these models to make sure they align with human notions of uncertainty,” says lead author Nathan Ng, a graduate student at the University of Toronto who is a visiting student at MIT.

Ng wrote the paper with Roger Grosse, an assistant professor of computer science at the University of Toronto; and senior author Marzyeh Ghassemi, an associate professor in the Department of Electrical Engineering and Computer Science and a member of the Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems. The research will be presented at the International Conference on Machine Learning.

Quantifying uncertainty

Uncertainty quantification methods often require complex statistical calculations that don’t scale well to machine-learning models with millions of parameters. These methods also require users to make assumptions about the model and data used to train it.

The MIT researchers took a different approach. They use what is known as the minimum description length principle (MDL), which does not require the assumptions that can hamper the accuracy of other methods. MDL is used to better quantify and calibrate uncertainty for test points the model has been asked to label.

The technique the researchers developed, known as IF-COMP, makes MDL fast enough to use with the kinds of large deep-learning models deployed in many real-world settings.

MDL involves considering all possible labels a model could give a test point. If many alternative labels fit this point well, the model’s confidence in the label it chose should decrease accordingly.

“One way to understand how confident a model is would be to tell it some counterfactual information and see how likely it is to believe you,” Ng says.

For example, consider a model that says a medical image shows a pleural effusion. If the researchers tell the model this image shows an edema, and it is willing to update its belief, then the model should be less confident in its original decision.

With MDL, if a model is confident when it labels a datapoint, it should use a very short code to describe that point. If it is uncertain about its decision because the point could have many other labels, it uses a longer code to capture these possibilities.

The amount of code used to label a datapoint is known as stochastic data complexity. If the researchers ask the model how willing it is to update its belief about a datapoint given contrary evidence, the stochastic data complexity should decrease if the model is confident.
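The underlying quantity is just a code length. Under Shannon coding, a label assigned probability p costs -log2(p) bits to describe, so confident predictions are cheap to encode and uncertain ones are expensive. A minimal illustration — ours, not the paper’s code:

```python
# Shannon code length: confident labels are short, unlikely labels long.
import math

def code_length_bits(prob: float) -> float:
    """Code length, in bits, for a label assigned probability `prob`."""
    return -math.log2(prob)

print(code_length_bits(0.99))  # ~0.014 bits: confident label, short code
print(code_length_bits(0.50))  # 1.0 bit: a coin flip between two labels
print(code_length_bits(0.10))  # ~3.3 bits: unlikely label, long code
```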

But testing each datapoint using MDL would require an enormous amount of computation.

Speeding up the process

With IF-COMP, the researchers developed an approximation technique that can accurately estimate stochastic data complexity using a special function, known as an influence function. They also employed a statistical technique called temperature-scaling, which improves the calibration of the model’s outputs. This combination of influence functions and temperature-scaling enables high-quality approximations of the stochastic data complexity.
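Temperature scaling on its own is a simple, standard operation: the model’s logits are divided by a scalar T (fitted on held-out data) before the softmax, which softens the probabilities when T > 1 and sharpens them when T < 1. A minimal sketch with illustrative numbers:

```python
# Temperature scaling of a classifier's logits; values are illustrative.
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([3.0, 1.0, 0.2])
print(softmax(logits))        # raw, possibly overconfident probabilities
print(softmax(logits / 2.0))  # T = 2 softens the distribution
```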

In the end, IF-COMP can efficiently produce well-calibrated uncertainty quantifications that reflect a model’s true confidence. The technique can also determine whether the model has mislabeled certain data points or reveal which data points are outliers.

The researchers tested their system on these three tasks and found that it was faster and more accurate than other methods.

“It is really important to have some certainty that a model is well-calibrated, and there is a growing need to detect when a specific prediction doesn’t look quite right. Auditing tools are becoming more necessary in machine-learning problems as we use large amounts of unexamined data to make models that will be applied to human-facing problems,” Ghassemi says.

IF-COMP is model-agnostic, so it can provide accurate uncertainty quantifications for many types of machine-learning models. This could enable it to be deployed in a wider range of real-world settings, ultimately helping more practitioners make better decisions.

“People need to understand that these systems are very fallible and can make things up as they go. A model may look like it is highly confident, but there are a ton of different things it is willing to believe given evidence to the contrary,” Ng says.

In the future, the researchers are interested in applying their approach to large language models and studying other potential use cases for the minimum description length principle. 


MIT ARCLab announces winners of inaugural Prize for AI Innovation in Space

The challenge asked teams to develop AI algorithms to track and predict satellites’ patterns of life in orbit using passively collected data.


Satellite density in Earth’s orbit has increased exponentially in recent years, as the falling cost of small satellites has allowed governments, researchers, and private companies to launch some 2,877 satellites in 2023 alone. This includes increased geostationary Earth orbit (GEO) satellite activity, which brings technologies with global-scale impact, from broadband internet to climate surveillance. Along with the manifold benefits of these satellite-enabled technologies, however, come increased safety and security risks, as well as environmental concerns. More accurate and efficient methods of monitoring and modeling satellite behavior are urgently needed to prevent collisions and other disasters.

To address this challenge, the MIT Astrodynamics, Space Robotics, and Controls Laboratory (ARCLab) launched the MIT ARCLab Prize for AI Innovation in Space: a first-of-its-kind competition asking contestants to harness AI to characterize satellites’ patterns of life (PoLs) — the long-term behavioral narrative of a satellite in orbit — using purely passively collected information. Following the call for participants last fall, 126 teams used machine learning to create algorithms to label and time-stamp the behavioral modes of GEO satellites over a six-month period, competing for accuracy and efficiency.

With support from the U.S. Department of the Air Force-MIT AI Accelerator, the challenge offered a total of $25,000 in prizes. A team of judges from ARCLab and MIT Lincoln Laboratory evaluated the submissions based on clarity, novelty, technical depth, and reproducibility, assigning each entry a score out of 100 points. Now the judges have announced the winners and runners-up:

First prize: David Baldsiefen — Team Hawaii2024

With a winning score of 96, Baldsiefen will be awarded $10,000 and is invited to join the ARCLab team in presenting at a poster session at the Advanced Maui Optical and Space Surveillance Technologies (AMOS) Conference in Hawaii this fall. One evaluator noted, “Clear and concise report, with very good ideas such as the label encoding of the localizer. Decisions on the architectures and the feature engineering are well reasoned. The code provided is also well documented and structured, allowing an easy reproducibility of the experimentation.”

Second prize: Binh Tran, Christopher Yeung, Kurtis Johnson, Nathan Metzger — Team Millennial-IUP

With a score of 94.2, Millennial-IUP will be awarded $5,000 and will also join the ARCLab team at the AMOS conference. One evaluator said, “The models chosen were sensible and justified, they made impressive efforts in efficiency gains… They used physics to inform their models and this appeared to be reproducible. Overall it was an easy to follow, concise report without much jargon.”

Third prize: Isaac Haik and Francois Porcher — Team QR_Is

With a score of 94, Haik and Porcher will share the third prize of $3,000 and will also be invited to the AMOS conference with the ARCLab team. One evaluator noted, “This informative and interesting report describes the combination of ML and signal processing techniques in a compelling way, assisted by informative plots, tables, and sequence diagrams. The author identifies and describes a modular approach to class detection and their assessment of feature utility, which they correctly identify is not evenly useful across classes… Any lack of mission expertise is made up for by a clear and detailed discussion of the benefits and pitfalls of the methods they used and discussion of what they learned.”

The fourth- through seventh-place scoring teams will each receive $1,000 and a certificate of excellence.

“The goal of this competition was to foster an interdisciplinary approach to problem-solving in the space domain by inviting AI development experts to apply their skills in this new context of orbital capacity. And all of our winning teams really delivered — they brought technical skill, novel approaches, and expertise to a very impressive round of submissions,” says Professor Richard Linares, who heads ARCLab.

Active modeling with passive data

Throughout a GEO satellite’s time in orbit, operators issue commands to place it in various behavioral modes — station-keeping, longitudinal shifts, end-of-life behaviors, and so on. Satellite patterns of life (PoLs) describe on-orbit behavior composed of sequences of both natural and non-natural behavior modes.

ARCLab has developed a groundbreaking benchmarking tool for geosynchronous satellite pattern-of-life characterization and created the Satellite Pattern-of-Life Identification Dataset (SPLID), comprising real and synthetic space object data. The challenge participants used this tool to create algorithms that use AI to map out the on-orbit behaviors of a satellite.
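As a rough illustration of the task format — not the SPLID tooling or any winning entry — a PoL-labeling algorithm takes a satellite’s time series and emits time-stamped behavioral-mode events, for example by thresholding day-to-day changes in longitude:

```python
# Toy PoL labeler: flag 'drift' vs. 'station-keeping' from longitude
# motion, then collapse runs into time-stamped mode-change events.
import numpy as np

def label_modes(longitudes, threshold=0.01):
    """Return (start_index, mode) events from a longitude time series."""
    rates = np.abs(np.diff(longitudes))
    labels = ["drift" if r > threshold else "station-keeping" for r in rates]
    events = [(0, labels[0])]
    for i, mode in enumerate(labels[1:], start=1):
        if mode != events[-1][1]:
            events.append((i, mode))
    return events

# Toy series: a satellite drifts east for 50 days, then holds its slot.
lon = np.concatenate([np.linspace(100.0, 101.0, 50), np.full(50, 101.0)])
print(label_modes(lon))  # [(0, 'drift'), (49, 'station-keeping')]
```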

The goal of the MIT ARCLab Prize for AI Innovation in Space is to encourage technologists and enthusiasts to bring innovation and new skills sets to well-established challenges in aerospace. The team aims to hold the competition in 2025 and 2026 to explore other topics and invite experts in AI to apply their skills to new challenges. 


Community members receive 2024 MIT Excellence Awards, Collier Medal, and Staff Award for Distinction in Service

Staff members receive recognition for their exceptional support of the MIT community.


On Wednesday, June 5, 13 individuals and four teams were awarded MIT Excellence Awards — the highest awards for staff at the Institute. Colleagues holding signs, waving pompoms, and cheering gathered in Kresge Auditorium to show their support for the honorees. In addition to the Excellence Awards, staff members were honored with the Collier Medal, the Staff Award for Distinction in Service, and the Gordon Y. Billard Award. 

The Collier Medal honors the memory of Officer Sean Collier, who gave his life protecting and serving MIT; it celebrates an individual or group whose actions demonstrate the importance of community. The Staff Award for Distinction in Service is presented to a staff member whose service results in a positive lasting impact on the Institute.

The Gordon Y. Billard Award is given annually to staff, faculty, or an MIT-affiliated individual(s) who has given "special service of outstanding merit performed for the Institute." This year, for the first time, this award was presented at the MIT Excellence Awards and Collier Medal celebration. 

The 2024 MIT Excellence Award recipients and their award categories are listed on the MIT Human Resources website.

The 2024 Collier Medal recipient was Benjamin B. Lewis, a graduate student in the Institute for Data, Systems, and Society in the MIT Schwarzman College of Computing. Last spring, he founded the Cambridge branch of End Overdose, a nonprofit dedicated to reducing drug-related overdose deaths. Through his efforts, more than 600 members of the Greater Boston community, including many at MIT, have been trained to administer lifesaving treatment at critical moments.

This year’s recipient of the 2024 Staff Award for Distinction in Service was Diego F. Arango (Department of Custodial Services, Department of Facilities), daytime custodian in Building 46. He was nominated by no fewer than 36 staff, faculty, students, and researchers for creating a positive working environment and for offering “help whenever, wherever, and to whomever needs it.”

Three community members were honored with a 2024 Gordon Y. Billard Award.

Presenters included President Sally Kornbluth; MIT Chief of Police John DiFava and Deputy Chief Steven DeMarco; Vice President for Human Resources Ramona Allen; Executive Vice President and Treasurer Glen Shor; Provost Cynthia Barnhart; Lincoln Laboratory director Eric Evans; Chancellor Melissa Nobles; and Dean of the School of Engineering Anantha Chandrakasan.

Visit the MIT Human Resources website for more information about the award recipients and categories, and to view photos and video of the event.


Study finds health risks in switching ships from diesel to ammonia fuel

Ammonia could be a nearly carbon-free maritime fuel, but without new emissions regulations, its effects on air quality could significantly harm human health.


As container ships the size of city blocks cross the oceans to deliver cargo, their huge diesel engines emit large quantities of air pollutants that drive climate change and have human health impacts. It has been estimated that maritime shipping accounts for almost 3 percent of global carbon dioxide emissions and the industry’s negative impacts on air quality cause about 100,000 premature deaths each year.

Decarbonizing shipping to reduce these detrimental effects is a goal of the International Maritime Organization, a U.N. agency that regulates maritime transport. One potential solution is switching the global fleet from fossil fuels to sustainable fuels such as ammonia, which could be nearly carbon-free when considering its production and use.

But in a new study, an interdisciplinary team of researchers from MIT and elsewhere caution that burning ammonia for maritime fuel could worsen air quality further and lead to devastating public health impacts, unless it is adopted alongside strengthened emissions regulations.

Ammonia combustion generates nitrous oxide (N2O), a greenhouse gas that is about 300 times more potent than carbon dioxide. Combustion also emits nitrogen in the form of nitrogen oxides (NO and NO2, referred to as NOx), and some ammonia may slip out unburnt; both eventually form fine particulate matter in the atmosphere. These tiny particles can be inhaled deep into the lungs, causing health problems like heart attacks, strokes, and asthma.
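That potency figure translates directly into carbon accounting: each tonne of N2O released warms the climate roughly as much as 300 tonnes of CO2. A one-line illustration using the article’s approximate figure:

```python
# CO2-equivalent of N2O emissions, using the ~300x potency figure cited
# in the article (the exact global warming potential varies by source).
GWP_N2O = 300

def co2_equivalent(n2o_tonnes: float) -> float:
    """Convert tonnes of N2O to tonnes of CO2-equivalent."""
    return n2o_tonnes * GWP_N2O

print(co2_equivalent(1.0))  # one tonne of N2O ~ 300 tonnes of CO2
```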

The new study indicates that, under current legislation, switching the global fleet to ammonia fuel could cause up to about 600,000 additional premature deaths each year. However, with stronger regulations and cleaner engine technology, the switch could lead to about 66,000 fewer premature deaths than currently caused by maritime shipping emissions, with far less impact on global warming.

“Not all climate solutions are created equal. There is almost always some price to pay. We have to take a more holistic approach and consider all the costs and benefits of different climate solutions, rather than just their potential to decarbonize,” says Anthony Wong, a postdoc in the MIT Center for Global Change Science and lead author of the study.

His co-authors include Noelle Selin, an MIT professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences (EAPS); Sebastian Eastham, a former principal research scientist who is now a senior lecturer at Imperial College London; Christine Mounaïm-Rouselle, a professor at the University of Orléans in France; Yiqi Zhang, a researcher at the Hong Kong University of Science and Technology; and Florian Allroggen, a research scientist in the MIT Department of Aeronautics and Astronautics. The research appears this week in Environmental Research Letters.

Greener, cleaner ammonia

Traditionally, ammonia is made by stripping hydrogen from natural gas and then combining it with nitrogen at extremely high temperatures. This process is often associated with a large carbon footprint. The maritime shipping industry is betting on the development of “green ammonia,” which is produced by using renewable energy to make hydrogen via electrolysis and to generate heat.

“In theory, if you are burning green ammonia in a ship engine, the carbon emissions are almost zero,” Wong says.

But even the greenest ammonia generates nitrous oxide (N2O) and nitrogen oxides (NOx) when combusted, and some of the ammonia may slip out, unburnt. This nitrous oxide would escape into the atmosphere, where the greenhouse gas would remain for more than 100 years. At the same time, the nitrogen emitted as NOx and ammonia would fall to Earth, damaging fragile ecosystems. As these emissions are digested by bacteria, additional N2O is produced.

NOx and ammonia also mix with gases in the air to form fine particulate matter. A primary contributor to air pollution, fine particulate matter kills an estimated 4 million people each year.

“Saying that ammonia is a ‘clean’ fuel is a bit of an overstretch. Just because it is carbon-free doesn’t necessarily mean it is clean and good for public health,” Wong says.

A multifaceted model

The researchers wanted to paint the whole picture, capturing the environmental and public health impacts of switching the global fleet to ammonia fuel. To do so, they designed scenarios to measure how pollutant impacts change under certain technology and policy assumptions.

From a technological point of view, they considered two ship engines. The first burns pure ammonia, which generates higher levels of unburnt ammonia but emits fewer nitrogen oxides. The second engine technology involves mixing ammonia with hydrogen to improve combustion and optimize the performance of a catalytic converter, which controls both nitrogen oxides and unburnt ammonia pollution.

They also considered three policy scenarios: current regulations, which only limit NOx emissions in some parts of the world; a scenario that adds ammonia emission limits over North America and Western Europe; and a scenario that adds global limits on ammonia and NOx emissions.

The researchers used a ship track model to calculate how pollutant emissions change under each scenario and then fed the results into an air quality model. The air quality model calculates the impact of ship emissions on particulate matter and ozone pollution. Finally, they estimated the effects on global public health.

One of the biggest challenges came from a lack of real-world data, since no ammonia-powered ships are yet sailing the seas. Instead, the researchers relied on experimental ammonia combustion data from collaborators to build their model.

“We had to come up with some clever ways to make that data useful and informative to both the technology and regulatory situations,” Wong says.

A range of outcomes

In the end, they found that with no new regulations and ship engines that burn pure ammonia, switching the entire fleet would cause 681,000 additional premature deaths each year.

“While a scenario with no new regulations is not very realistic, it serves as a good warning of how dangerous ammonia emissions could be. And unlike NOx, ammonia emissions from shipping are currently unregulated,” Wong says.

However, even without new regulations, using cleaner engine technology would cut the number of premature deaths down to about 80,000, which is about 20,000 fewer than are currently attributed to maritime shipping emissions. With stronger global regulations and cleaner engine technology, the number of people killed by air pollution from shipping could be reduced by about 66,000.

“The results of this study show the importance of developing policies alongside new technologies,” Selin says. “There is a potential for ammonia in shipping to be beneficial for both climate and air quality, but that requires that regulations be designed to address the entire range of potential impacts, including both climate and air quality.”

Ammonia’s air quality impacts would not be felt uniformly across the globe, and addressing them fully would require coordinated strategies across very different contexts. Most premature deaths would occur in East Asia, where air quality regulations are less stringent and higher levels of existing air pollution drive the formation of more particulate matter from ammonia emissions. In addition, shipping volume over East Asia is far greater than elsewhere on Earth, compounding these negative effects.

In the future, the researchers want to continue refining their analysis. They hope to use these findings as a starting point to urge the marine industry to share engine data they can use to better evaluate air quality and climate impacts. They also hope to inform policymakers about the importance and urgency of updating shipping emission regulations.

This research was funded by the MIT Climate and Sustainability Consortium.


Empowering future innovators through a social impact lens

The IDEAS Social Innovation Challenge helps students hone their entrepreneurship skills to create viable ventures for public good.


What if testing for Lyme disease were as simple as dropping a tick in a test tube at home, waiting a few minutes, and looking for a change of color?

MIT Sloan Fellow and physician Erin Dawicki is making it happen, as part of her aspiration to make Lyme testing accessible, affordable, and widespread. She noticed a troubling increase in undetected Lyme disease in her practice and collaborated with fellow MIT students to found Lyme Alert, a startup that has created the first truly at-home Lyme screening kit using nanotechnology.

Lyme Alert focuses on social impact in its mission to deliver faster diagnoses while using its technology to track disease spread. Participating in the 2024 IDEAS Social Innovation Challenge (IDEAS) helped the team refine their solution while keeping impact at the heart of their work. They ultimately won the top prize at the program’s award ceremony in the spring.

Over the past 23 years, IDEAS has fostered a community in which hundreds of entrepreneurial students have developed their innovation skills in collaboration with affected stakeholders, experienced entrepreneurs, and a supportive network of alumni, classmates, and mentors. The 14 teams in the 2024 IDEAS cohort join over 200 alumni teams — many still in operation today — that have received over $1.5 million in seed funding since 2001.

“IDEAS is a great example of experiential learning at MIT: empowering students to ask good questions, explore new frameworks, and propose sustainable interventions to urgent challenges alongside community partners," says Lauren Tyger, assistant dean of social innovation at the Priscilla King Gray Public Service Center (PKG Center) at MIT.

As MIT’s premier social impact incubator housed within the PKG Center, IDEAS prepares students to take their early-stage ideas to the next level. Teams learn how to develop relationships with constituents affected by social issues, propose interventions that yield measurable impact, and create effective social enterprise models.

“This program undoubtedly opened my eyes to the intersection of social impact and entrepreneurship, fields I previously thought to be mutually exclusive,” says Srihitha Dasari, a rising junior in brain and cognitive sciences and co-founder of another award-winning team, PuntoSalud. “It not only provided me with vital skills to advance my own interests in the startup ecosystem, but expanded my network in order to enact change.”

Shaping the “leaders of tomorrow”

Over the course of one semester, IDEAS teams participate in iterative workshops, refine their ideas with mentors, and pitch their solutions to peers and judges. The process helps students transform their concepts into social innovations in health care, finance, climate, education, and many more fields.

The program culminates in an awards ceremony at the MIT Museum, where teams share their final products. This year’s showcase featured a keynote address from Christine Ortiz, professor of materials science and engineering, whose passion for socially directed science and technology aligns with IDEAS’ focus on social impact.

“I was grateful to be a part of the journey for these 14 teams,” Ortiz says. “IDEAS speaks to the core of what MIT needs: innovators capable of thinking critically about problems within their communities.”

Five teams are selected for awards of $6,000 to $20,000 by a group of experts across a variety of industries who volunteer as judges, and two additional award grants of $2,500 are given to teams that received the most votes through the MIT Solve initiative’s IDEAS virtual showcase.

The teams that received awards this year are: Lyme Alert, which created the first truly at-home tick testing kit for Lyme disease; My Sister’s Keeper, which aims to establish a professional leadership incubator designed specifically for Muslim immigrant women in the United States; Sakhi - Simppl, which created a WhatsApp chatbot that generates responses grounded in accurate, verified knowledge from international health agencies; BendShelters, which provides sustainable, modular, and easily deployable bamboo shelters for displaced populations in Myanmar, a Southeast Asian country under a dictatorship; PuntoSalud, an AI-powered virtual health messaging system that delivers personalized, trustworthy information sourced directly from local hospitals in Argentina; ONE Community, which provides a digital network through which businesses in India at risk of displacement can connect with more customers and partners to ensure sustained and resilient growth; and Mudzi Cooking Project, a social enterprise tackling the challenges faced by women in Chisinga, Malawi, who struggle to access firewood.

As a member of the Science Hub, the PKG Center worked with corporate partner Amazon, which sponsored the top five awards for the first time in 2024. The inaugural Amazon Prizes for Social Good honored the teams’ efforts to use tech to solve social issues.

“Clearly, these students are inspired to give rather than to take, and their work distinguishes them all as the leaders of tomorrow,” says Tye Brady, chief technologist at Amazon Robotics.

All of the teams will refine their ideas over the summer and report back by the start of the next academic year. Additionally, for a period of 16 months, the teams that won awards will continue to receive guidance from the PKG Center and from a founder support network that includes the 2023 group of IDEAS grantees.

Tapping MIT’s innovation ecosystem

IDEAS is just one of the PKG Center’s programs that provide opportunities for students to focus on social impact. In tandem with other Institute resources for student innovators, PKG enables students to apply their innovation skills to solve real-world problems while supporting community-informed solutions to systemic challenges.

“The PKG Center is a valued partner in enabling the growing numbers of students who aspire to create impact-focused ventures,” says Don Shobrys, director of MIT Venture Mentoring Service.

In order to make those ventures successful, Tyger explains, “IDEAS teaches students frameworks to deeply understand the systems around a challenge, get to know who’s already addressing it, find gaps, and then design and implement something that will uniquely and sustainably address the challenge. Rather than optimizing for profit alone, IDEAS helps students learn how to optimize for what can produce the most social good or reduce the most harm.”

Tyger notes that although IDEAS’ emphasis on social impact is somewhat unique, it is complemented by MIT’s rich entrepreneurship ecosystem. “There are many resources and people who are incredibly generous with their time — and who above all do it because they know we are all supporting the growth of students,” she says.

This year’s program partners included MIT Sandbox and Arts Startup Incubator, which co-hosted informational sessions for applicants in the fall; BU Law Clinic, D-Lab, and Systems-Awareness Lab leaders, who served as guest speakers throughout the spring; Venture Mentoring Service, which matched teams with mentors; entrepreneurs-in-residence from the Martin Trust Center for MIT Entrepreneurship, who judged final pitches and advised teams; DesignX and the Center for Development and Entrepreneurship at MIT (formerly the Legatum Center), which provided additional support to several teams; MIT Solve, which hosted the teams on their voting platform; and MIT Innovation HQ, which provided space for students to meet one another and exchange ideas.

While IDEAS projects are designed to be a means of transformative change for public good, many students say that the program is transformative for them, as well. “Before IDEAS, I didn’t see myself as an innovator — just someone passionate about solving a problem that I’d heard people facing across diseases,” reflects Anika Wadhera, a rising senior in biological engineering and co-founder of Chronolog Health, a platform revolutionizing chronic illness management. “Now I feel much more confident in my ability to actually make a difference by better understanding the different stakeholders and the factors that are necessary to make a transformative solution.”


Researchers study differences in attitudes toward Covid-19 vaccines between women and men in Africa

While women and men self-reported similar vaccination rates, unvaccinated women had less intention to get vaccinated than men.


While many studies over the past several years have examined people’s access to and attitudes toward Covid-19 vaccines, few studies in sub-Saharan Africa have looked at whether there were differences in vaccination rates and intention between men and women. In a new study appearing in the journal Frontiers in Global Women’s Health, researchers found that while women and men self-reported similar Covid-19 vaccination rates in 2022, unvaccinated men expressed more intention to get vaccinated than unvaccinated women.

Women tend to have better health-seeking behaviors than men overall. However, most studies relating to Covid-19 vaccination have found that intention has been lower among women. “We wondered whether this would hold true at the uptake level,” says Rawlance Ndejjo, a leader of the new study and an assistant lecturer in the Department of Disease Control and Environmental Health at Makerere University.

The comparable vaccination rates between men and women in the study are “a good thing to see,” adds Lula Chen, research director at the MIT Governance Lab (GOV/LAB) and a co-author of the new study. “There wasn’t anything gendered about how [the vaccine] was being advertised or who was actually getting access to it.”

Women’s lower intention to vaccinate seemed to be driven by concerns about vaccine safety, suggesting that providing factual information about vaccine safety from trusted sources, like the Ministry of Health, could increase uptake.

The work is a collaboration between scholars from the MIT GOV/LAB, Makerere University’s School of Public Health in Uganda, University of Kinshasa’s School of Public Health in the Democratic Republic of the Congo (DRC), University of Ibadan’s College of Medicine in Nigeria, and Cheikh Anta Diop University in Senegal. 

Studying vaccine availability and uptake in sub-Saharan Africa

The authors’ collaboration began in 2021 with research into Covid-19 vaccination rates, people’s willingness to get vaccinated, and how people’s trust in different authorities shaped attitudes toward vaccines in Uganda, the DRC, Senegal, and Nigeria. A survey in Uganda found that people who received information about Covid-19 from health workers were more likely to be vaccinated, stressing the important role people who work in the health-care system can play in vaccination efforts.

Work from other scientists has found that women were less likely to accept Covid-19 vaccines than men, and that in low- and middle-income countries, women also may be less likely to get vaccinated against Covid-19 and less likely to intend to get vaccinated, possibly due to factors including lower levels of education, work obligations, and domestic care obligations.

Previous studies in sub-Saharan Africa that focused on differences between men and women in intention and willingness to vaccinate were inconclusive, Ndejjo says. “You would hardly find actual studies on uptake of the vaccines,” he adds. For the new paper, the researchers aimed to dig into uptake.

People who trusted the government and health officials were more likely to get vaccinated

The researchers relied on phone survey data collected from adults in the four countries between March and July 2022. The surveys asked people about whether they’d been vaccinated and whether those who were unvaccinated intended to get vaccinated, as well as their attitudes toward Covid-19, their trust in different authorities, demographic information, and more.

Overall, 48.5 percent of men said they had been vaccinated, compared to 47.9 percent of women. Trust in authorities seemed to play a role in people’s decision to vaccinate — receiving information from health workers about Covid-19 and higher trust in the Ministry of Health were both correlated with getting vaccinated for men, whereas higher trust in the government was correlated with vaccine uptake in women.

Lower interest in vaccines among women seemed related to safety concerns

A smaller percentage of unvaccinated women (54 percent) said they intended to get vaccinated, compared to 63.4 percent of men. More unvaccinated women said they had concerns about the vaccine’s safety than unvaccinated men, which could be driving their lower intention.

The researchers also found that unvaccinated women and men over 40 had similar levels of intention to get vaccinated — lower intention in women under 40 may have driven the difference between men and women. Younger women could have concerns about vaccines related to pregnancy, Chen says. If this is the case, the research suggests that officials need to provide additional reassurance to pregnant people about vaccine safety, she adds.

Trust in authorities also contributed to people’s intention to vaccinate. Trust in the Ministry of Health was tied to higher intention to vaccinate for both men and women. Men with more trust in the World Health Organization were also more likely to intend to vaccinate.

“There’s a need to deal with a lot of the myths and misconceptions that exist,” Ndejjo says, as well as ensure that people’s concerns related to vaccine safety and effectiveness are addressed. Officials need “to work with trusted sources of information to bridge some of the gaps that we observe,” he adds. People need to be supported in their decision-making so they can make the best decisions for their health.

“This research highlights linkages between citizen trust in government, their willingness to get vaccines, and, importantly, the differences between men and women on this issue — differences that policymakers will need to understand in order to design more targeted, gender-specific public health interventions,” says study co-author Lily L. Tsai, who is MIT GOV/LAB’s director and founder and the Ford Professor of Political Science at MIT.

This project was funded by the Bill & Melinda Gates Foundation.


A new way to miniaturize cell production for cancer treatment

A chip the size of a pack of cards uses fewer resources and has a smaller footprint than existing automated manufacturing platforms, and could lead to more affordable cell therapy manufacturing.


Researchers from the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, have developed a novel way to produce clinical doses of viable autologous chimeric antigen receptor (CAR) T-cells in an ultra-small automated closed-system microfluidic chip, roughly the size of a pack of cards.

This is the first time a microbioreactor has been used to produce autologous cell therapy products. Specifically, the new method was successfully used to manufacture and expand CAR T-cells that are as effective as cells produced using existing systems, but with a smaller footprint and with fewer seeding cells and cell manufacturing reagents. This could lead to more efficient and affordable methods of scaling out autologous cell therapy manufacturing, and could potentially enable point-of-care manufacturing of CAR T-cells outside of a laboratory setting — such as in hospitals and wards.

CAR T-cell therapy manufacturing requires the isolation, activation, genetic modification, and expansion of a patient’s own T-cells to kill tumor cells upon reinfusion into the patient. Although cell therapies have revolutionized cancer immunotherapy — some of the first patients to receive autologous cell therapies have been in remission for more than 10 years — the manufacturing process for CAR T-cells has remained inconsistent, costly, and time-consuming. It can be prone to contamination, subject to human error, and requires seeding cell numbers that are impractical for smaller-scale CAR T-cell production. These challenges create bottlenecks that restrict both the availability and affordability of these therapies despite their effectiveness.

In a paper titled “A high-density microbioreactor process designed for automated point-of-care manufacturing of CAR T cells,” published in the journal Nature Biomedical Engineering, SMART researchers detailed their breakthrough: Human primary T-cells can be activated, transduced, and expanded to high densities in a 2-milliliter automated closed-system microfluidic chip to produce over 60 million CAR T-cells from donors with lymphoma, and over 200 million CAR T-cells from healthy donors. The CAR T-cells produced using the microbioreactor are as effective as those produced using conventional methods, but require a smaller footprint and fewer resources. This translates to a lower cost of goods manufactured (COGM), and potentially to lower costs for patients.

The groundbreaking research was led by members of the Critical Analytics for Manufacturing Personalized-Medicine (CAMP) interdisciplinary research group at SMART. Collaborators include researchers from the Duke-NUS Medical School; the Institute of Molecular and Cell Biology at the Agency for Science, Technology and Research; KK Women’s and Children’s Hospital; and Singapore General Hospital.

“This advancement in cell therapy manufacturing could ultimately offer a point-of-care platform that could substantially increase the number of CAR T-cell production slots, reducing the wait times and cost of goods of these living medicines — making cell therapy more accessible to the masses. The use of scaled-down bioreactors could also aid process optimization studies, including for different cell therapy products,” says Michael Birnbaum, co-lead principal investigator at SMART CAMP, associate professor of biological engineering at MIT, and a co-senior author of the paper.

Thanks to high T-cell expansion rates, similar total T-cell numbers could be attained in a shorter culture period in the microbioreactor (seven to eight days) than in gas-permeable culture plates (12 days), potentially shortening production times by 30-40 percent. The CAR T-cells from the microfluidic bioreactor and the gas-permeable culture plates showed only subtle differences in cell quality, and the cells were equally functional in killing leukemia cells when tested in mice.

“This new method suggests that a dramatic miniaturization of current-generation autologous cell therapy production is feasible, with the potential of significantly alleviating manufacturing limitations of CAR T-cell therapy. Such a miniaturization would lay the foundation for point-of-care manufacturing of CAR T-cells and decrease the ‘good manufacturing practice’ (GMP) footprint required for producing cell therapies — which is one of the primary drivers of COGM,” says Wei-Xiang Sin, research scientist at SMART CAMP and first author of the paper.

Notably, the microbioreactor used in the research is a perfusion-based, automated, closed system with the smallest footprint per dose, smallest culture volume and seeding cell number, as well as the highest cell density and level of process control attainable. These microbioreactors — previously only used for microbial and mammalian cell cultures — were originally developed at MIT and have been advanced to commercial production by Millipore Sigma.

The small starting cell numbers required, compared with existing larger automated manufacturing platforms, mean that smaller amounts of isolation beads, activation reagents, and lentiviral vectors are needed per production run. In addition, the extremely small 2-milliliter culture volume — approximately 100-fold less than in larger automated culture systems — means that at least tenfold less medium is required, contributing to significant reductions in reagent cost. This could benefit patients, especially pediatric patients who have low or insufficient T-cell numbers for producing therapeutic doses of CAR T-cells.

Moving forward, SMART CAMP is working on further engineering sampling and/or analytical systems around the microbioreactor so that CAR-T production can be performed with reduced labor and out of a laboratory setting, potentially facilitating the decentralized bedside manufacturing of CAR T-cells. SMART CAMP is also looking to further optimize the process parameters and culture conditions to improve cell yield and quality for future clinical use.

The research was conducted by SMART and supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program.


“They can see themselves shaping the world they live in”

Developed by MIT RAISE, the Day of AI curriculum empowers K-12 students to collaborate on local and global challenges using AI.


During the journey from the suburbs to the city, the tree canopy often dwindles as skyscrapers rise. A group of New England Innovation Academy students wondered why that is.

“Our friend Victoria noticed that where we live in Marlborough there are lots of trees in our own backyards. But if you drive just 30 minutes to Boston, there are almost no trees,” said high school junior Ileana Fournier. “We were struck by that duality.”

This inspired Fournier and her classmates Victoria Leeth and Jessie Magenyi to prototype a mobile app that illustrates Massachusetts deforestation trends for Day of AI, a free, hands-on curriculum developed by the MIT Responsible AI for Social Empowerment and Education (RAISE) initiative, headquartered in the MIT Media Lab and in collaboration with the MIT Schwarzman College of Computing and MIT Open Learning. They were among a group of 20 students from New England Innovation Academy who shared their projects during the 2024 Day of AI global celebration hosted with the Museum of Science.

The Day of AI curriculum introduces K-12 students to artificial intelligence. Now in its third year, Day of AI enables students to improve their communities and collaborate on larger global challenges using AI. Fournier, Leeth, and Magenyi’s TreeSavers app falls under the Telling Climate Stories with Data module, one of four new climate-change-focused lessons.

“We want you to be able to express yourselves creatively to use AI to solve problems with critical-thinking skills,” Cynthia Breazeal, director of MIT RAISE, dean for digital learning at MIT Open Learning, and professor of media arts and sciences, said during this year’s Day of AI global celebration at the Museum of Science. “We want you to have an ethical and responsible way to think about this really powerful, cool, and exciting technology.”

Moving from understanding to action

Day of AI invites students to examine the intersection of AI and various disciplines, such as history, civics, computer science, math, and climate change. With the curriculum available year-round, more than 10,000 educators across 114 countries have brought Day of AI activities to their classrooms and homes.

The curriculum gives students the agency to evaluate local issues and invent meaningful solutions. “We’re thinking about how to create tools that will allow kids to have direct access to data and have a personal connection that intersects with their lived experiences,” Robert Parks, curriculum developer at MIT RAISE, said at the Day of AI global celebration.

Before this year, first-year Jeremie Kwapong said he knew very little about AI. “I was very intrigued,” he said. “I started to experiment with ChatGPT to see how it reacts. How close can I get this to human emotion? What is AI’s knowledge compared to a human’s knowledge?”

In addition to helping students spark an interest in AI literacy, teachers around the world have told MIT RAISE that they want to use data science lessons to engage students in conversations about climate change. Therefore, Day of AI’s new hands-on projects use weather and climate change to show students why it’s important to develop a critical understanding of dataset design and collection when observing the world around them.

“There is a lag between cause and effect in everyday lives,” said Parks. “Our goal is to demystify that, and allow kids to access data so they can see a long view of things.”

Tools like MIT App Inventor — which allows anyone to create a mobile application — help students make sense of what they can learn from data. Fournier, Leeth, and Magenyi programmed TreeSavers in App Inventor to chart regional deforestation rates across Massachusetts, identify ongoing trends through statistical models, and predict environmental impact. The students put that “long view” of climate change into practice when developing TreeSavers’ interactive maps. Users can toggle between Massachusetts’s current tree cover, historical data, and future high-risk areas.

Although AI provides fast answers, it doesn’t necessarily offer equitable solutions, said David Sittenfeld, director of the Center for the Environment at the Museum of Science. The Day of AI curriculum asks students to make decisions on sourcing data, ensuring unbiased data, and thinking responsibly about how findings could be used.

“There’s an ethical concern about tracking people’s data,” said Ethan Jorda, a New England Innovation Academy student. His group used open-source data to program an app that helps users track and reduce their carbon footprint.

Christine Cunningham, senior vice president of STEM Learning at the Museum of Science, believes students are prepared to use AI responsibly to make the world a better place. “They can see themselves shaping the world they live in,” said Cunningham. “Moving through from understanding to action, kids will never look at a bridge or a piece of plastic lying on the ground in the same way again.”

Deepening collaboration on earth and beyond

The 2024 Day of AI speakers emphasized collaborative problem solving at the local, national, and global levels.

“Through different ideas and different perspectives, we’re going to get better solutions,” said Cunningham. “How do we start young enough that every child has a chance to both understand the world around them but also to move toward shaping the future?”

Presenters from MIT, the Museum of Science, and NASA approached this question with a common goal — expanding STEM education to learners of all ages and backgrounds.

“We have been delighted to collaborate with the MIT RAISE team to bring this year’s Day of AI celebration to the Museum of Science,” says Meg Rosenburg, manager of operations at the Museum of Science Centers for Public Science Learning. “This opportunity to highlight the new climate modules for the curriculum not only perfectly aligns with the museum’s goals to focus on climate and active hope throughout our Year of the Earthshot initiative, but it has also allowed us to bring our teams together and grow a relationship that we are very excited to build upon in the future.”

Rachel Connolly, systems integration and analysis lead for NASA's Science Activation Program, showed the power of collaboration with the example of how human comprehension of Saturn’s appearance has evolved. From Galileo’s early telescope to the Cassini space probe, modern imaging of Saturn represents 400 years of science, technology, and math working together to further knowledge.

“Technologies, and the engineers who built them, advance the questions we’re able to ask and therefore what we’re able to understand,” said Connolly, research scientist at MIT Media Lab.

New England Innovation Academy students saw an opportunity for collaboration a little closer to home. Emmett Buck-Thompson, Jeff Cheng, and Max Hunt envisioned a social media app to connect volunteers with local charities. Their project was inspired by Buck-Thompson’s father’s difficulties finding volunteering opportunities, Hunt’s role as the president of the school’s Community Impact Club, and Cheng’s aspiration to reduce screen time for social media users. Using MIT App Inventor, the students turned their combined ideas into a prototype with the potential to make a real-world impact in their community.

The Day of AI curriculum teaches the mechanics of AI, ethical considerations and responsible uses, and interdisciplinary applications for different fields. It also empowers students to become creative problem solvers and engaged citizens in their communities and online. From supporting volunteer efforts to encouraging action for the state’s forests to tackling the global challenge of climate change, today’s students are becoming tomorrow’s leaders with Day of AI.

“We want to empower you to know that this is a tool you can use to make your community better, to help people around you with this technology,” said Breazeal.

Other Day of AI speakers included Tim Ritchie, president of the Museum of Science; Michael Lawrence Evans, program director of the Boston Mayor’s Office of New Urban Mechanics; Dava Newman, director of the MIT Media Lab; and Natalie Lao, executive director of the App Inventor Foundation.


A new strategy to cope with emotional stress

A study by MIT scientists supports “social good” as a cognitive approach to dealing with highly stressful events.


Some people, especially those in public service, perform admirable feats: Think of health-care workers fighting to keep patients alive or first responders arriving at the scene of a car crash. But the emotional weight can become a mental burden. Research has shown that emergency personnel are at elevated risk for mental health challenges like post-traumatic stress disorder. How can people undergo such stressful experiences and also maintain their well-being?

A new study from the McGovern Institute for Brain Research at MIT revealed that a cognitive strategy focused on social good may be effective in helping people cope with distressing events. The research team found that the approach was comparable to another well-established emotion regulation strategy, unlocking a new tool for dealing with highly adverse situations.

“How you think can improve how you feel,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT, who is a senior author of the paper. “This research suggests that the social good approach might be particularly useful in improving well-being for those constantly exposed to emotionally taxing events.”

The study, published today in PLOS ONE, is the first to examine the efficacy of this cognitive strategy. Nancy Tsai, a postdoc in Gabrieli’s lab at the McGovern Institute, is the lead author of the paper.

Emotion regulation tools

Emotion regulation is the ability to mentally reframe how we experience emotions — a skill critical to maintaining good mental health. Doing so can make one feel better when dealing with adverse events, and emotion regulation has been shown to boost emotional, social, cognitive, and physiological outcomes across the lifespan.

One emotion regulation strategy is “distancing,” where a person copes with a negative event by imagining it as happening far away, a long time ago, or from a third-person perspective. Distancing has been well-documented as a useful cognitive tool, but it may be less effective in certain situations, especially ones that are socially charged — like a firefighter rescuing a family from a burning home. Rather than distancing themselves, a person may instead be forced to engage directly with the situation.

“In these cases, the ‘social good’ approach may be a powerful alternative,” says Tsai. “When a person uses the social good method, they view a negative situation as an opportunity to help others or prevent further harm.” For example, a firefighter experiencing emotional distress might focus on the fact that their work enables them to save lives. The idea had yet to be backed by scientific investigation, so Tsai and her team, alongside Gabrieli, saw an opportunity to rigorously probe this strategy.

A novel study

The MIT researchers recruited a cohort of adults and had them complete a questionnaire to gather information including demographics, personality traits, and current well-being, as well as how they regulated their emotions and dealt with stress. The cohort was randomly split into two groups: a distancing group and a social good group. In the online study, each group was shown a series of images that were either neutral (such as fruit) or contained highly aversive content (such as bodily injury). Participants were fully informed of the kinds of images they might see and could opt out of the study at any time.

Each group was asked to use their assigned cognitive strategy to respond to half of the negative images. For example, while looking at a distressing image, a person in the distancing group could have imagined that it was a screenshot from a movie. Conversely, a subject in the social good group might have responded to the image by envisioning that they were a first responder saving people from harm. For the other half of the negative images, participants were asked to only look at them and pay close attention to their emotions. The researchers asked the participants how they felt after each image was shown.

Social good as a potent strategy

The MIT team found that both the distancing and social good approaches helped diminish negative emotions. Participants reported feeling better when they used these strategies after viewing adverse content compared to when they did not, and stated that both strategies were easy to implement.

The results also revealed that, overall, distancing yielded a stronger effect. Importantly, however, Tsai and Gabrieli believe that this study offers compelling evidence for social good as a powerful method better suited to situations when people cannot distance themselves, like rescuing someone from a car crash, “which is more probable for people in the real world,” notes Tsai. Moreover, the team discovered that people who most successfully used the social good approach were more likely to view stress as enhancing rather than debilitating. Tsai says this link may point to psychological mechanisms that underlie both emotion regulation and how people respond to stress.

Additionally, the results showed that older adults used the cognitive strategies more effectively than younger adults. The team suspects that this is probably because, as prior research has shown, older adults are more adept at regulating their emotions, likely due to having greater life experiences. The authors note that successful emotion regulation also requires cognitive flexibility, or having a malleable mindset to adapt well to different situations.

“This is not to say that people, such as physicians, should reframe their emotions to the point where they fully detach themselves from negative situations,” says Gabrieli. “But our study shows that the social good approach may be a potent strategy to combat the immense emotional demands of certain professions.”

The MIT team says future studies are needed to further validate this work. Such research is promising, they add, because it could uncover new cognitive tools that equip individuals to take care of themselves as they bravely assume the challenge of taking care of others.


Study: Weaker ocean circulation could enhance CO2 buildup in the atmosphere

New findings challenge current thinking on the ocean’s role in storing carbon.


As climate change advances, the ocean’s overturning circulation is predicted to weaken substantially. With such a slowdown, scientists estimate the ocean will pull down less carbon dioxide from the atmosphere. However, a slower circulation should also dredge up less carbon from the deep ocean that would otherwise be released back into the atmosphere. On balance, then, the ocean should keep drawing down atmospheric carbon dioxide, if at a slower pace.

However, a new study by an MIT researcher finds that scientists may have to rethink the relationship between the ocean’s circulation and its long-term capacity to store carbon. As the circulation weakens, the ocean could instead release more carbon from the deep ocean into the atmosphere.

The reason has to do with a previously uncharacterized feedback between the ocean’s available iron, upwelling carbon and nutrients, surface microorganisms, and a little-known class of molecules known generally as “ligands.” When the ocean circulates more slowly, all these players interact in a self-perpetuating cycle that ultimately increases the amount of carbon that the ocean outgases back to the atmosphere.

“By isolating the impact of this feedback, we see a fundamentally different relationship between ocean circulation and atmospheric carbon levels, with implications for the climate,” says study author Jonathan Lauderdale, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “What we thought is going on in the ocean is completely overturned.”

Lauderdale says the findings show that “we can’t count on the ocean to store carbon in the deep ocean in response to future changes in circulation. We must be proactive in cutting emissions now, rather than relying on these natural processes to buy us time to mitigate climate change.”

His study appears today in the journal Nature Communications.

Box flow

In 2020, Lauderdale led a study that explored ocean nutrients, marine organisms, and iron, and how their interactions influence the growth of phytoplankton around the world. Phytoplankton are microscopic, plant-like organisms that live on the ocean surface and consume a diet of carbon and nutrients that upwell from the deep ocean and iron that drifts in from desert dust.

The more phytoplankton that can grow, the more carbon dioxide they can absorb from the atmosphere via photosynthesis, and this plays a large role in the ocean’s ability to sequester carbon.

For the 2020 study, the team developed a simple “box” model, representing conditions in different parts of the ocean as general boxes, each with a different balance of nutrients, iron, and ligands — organic molecules that are thought to be byproducts of phytoplankton. The team modeled a general flow between the boxes to represent the ocean’s larger circulation — the way seawater sinks, then is buoyed back up to the surface in different parts of the world.

This modeling revealed that, even if scientists were to “seed” the oceans with extra iron, that iron wouldn’t have much of an effect on global phytoplankton growth. The reason was a limit set by ligands. It turns out that, if left on its own, iron is insoluble in the ocean and therefore unavailable to phytoplankton. Iron only becomes soluble at “useful” levels when linked with ligands, which keep iron in a form that plankton can consume. Lauderdale found that adding iron to one ocean region to consume additional nutrients robs other regions of nutrients that phytoplankton there need to grow. This lowers the production of ligands and the supply of iron back to the original ocean region, limiting the amount of extra carbon that would be taken up from the atmosphere.

Unexpected switch

Once the team published their study, Lauderdale reworked the box model into a form he could make publicly accessible, adding ocean-atmosphere carbon exchange and extending the boxes to represent more diverse environments, such as conditions similar to the Pacific, the North Atlantic, and the Southern Ocean. In the process, he tested other interactions within the model, including the effect of varying ocean circulation.

He ran the model with different circulation strengths, expecting to see less atmospheric carbon dioxide with weaker ocean overturning — a relationship that previous studies have supported, dating back to the 1980s. But what he found instead was a clear and opposite trend: The weaker the ocean’s circulation, the more CO2 built up in the atmosphere.

“I thought there was some mistake,” Lauderdale recalls. “Why were atmospheric carbon levels trending the wrong way?”

When he checked the model, he found that the parameter describing ocean ligands had been left “on” as a variable. In other words, the model was calculating ligand concentrations as changing from one ocean region to another.

On a hunch, Lauderdale turned this parameter “off,” which set ligand concentrations as constant in every modeled ocean environment, an assumption that many ocean models typically make. That one change reversed the trend, back to the assumed relationship: A weaker circulation led to reduced atmospheric carbon dioxide. But which trend was closer to the truth?

Lauderdale looked to the scant available data on ocean ligands to see whether their concentrations were more constant or variable in the actual ocean. He found his answer in GEOTRACES, an international program that coordinates measurements of trace elements and isotopes across the world’s oceans and lets scientists compare concentrations from region to region. Indeed, the molecules’ concentrations varied. If ligand concentrations do change from one region to another, then his surprising new result was likely representative of the real ocean: A weaker circulation leads to more carbon dioxide in the atmosphere.

“It’s this one weird trick that changed everything,” Lauderdale says. “The ligand switch has revealed this completely different relationship between ocean circulation and atmospheric CO2 that we thought we understood pretty well.”

Slow cycle

To see what might explain the overturned trend, Lauderdale analyzed biological activity and carbon, nutrient, iron, and ligand concentrations from the ocean model under different circulation strengths, comparing scenarios where ligands were variable or constant across the various boxes.

This revealed a new feedback: The weaker the ocean’s circulation, the less carbon and nutrients the ocean pulls up from the deep. Any phytoplankton at the surface would then have fewer resources to grow and would produce fewer byproducts (including ligands) as a result. With fewer ligands available, less iron at the surface would be usable, further reducing the phytoplankton population. There would then be fewer phytoplankton available to absorb carbon dioxide from the atmosphere and consume upwelled carbon from the deep ocean.
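That chain of cause and effect can be captured in a toy calculation. The sketch below is not Lauderdale’s published model; it is a single hypothetical surface “box” in which upwelled nutrients scale with circulation strength, phytoplankton growth requires ligand-bound iron, and ligands are themselves a byproduct of growth. Every number is invented, and only the direction of the feedback is meaningful.

def surface_productivity(circulation, variable_ligands=True, steps=50):
    """Iterate the nutrient-iron-ligand loop to a steady state."""
    nutrients = circulation * 1.0      # upwelled nutrient supply scales with overturning
    ligands = 1.0                      # starting ligand concentration
    growth = 0.0
    for _ in range(steps):
        iron_available = min(1.0, ligands)   # iron is only usable when ligand-bound
        growth = nutrients * iron_available  # growth co-limited by nutrients and iron
        if variable_ligands:
            ligands = 0.5 + 0.5 * growth     # ligand supply tracks biological byproducts
    return growth

for circ in (1.0, 0.7, 0.4):   # progressively weaker overturning circulation
    g_var = surface_productivity(circ, variable_ligands=True)
    g_const = surface_productivity(circ, variable_ligands=False)
    print(f"circulation {circ:.1f}: growth {g_var:.2f} with variable ligands, "
          f"{g_const:.2f} with constant ligands")

With the ligand feedback switched on, weakening the circulation cuts productivity by more than the reduced nutrient supply alone would suggest; in the full model, that extra loss of productivity is what tips the carbon balance toward net outgassing to the atmosphere.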

“My work shows that we need to look more carefully at how ocean biology can affect the climate,” Lauderdale points out. “Some climate models predict a 30 percent slowdown in the ocean circulation due to melting ice sheets, particularly around Antarctica. This huge slowdown in overturning circulation could actually be a big problem: In addition to a host of other climate issues, not only would the ocean take up less anthropogenic CO2 from the atmosphere, but that could be amplified by a net outgassing of deep ocean carbon, leading to an unanticipated increase in atmospheric CO2 and unexpected further climate warming.” 


MIT researchers introduce generative AI for databases

This new tool offers an easier way for people to analyze complex tabular data.


A new tool makes it easier for database users to perform complicated statistical analyses of tabular data without the need to know what is going on behind the scenes.

GenSQL, a generative AI system for databases, could help users make predictions, detect anomalies, guess missing values, fix errors, or generate synthetic data with just a few keystrokes.

For instance, if the system were used to analyze medical data from a patient who has always had high blood pressure, it could catch a blood pressure reading that is low for that particular patient but would otherwise be in the normal range.

GenSQL automatically integrates a tabular dataset and a generative probabilistic AI model, which can account for uncertainty and adjust its decision-making based on new data.

Moreover, GenSQL can be used to produce and analyze synthetic data that mimic the real data in a database. This could be especially useful in situations where sensitive data cannot be shared, such as patient health records, or when real data are sparse.

This new tool is built on top of SQL, a programming language for database creation and manipulation that was introduced in the late 1970s and is used by millions of developers worldwide.

“Historically, SQL taught the business world what a computer could do. They didn’t have to write custom programs, they just had to ask questions of a database in high-level language. We think that, when we move from just querying data to asking questions of models and data, we are going to need an analogous language that teaches people the coherent questions you can ask a computer that has a probabilistic model of the data,” says Vikash Mansinghka ’05, MEng ’09, PhD ’09, senior author of a paper introducing GenSQL and a principal research scientist and leader of the Probabilistic Computing Project in the MIT Department of Brain and Cognitive Sciences.

When the researchers compared GenSQL to popular, AI-based approaches for data analysis, they found that it was not only faster but also produced more accurate results. Importantly, the probabilistic models used by GenSQL are explainable, so users can read and edit them.

“Looking at the data and trying to find some meaningful patterns by just using some simple statistical rules might miss important interactions. You really want to capture the correlations and the dependencies of the variables, which can be quite complicated, in a model. With GenSQL, we want to enable a large set of users to query their data and their model without having to know all the details,” adds lead author Mathieu Huot, a research scientist in the Department of Brain and Cognitive Sciences and member of the Probabilistic Computing Project.

They are joined on the paper by Matin Ghavami and Alexander Lew, MIT graduate students; Cameron Freer, a research scientist; Ulrich Schaechtle and Zane Shelby of Digital Garage; Martin Rinard, an MIT professor in the Department of Electrical Engineering and Computer Science and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Feras Saad ’15, MEng ’16, PhD ’22, an assistant professor at Carnegie Mellon University. The research was recently presented at the ACM Conference on Programming Language Design and Implementation.

Combining models and databases

SQL, which stands for structured query language, is a programming language for storing and manipulating information in a database. In SQL, people can ask questions about data using keywords, such as by summing, filtering, or grouping database records.
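For readers who haven’t seen it, a conventional query of that kind looks like the following, run here through Python’s built-in sqlite3 module against an invented table of developers:

import sqlite3

# Build a throwaway in-memory database with a made-up table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE developers (city TEXT, language TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO developers VALUES (?, ?, ?)",
    [("Seattle", "Rust", 150000.0),
     ("Seattle", "Go", 140000.0),
     ("Boston", "Rust", 135000.0)],
)

# Filtering, aggregating, and grouping records: average salary
# per city, restricted to Rust developers.
for city, avg_salary in conn.execute(
    "SELECT city, AVG(salary) FROM developers "
    "WHERE language = 'Rust' GROUP BY city"
):
    print(city, avg_salary)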

However, querying a model can provide deeper insights, since models can capture what data imply for an individual. For instance, a female developer who wonders if she is underpaid is likely more interested in what salary data mean for her individually than in trends from database records.

The researchers noticed that SQL didn’t provide an effective way to incorporate probabilistic AI models, but at the same time, approaches that use probabilistic models to make inferences didn’t support complex database queries.

They built GenSQL to fill this gap, enabling someone to query both a dataset and a probabilistic model using a straightforward yet powerful formal programming language.

A GenSQL user uploads their data and probabilistic model, which the system automatically integrates. Then, she can run queries on data that also get input from the probabilistic model running behind the scenes. This not only enables more complex queries but can also provide more accurate answers.

For instance, a query in GenSQL might be something like, “How likely is it that a developer from Seattle knows the programming language Rust?” Just looking at a correlation between columns in a database might miss subtle dependencies. Incorporating a probabilistic model can capture more complex interactions.   
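GenSQL poses such questions in its own SQL-like syntax, which the paper defines; without reproducing that syntax here, a small Python sketch can show the conceptual difference between querying raw rows and querying a model. The “model” below is just the raw frequency pulled toward the dataset-wide rate, a deliberately crude stand-in for the learned probabilistic models GenSQL actually uses:

# Invented toy data: (city, knows Rust?)
rows = [("Seattle", True), ("Seattle", False), ("Seattle", True),
        ("Boston", False), ("Boston", True), ("Austin", False)]

seattle = [knows_rust for city, knows_rust in rows if city == "Seattle"]
raw_estimate = sum(seattle) / len(seattle)   # plain frequency from the data

# A crude "model": pool the sparse Seattle evidence with the overall rate.
overall_rate = sum(k for _, k in rows) / len(rows)
alpha = 2.0   # prior strength: how hard small samples are pulled toward the overall rate
model_estimate = (sum(seattle) + alpha * overall_rate) / (len(seattle) + alpha)

print(f"raw frequency:  P(knows Rust | Seattle) = {raw_estimate:.2f}")
print(f"model estimate: P(knows Rust | Seattle) = {model_estimate:.2f}")

With only three Seattle rows, the raw frequency of 0.67 overstates the evidence; the smoothed estimate of 0.60 reflects how little the data actually say, the kind of uncertainty-aware answer a probabilistic model can supply.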

Plus, the probabilistic models GenSQL utilizes are auditable, so people can see which data the model uses for decision-making. In addition, these models provide measures of calibrated uncertainty along with each answer.

For instance, with this calibrated uncertainty, if one queries the model for predicted outcomes of different cancer treatments for a patient from a minority group that is underrepresented in the dataset, GenSQL would tell the user that it is uncertain, and how uncertain it is, rather than overconfidently advocating for the wrong treatment.

Faster and more accurate results

To evaluate GenSQL, the researchers compared their system to popular baseline methods that use neural networks. GenSQL was between 1.7 and 6.8 times faster than these approaches, executing most queries in a few milliseconds while providing more accurate results.

They also applied GenSQL in two case studies: one in which the system identified mislabeled clinical trial data and the other in which it generated accurate synthetic data that captured complex relationships in genomics.

Next, the researchers want to apply GenSQL more broadly to conduct large-scale modeling of human populations. With GenSQL, they can generate synthetic data to draw inferences about things like health and salary while controlling what information is used in the analysis.

They also want to make GenSQL easier to use and more powerful by adding new optimizations and automation to the system. In the long run, the researchers want to enable users to make natural language queries in GenSQL. Their goal is to eventually develop a ChatGPT-like AI expert one could talk to about any database, which grounds its answers using GenSQL queries.   

This research is funded, in part, by the Defense Advanced Research Projects Agency (DARPA), Google, and the Siegel Family Foundation.


MIT engineers find a way to protect microbes from extreme conditions

By helping microbes withstand industrial processing, the method could make it easier to harness the benefits of microorganisms used as medicines and in agriculture.


Microbes that are used for health, agricultural, or other applications need to withstand extreme conditions, including, ideally, the manufacturing processes used to turn them into tablets for long-term storage. MIT researchers have now developed a new way to make microbes hardy enough to survive these extremes.

Their method involves mixing bacteria with food and drug additives from a list of compounds that the FDA classifies as “generally recognized as safe.” The researchers identified formulations that help to stabilize several different types of microbes, including yeast and bacteria, and they showed that these formulations allowed the microbes to withstand high temperatures, radiation, and industrial processing that can damage unprotected cells.

In an even more extreme test, some of the microbes recently returned from a trip to the International Space Station, coordinated by Phyllis Friello, manager of science and research at Space Center Houston. The researchers are now analyzing how well the microbes were able to withstand those conditions.

“What this project was about is stabilizing organisms for extreme conditions. We're thinking about a broad set of applications, whether it's missions to space, human applications, or agricultural uses,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

Miguel Jimenez, a former MIT research scientist who is now an assistant professor of biomedical engineering at Boston University, is the lead author of the paper, which appears today in Nature Materials.

Surviving extreme conditions

About six years ago, with funding from NASA’s Translational Research Institute for Space Health (TRISH), Traverso’s lab began working on new approaches to make helpful bacteria such as probiotics and microbial therapeutics more resilient. As a starting point, the researchers analyzed 13 commercially available probiotics and found that six of these products did not contain as many live bacteria as the label indicated.

“What we found was that, perhaps not surprisingly, there is a difference, and it can be significant,” Traverso says. “So then the next question was, given this, what can we do to help the situation?”

For their experiments, the researchers chose four different microbes to focus on: three bacteria and one yeast. These microbes are Escherichia coli Nissle 1917, a probiotic; Ensifer meliloti, a bacterium that can fix nitrogen in soil to support plant growth; Lactobacillus plantarum, a bacterium used to ferment food products; and the yeast Saccharomyces boulardii, which is also used as a probiotic.

When microbes are used for medical or agricultural applications, they are usually dried into a powder through a process called lyophilization. However, they cannot normally be made into more useful forms, such as a tablet or pill, because that processing requires exposure to an organic solvent, which can be toxic to the bacteria. The MIT team set out to find additives that could improve the microbes’ ability to survive this kind of processing.

“We developed a workflow where we can take materials from the ‘generally regarded as safe’ materials list from the FDA, and mix and match those with bacteria and ask, are there ingredients that enhance the stability of the bacteria during the lyophilization process?” Traverso says.

Their setup allows them to mix microbes with one of about 100 different ingredients and then, after storing the mixtures at room temperature for 30 days, grow the microbes to see which ingredients preserved them best. These experiments revealed different ingredients, mostly sugars and peptides, that worked best for each species of microbe.

The researchers then picked one of the microbes, E. coli Nissle 1917, for further optimization. This probiotic has been used to treat “traveler’s diarrhea,” a condition caused by drinking water contaminated with harmful bacteria. The researchers found that if they combined caffeine or yeast extract with a sugar called melibiose, they could create a very stable formulation of E. coli Nissle 1917. This mixture, which the researchers called formulation D, allowed survival rates greater than 10 percent after the microbes were stored for six months at 37 degrees Celsius, while a commercially available formulation of E. coli Nissle 1917 lost all viability after only 11 days under those conditions.

Formulation D was also able to withstand much higher levels of ionizing radiation, up to 1,000 grays. (The typical radiation dose on Earth is about 15 micrograys per day, and in space, it’s about 200 micrograys per day.)
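A back-of-the-envelope calculation with the doses quoted above shows just how large that margin is. This is a rough scale check that ignores dose-rate effects:

# Doses from the article, in grays (Gy).
dose_survived_gy = 1000.0        # formulation D withstood up to 1,000 grays
space_dose_gy_per_day = 200e-6   # ~200 micrograys per day in space
earth_dose_gy_per_day = 15e-6    # ~15 micrograys per day on Earth

days_in_space = dose_survived_gy / space_dose_gy_per_day
print(f"{days_in_space:,.0f} days, about {days_in_space / 365:,.0f} years of space exposure")
# -> 5,000,000 days, about 13,699 years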

The researchers don’t know exactly how their formulations protect bacteria, but they hypothesize that the additives may help to stabilize the bacterial cell membranes during rehydration.

Stress tests

The researchers then showed that these microbes not only survive harsh conditions but also maintain their function after these exposures. After Ensifer meliloti were exposed to temperatures up to 50 degrees Celsius, the researchers found that the bacteria were still able to form symbiotic nodules on plant roots and convert nitrogen to ammonia.

They also found that their formulation of E. coli Nissle 1917 was able to inhibit the growth of Shigella flexneri, one of the leading causes of diarrhea-associated deaths in low- and middle-income countries, when the microbes were grown together in a lab dish.

Last year, several strains of these extremophile microbes were sent to the International Space Station, which Jimenez describes as “the ultimate stress test.”

“Even just the shipping on Earth to the preflight validation, and storage until flight are part of this test, with no temperature control along the way,” he says.

The samples recently returned to Earth, and Jimenez’s lab is now analyzing them. He plans to compare samples that were kept inside the ISS to others that were bolted to the outside of the station, as well as control samples that remained on Earth.

“This work offers a promising approach to enhance the stability of probiotics and/or genetically engineered microbes in extreme environments, such as in outer space, which could be used in future space missions to help maintain astronaut health or promote sustainability, such as in promoting more robust and resilient plants for food production,” says Camilla Urbaniak, a research scientist at NASA’s Jet Propulsion Laboratory, who was not involved in the study.

The research was funded by NASA’s Translational Research Institute for Space Health, Space Center Houston, MIT’s Department of Mechanical Engineering, the 711th Human Performance Wing, and the Defense Advanced Research Projects Agency.

Other authors of the paper include Johanna L’Heureux, Emily Kolaya, Gary Liu, Kyle Martin, Husna Ellis, Alfred Dao, Margaret Yang, Zachary Villaverde, Afeefah Khazi-Syed, Qinhao Cao, Niora Fabian, Joshua Jenkins, Nina Fitzgerald, Christina Karavasili, Benjamin Muller, and James Byrne.


Studying astrophysically relevant plasma physics

Thomas Varnish has always loved a hands-on approach to science. Research in lab-based astrophysics has enabled the PhD student to experiment in a heavily theoretical subject.


Thomas Varnish loves his hobbies — knitting, baking, pottery — it’s a long list. His latest interest is analog film photography. A picture with his mother and another with his boyfriend are two of Varnish’s favorites. “These moments of human connection are the ones I like,” he says.

Varnish’s love of capturing a fleeting moment on film translates to his research when he conducts laser interferometry on plasmas using off-the-shelf cameras. At the Department of Nuclear Science and Engineering, the third-year doctoral student studies various facets of astrophysically relevant fundamental plasma physics under the supervision of Professor Jack Hare.

It’s an area of research that Varnish arrived at organically.

A childhood fueled by science

Growing up in Warwickshire, England, Varnish fell in love with lab experiments as a middle-schooler after joining the science club. He remembers graduating from the classic egg-drop experiment to tracking the trajectory of a catapult, and eventually building his own model electromagnetic launch system: a set of electromagnets and sensors spaced along a straight track that could accelerate magnets and shoot them out the end. Varnish demonstrated the system by using it to pop balloons. Later, in high school, he joined the robotics club and built a team of robots to compete in RoboCup, an international robot soccer competition. Varnish also joined the astronomy club, which helped seed an interest in the adjacent field of astrophysics.

Varnish moved on to Imperial College London to study physics as an undergraduate, but he was still shopping around for definitive research interests. Always a hands-on science student, Varnish decided to give astronomy instrumentation a whirl during a summer school session in Canada.

However, even this discipline didn’t quite seem to stick until he came upon a lab at Imperial conducting research in experimental astrophysics. Called MAGPIE (The Mega Ampere Generator for Plasma Implosion Experiments), the facility merged two of Varnish’s greatest loves: hands-on experiments and astrophysics. Varnish eventually completed an undergraduate research opportunity (UROP) project at MAGPIE under the guidance of Hare, his current advisor, who was then a postdoc in the lab.

Part of Varnish’s research for his master’s degree at Imperial involved stitching together observations from the retired Herschel Space Telescope to create the deepest far-infrared image ever made by the instrument. The research also used statistical techniques to understand the patterns of brightness distribution in the images and to trace them to specific combinations of galaxy occurrences. By studying patterns in the brightness of a patch of dark sky, Varnish could discern the population of galaxies in the region.

Move to MIT

Varnish followed Hare (and a dream of studying astrophysics) to MIT, where he primarily focuses on plasma in the context of astrophysical environments. He studies experimental pulsed-power-driven magnetic reconnection in the presence of a guide field.

Key to Varnish’s experiments is a pulsed-power facility, which is essentially a large capacitor capable of releasing a significant surge of current. The electricity passes through (and vaporizes) thin wires in a vacuum chamber to create a plasma. At MIT, the facility currently being built at the Plasma Science and Fusion Center (PSFC) by Hare’s group is called PUFFIN (PUlser For Fundamental (Plasma Physics) INvestigations).

In a pulsed-power facility, tiny cylindrical arrays of extremely thin metal wires usually generate the plasma. Varnish’s experiments use an array in which graphite leads, the kind used in mechanical pencils, replace the wires. “Doing so gives us the right kind of plasma with the right kind of properties we’d like to study,” Varnish says. The solution is also easy to work with and “not as fiddly as some other methods.” A thicker post in the middle completes the array. A pulsed current traveling down the array vaporizes the thin wires into a plasma. The interactions between the current flowing through the plasma and the generated magnetic field push the plasma radially outward. “Each little array is like a little exploding bubble of magnetized plasma,” Varnish says. He studies the interaction between the plasma flows at the center of two adjacent arrays.

Studying plasma behavior

The plasma generated in these pulsed-power experiments is stable for only a few hundred nanoseconds, so diagnostics have to work within an extremely short sampling window. Laser interferometry, which images plasma density, is Varnish’s favorite. In this technique, a laser beam is split into two arms, one of which passes through the plasma while the other bypasses it. When the two arms are recombined, the arm that encountered the plasma produces an interference pattern, and capturing that pattern with a camera allows researchers to infer the structure of the plasma flows.
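The physics behind the measurement can be sketched with the standard plasma-interferometry relation, in which the extra phase picked up by the probing arm is proportional to the electron density integrated along the beam path. The following is a generic textbook estimate with assumed example values, not the lab’s analysis code:

R_E = 2.818e-15   # classical electron radius, in meters

def phase_shift(line_density_m2, wavelength_m):
    """Phase shift (radians) from line-integrated electron density:
    delta_phi = r_e * wavelength * integral(n_e dl)."""
    return R_E * wavelength_m * line_density_m2

n_e = 1e24     # electron density, per cubic meter (assumed example value)
path = 0.01    # 1 cm path through the plasma (assumed example value)
phi = phase_shift(n_e * path, 532e-9)   # green probe laser
print(f"{phi:.1f} rad, about {phi / 6.283:.1f} interference fringes")

Counting fringes across the image therefore maps out the line-integrated density of the flow.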

Another diagnostic method involves placing tiny loops of metal wire in the plasma (called B-dots), which record how the magnetic field in the plasma changes in time. Yet another way to study plasma physics is to use a technique called Faraday rotation, which measures the twisting of polarized light as it passes through a magnetic field. The net result is an “image map of magnetic fields, which is really quite incredible,” Varnish says.
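Faraday rotation admits a similarly compact textbook estimate: the rotation angle grows with the wavelength squared and with the density-weighted magnetic field along the path, which is why pairing it with the interferometer’s density map lets researchers infer the field. Again, the values below are assumptions for illustration:

K = 2.63e-13   # SI proportionality constant for Faraday rotation in a plasma

def faraday_angle(wavelength_m, n_e_m3, b_parallel_t, path_m):
    """Polarization rotation (radians) through a uniform plasma slab:
    delta_theta = K * wavelength**2 * n_e * B_parallel * path."""
    return K * wavelength_m**2 * n_e_m3 * b_parallel_t * path_m

# Same assumed plasma as above, threaded by a 5-tesla field over 1 cm.
theta = faraday_angle(532e-9, 1e24, 5.0, 0.01)
print(f"{theta:.4f} rad of rotation")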

These diagnostic techniques help Varnish research magnetic reconnection, the process by which magnetic field lines in a plasma break and reconnect. It’s all about energy redistribution, Varnish says, and is particularly relevant because it drives solar flares. Varnish studies how having not-perfectly-opposite magnetic field lines might affect the reconnection process.

Most plasma behavior can be neatly explained by the principles of magnetohydrodynamics, but the phenomena observed in Varnish’s experiments need additional theories to explain them. Using pulsed power enables studies over longer length scales and time periods than in other experiments, such as laser-driven ones. Varnish is looking forward to working on simulations and follow-up experiments on PUFFIN to study these phenomena under slightly different conditions, which might shed new light on the processes.

At the moment, Varnish’s focus is on programming the control systems for PUFFIN so he can get it up and running. Part of the diagnostics system involves ensuring that the facility will deliver the plasma-inducing currents needed and perform as expected.

Aiding LGBTQ+ efforts

When not working on PUFFIN or his experiments, Varnish serves as co-lead of an LGBTQ+ affinity group at the PSFC, which he set up with a fellow doctoral student. The group offers a safe space for LGBTQ+ scientists and meets for lunch about once a month. “It’s been a nice bit of community building, and I think it’s important to support other LGBTQ+ scientists and make everyone feel welcome, even if it’s just in small ways,” Varnish says. “It has definitely helped me to feel more comfortable knowing there’s a handful of fellow LGBTQ+ scientists at the center.”

Through it all, Varnish keeps his hobbies going. One of his go-to bakes is a “rocky road,” a British chocolate bar that mixes chocolate, marshmallows, and graham crackers. His research interests, too, are a delicious concoction: “the intersection of plasma physics, laboratory astrophysics, astrophysics (the won’t-fit-in-a-lab kind), and instrumentation.”


Signal processing: How did we get to where we’re going?

In a retrospective talk spanning multiple decades, Professor Al Oppenheim looked back over the birth of digital signal processing and shared his thoughts on the future of the field.


On May 24, Ford Professor of Engineering Al Oppenheim addressed a standing-room-only audience at MIT to give the talk of a lifetime. Entitled “Signal Processing: How Did We Get to Where We’re Going?”, Oppenheim’s personal account of his involvement in the early years of the digital signal processing field included a photo retrospective — and some handheld historical artifacts — that showed just how far the field has come since its birth at MIT and Lincoln Laboratory. Hosted by Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science, the event included a lively Q&A, giving students the chance to gain Oppenheim’s insight about the trajectory of this ever-growing field.

Al Oppenheim received an ScD degree from MIT in 1964 and is also the recipient of an honorary doctorate from Tel Aviv University. During his career, he has been a member of the Research Laboratory of Electronics and closely affiliated with MIT Lincoln Laboratory and with the Woods Hole Oceanographic Institution. His research interests are in the general area of signal processing algorithms, systems, and applications. He is co-author of the widely used textbooks “Digital Signal Processing,” “Discrete-Time Signal Processing” (currently in its third edition), “Signals and Systems” (currently in its second edition), and most recently “Signals, Systems & Interference,” published in 2016. He is also the author of several video courses available online. He is editor of several advanced books on signal processing. Throughout his career he has published extensively in research journals and conference proceedings.

Oppenheim is a member of the National Academy of Engineering, an IEEE Life Fellow, and has been a Guggenheim Fellow in France and a Sackler Fellow in Israel. He has received a number of IEEE awards for outstanding research, teaching, and mentoring, including the IEEE Kilby Medal; the IEEE Education Medal; the IEEE Centennial Award; the IEEE Third Millennium Medal; the Norbert Wiener Society award; and the Society, Technical Achievement, and Senior Awards of the IEEE Society on Acoustics, Speech and Signal Processing; as well as a number of research, teaching, and mentoring awards at MIT.


Summer 2024 reading from MIT

MIT News rounds up recent titles from Institute faculty and staff.


MIT faculty and staff authors have published a plethora of books, chapters, and other literary contributions in the past year. The following titles represent some of their works published in the past 12 months. In addition to links for each book from its publisher, the MIT Libraries has compiled a helpful list of the titles held in its collections.

Looking for more literary works from the MIT community? Enjoy our book lists from 2023, 2022, and 2021.

Happy reading!

Novel, memoir, and poetry

“Seizing Control: Managing Epilepsy and Others’ Reactions to It — A Memoir” (Haley’s, 2023)
By Laura Beretsky, grant writer in the MIT Introduction to Technology, Engineering, and Science (MITES) program

Beretsky’s memoir, “Seizing Control,” details her journey with epilepsy, discrimination, and a major surgical procedure to reduce her seizures. After two surgical interventions, she has been seizure-free for eight years, though she notes she will always live with epilepsy.

“Sky. Pond. Mouth.” (Yas Press, 2024)
By Kevin McLellan, staff member in MIT’s Program in Art, Culture, and Technology

In this book of poetry, physical and emotional qualities free-range between the animate and inanimate as though the world is written with dotted lines. With chiseled line breaks, intriguing meta-poetic levels, and punctuation like seed pods, McLellan’s poems, if we look twice, might flourish outside the book’s margin, past the grow light of the screen, even (especially) other borderlines we haven’t begun to imagine.

Science and engineering

“The Visual Elements: Handbooks for Communicating Science and Engineering” (University of Chicago Press, 2023 and 2024)
By Felice Frankel, research scientist in chemical engineering

Each of the two books in the “Visual Elements” series focuses on a different aspect of scientific visual communication: photography on one hand and design on the other. Their unifying goal is to provide guidance for scientists and engineers who must communicate their work to the public, for grant applications, journal submissions, conference or poster presentations, and funding agencies. The books show researchers the importance of presenting their work in clear, concise, and appealing ways that also maintain scientific integrity.

“A Book of Waves” (Duke University Press, 2023)
By Stefan Helmreich, professor of anthropology

In this book, Helmreich examines ocean waves as forms of media that carry ecological, geopolitical, and climatological news about our planet. Drawing on ethnographic work with oceanographers and coastal engineers in the Netherlands, the United States, Australia, Japan, and Bangladesh, he details how scientists at sea and in the lab apprehend waves’ materiality through abstractions, seeking to capture in technical language these avatars of nature at once periodic and irreversible, wild and pacific, ephemeral and eternal.

“An Introduction to System Safety Engineering” (MIT Press, 2023)
By Nancy G. Leveson, professor of aeronautics and astronautics

Preventing accidents and losses in complex systems requires a holistic perspective that can accommodate unprecedented types of technology and design. Leveson’s book covers the history of safety engineering; explores risk, ethics, legal frameworks, and policy implications; and explains why accidents happen and how to mitigate risks in modern, software-intensive systems. It includes accounts of well-known accidents like the Challenger and Columbia space shuttle disasters, Deepwater Horizon oil spill, and Chernobyl and Fukushima nuclear accidents, examining their causes and how to prevent similar incidents in the future.

“Solvable: How We Healed the Earth, and How We Can Do It Again” (University of Chicago Press, 2024)
By Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry

We solved planet-threatening problems before, Solomon argues, and we can do it again. She knows firsthand what those solutions entail, as she gained international fame as the leader of a 1986 expedition to Antarctica, making discoveries that were key to healing the damaged ozone layer. She saw a path from scientific and public awareness to political engagement, international agreement, industry involvement, and effective action. Solomon connects this triumph to the stories of other past environmental victories — against ozone depletion, smog, pesticides, and lead — to extract the essential elements of what makes change possible.

Culture, humanities, and social sciences

“Political Rumors: Why We Accept Misinformation and How to Fight It” (Princeton University Press, 2023)
By Adam Berinsky, professor of political science

Political rumors pollute the political landscape. But if misinformation crowds out the truth, how can democracy survive? Berinsky examines why political rumors exist and persist despite their unsubstantiated and refuted claims, who is most likely to believe them, and how to combat them. He shows that a tendency toward conspiratorial thinking and vehement partisan attachment fuel belief in rumors. Moreover, in fighting misinformation, it is as important to target the undecided and the uncertain as it is the true believers.

“Laws of the Land: Fengshui and the State in Qing Dynasty China” (Princeton University Press, 2023)
By Tristan Brown, assistant professor of history

In “Laws of the Land,” Brown tells the story of the important roles — especially legal ones — played by fengshui in Chinese society during China’s last imperial dynasty, the Manchu Qing (1644–1912). Employing archives from Mainland China and Taiwan that have only recently become available, this is the first book to document fengshui’s invocations in Chinese law during the Qing dynasty.

“Trouble with Gender: Sex Facts, Gender Fictions” (Polity, 2024)
By Alex Byrne, professor of philosophy

MIT philosopher Alex Byrne knows that within his field, he’s very much in the minority when it comes to his views on sex and gender. In “Trouble with Gender,” Byrne suggests that some ideas regarding sex and gender have not been properly examined by philosophers, and he argues for a reasoned and civil conversation on the topic.

“Life at the Center: Haitians and Corporate Catholicism in Boston” (University of California Press, 2024)
By Erica Caple James, professor of medical anthropology and urban studies

In “Life at the Center,” James traces how faith-based and secular institutions in Boston have helped Haitian refugees and immigrants attain economic independence, health, security, and citizenship in the United States. The culmination of more than a decade of advocacy and research on behalf of the Haitians in Boston, this groundbreaking work exposes how Catholic corporations have strengthened — but also eroded — Haitians’ civic power.

“Portable Postsocialisms: New Cuban Mediascapes after the End of History” (University of Texas Press, 2024)
By Paloma Duong, associate professor of media studies/writing

Why does Cuban socialism endure as an object of international political desire, while images of capitalist markets consume Cuba’s national imagination? “Portable Postsocialisms” calls on a vast multimedia archive to offer a groundbreaking cultural interpretation of Cuban postsocialism. Duong examines songs, artworks, advertisements, memes, literature, jokes, and networks that refuse exceptionalist and exoticizing visions of Cuba.

“They All Made Peace — What Is Peace?” (University of Chicago Press, 2023)
Chapter by Lerna Ekmekcioglu, professor of history and director of the Program in Women’s and Gender Studies

In her chapter, Ekmekcioglu contends that the Treaty of Lausanne, which followed the First World War, is an often-overlooked event of great historical significance for Armenians. The treaty became the “birth certificate” of modern Turkey, but there was no redress for Armenians. The chapter uses new research to reconstruct the dynamics of the treaty negotiations, illuminating both Armenians’ struggles and the international community’s failure to deliver consistent support for multiethnic, multireligious states.

“We’ve Got You Covered: Rebooting American Health Care” (Portfolio, 2023)
By Amy Finkelstein, professor of economics, and Liran Einav

Few of us need convincing that the American health insurance system needs reform. But many existing proposals miss the point, focusing on expanding one relatively successful piece of the system or building in piecemeal additions. As Finkelstein and Einav point out, our health care system was never deliberately designed, but rather pieced together to deal with issues as they became politically relevant. The result is a sprawling, arbitrary, and inadequate mess that has left 30 million Americans without formal insurance. It’s time, the authors argue, to tear it all down and rebuild, sensibly and deliberately.

“At the Pivot of East and West: Ethnographic, Literary and Filmic Arts” (Duke University Press, 2023)
By Michael M.J. Fischer, professor of anthropology and of science and technology studies

In his latest book, Fischer examines documentary filmmaking and literature from Southeast Asia and Singapore for their para-ethnographic insights into politics, culture, and aesthetics. Continuing his project of applying anthropological thinking to the creative arts, Fischer exemplifies how art and fiction trace the ways in which taken-for-granted common sense changes over time, speak to the transnational present, and track signals of the future before they surface in public awareness.

“Lines Drawn across the Globe” (McGill-Queen's University Press, 2023)
By Mary Fuller, professor of literature and chair of the faculty

Around 1600, English geographer and cleric Richard Hakluyt published a 2,000-page collection of travel narratives, royal letters, ships’ logs, maps, and more from over 200 voyages. In "Lines Drawn across the Globe," Fuller traces the history of the book’s compilation and gives order and meaning to its diverse contents. From Sierra Leone to Iceland, from Spanish narratives of New Mexico to French accounts of the Saint Lawrence and Portuguese accounts of China, Hakluyt’s shaping of the book provides a conceptual map of the world’s regions and of England’s real and imagined relations to them.

“The Rise and Fall of the EAST: How Exams, Autocracy, Stability, and Technology Brought China Success, and Why They Might Lead to Its Decline” (Yale University Press, 2023)
By Yasheng Huang, the Epoch Foundation Professor of International Management and professor of global economics and management

According to Huang, the world is seeing a repeat of Chinese history during which restrictions on economic and political freedom created economic stagnation. The bottom line: “Without academic collaboration, without business collaboration, without technological collaborations, the pace of Chinese technological progress is going to slow down dramatically.”

“The Long First Millennium: Affluence, Architecture, and Its Dark Matter Economy” (Routledge, 2023)
By Mark Jarzombek, professor of the history and theory of architecture

Jarzombek’s book argues that long-distance trade in luxury items — such as diamonds, gold, cinnamon, scented woods, ivory, and pearls, all of which require little overhead in their acquisition and were relatively easy to transport — played a foundational role in the creation of what we would call “global trade” in the first millennium CE. The book coins the term “dark matter economy” to better describe this complex — though mostly invisible — relationship to normative realities. “The Long First Millennium” will appeal to students, scholars, and anyone interested in the effect of trade on medieval society.

“World Literature in the Soviet Union” (Academic Studies Press, 2023)
Chapter by Maria Khotimsky, senior lecturer in Russian

Khotimsky’s chapter, “The Treasure Trove of World Literature: Shaping the Concept of World Literature in Post-Revolutionary Russia,” examines Vsemirnaia Literatura (World Literature), an early Soviet publishing house founded in 1919 in Petersburg that advanced an innovative canon of world literature beyond the European tradition. It analyzes the publishing house’s views on translation, focusing on book prefaces that reveal a search for a new evaluative system, an adaptation to changing socio-cultural norms, and a reassessment of the roles of readers, critics, and the very endeavor of translation.

“Dare to Invent the Future: Knowledge in the Service of and Through Problem-Solving” (MIT Press, 2023)
By Clapperton Chakanetsa Mavhunga, professor of science, technology, and society

In this provocative book — the first in a trilogy — Chakanetsa Mavhunga argues that our critical thinkers must become actual thinker-doers. Taking its title from one of Thomas Sankara’s most inspirational speeches, “Dare to Invent the Future” looks for moments in Africa’s story where precedents of critical thought and knowledge in service of problem-solving are evident to inspire readers to dare to invent such a knowledge system.

“Death, Dominance, and State-Building: The US in Iraq and the Future of American Military Intervention” (Oxford University Press, 2024)
By Roger Petersen, the Arthur and Ruth Sloan Professor of Political Science

“Death, Dominance, and State-Building” provides the first comprehensive analytic history of post-invasion Iraq. Although the war is almost universally derided as one of the biggest foreign policy blunders of the post-Cold War era, Petersen argues that the course and conduct of the conflict is poorly understood. The book applies an accessible framework to a variety of case studies across time and region. It concludes by drawing lessons relevant to future American military interventions.

Technology, systems, and society

“Code Work: Hacking Across the U.S./México Techno-Borderlands” (Princeton University Press, 2023)
By Héctor Beltrán, assistant professor of anthropology

In this book, Beltrán examines Mexican and Latinx coders’ personal strategies of self-making as they navigate a transnational economy of tech work. Beltrán shows how these hackers apply concepts from the coding world to their lived experiences, deploying batches, loose coupling, iterative processing (looping), hacking, prototyping, and full-stack development in their daily social interactions — at home, in the workplace, on the dating scene, and in their understanding of the economy, culture, and geopolitics.

“Unmasking AI: My Mission to Protect What is Human in a World of Machines” (Penguin Random House, 2023)
By Joy Buolamwini SM ’17, PhD ’22, member of the Media Lab Director’s Circle

To many it may seem like recent developments in artificial intelligence emerged out of nowhere to pose unprecedented threats to humankind. But to Buolamwini, this moment has been a long time in the making. “Unmasking AI” is the remarkable story of how Buolamwini uncovered what she calls “the coded gaze” — evidence of encoded discrimination and exclusion in tech products. She shows how racism, sexism, colorism, and ableism can overlap and render broad swaths of humanity “excoded” and therefore vulnerable in a world rapidly adopting AI tools.

“Counting Feminicide: Data Feminism in Action” (MIT Press, 2024)
By Catherine D’Ignazio, associate professor of urban science and planning

“Counting Feminicide” brings to the fore the work of data activists across the Americas who are documenting feminicide, and challenging the reigning logic of data science by centering care, memory, and justice in their work. D’Ignazio describes the creative, intellectual, and emotional labor of feminicide data activists who are at the forefront of a data ethics that rigorously and consistently takes power and people into account.

“Rethinking Cyber Warfare: The International Relations of Digital Disruption” (Oxford University Press, 2024)
By R. David Edelman, research fellow at the MIT Center for International Studies

Fifteen years into the era of “cyber warfare,” are we any closer to understanding the role a major cyberattack would play in international relations — or to preventing one? Uniquely spanning disciplines and enriched by the insights of a leading practitioner, Edelman provides a fresh understanding of the role that digital disruption plays in contemporary international security.

“Model Thinking for Everyday Life: How to Make Smarter Decisions” (INFORMS, 2023)
By Richard Larson, professor post-tenure in the Institute for Data, Systems, and Society

Decisions are a part of everyday life, whether simple or complex. It’s all too easy to jump to Google for the answers, but where does that take us? We’re losing the ability to think critically and decide for ourselves. In this book, Larson asks readers to undertake a major mind shift in our everyday thought processes. Model thinking develops our critical thinking skills, using a framework of conceptual and mathematical tools to help guide us to full comprehension, and better decisions.

“Future[tectonics]: Exploring the intersection between technology, architecture and urbanism” (Parametric Architecture, 2024)
Chapter by Jacob Lehrer, project coordinator in the Department of Mathematics

In his chapter, “Garbage In, Garbage Out: How Language Models Can Reinforce Biases,” Lehrer discusses how inherent bias is baked into large data sets, like those used to train massive AI algorithms, and how society will need to reckon with the biases built into systems of power. He also attempts to reckon with them himself, delving into the mathematics behind these systems.

“Music and Mind: Harnessing the Arts for Health and Wellness” (Penguin Random House, 2024)
Chapter by Tod Machover, the Muriel R. Cooper Professor of Music and Media; Rébecca Kleinberger SM ’14, PhD ’20; and Alexandra Rieger SM ’18, doctoral candidate in media arts and sciences

In their chapter, “Composing the Future of Health,” the co-authors discuss their approach to combining scientific research, technology innovation, and new composing strategies to create evidence-based, emotionally potent music that can delight and heal.

“The Heart and the Chip: Our Bright Future with Robots” (W. W. Norton and Company, 2024)
By Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of the Computer Science and Artificial Intelligence Laboratory; and Gregory Mone

In “The Heart and the Chip,” Rus and Mone provide an overview of the interconnected fields of robotics, artificial intelligence, and machine learning, and reframe the way we think about intelligent machines while weighing the moral and ethical consequences of their role in society. Robots aren’t going to steal our jobs, they argue; they’re going to make us more capable, productive, and precise.

Education, business, finance, and social impact

“Disciplined Entrepreneurship Startup Tactics: 15 Tactics to Turn Your Business Plan Into a Business” (Wiley, 2024)
By Paul Cheek, executive director and entrepreneur in residence at the Martin Trust Center for MIT Entrepreneurship and senior lecturer in the MIT Sloan School of Management, with foreword by Bill Aulet, professor of the practice of entrepreneurship at MIT Sloan and managing director of the Martin Trust Center

Cheek provides a hands-on, practical roadmap to get from great idea to successful company with his actionable field guide to transforming your one great idea into a functional, funded, and staffed startup. Readers will find ground-level, down-and-dirty entrepreneurial tactics — like how to conduct advanced primary market research, market and sell to your first customers, and take a scrappy approach to building your first products — that keep young firms growing. These tactics maximize impact with limited resources.

“Organic Social Media: How to Build Flourishing Online Communities” (KoganPage, 2023)
By Jenny Li Fowler, director of social media strategy in the Institute Office of Communications

In “Organic Social Media,” Fowler outlines the important steps that social media managers need to take to enhance an organization's broader growth objectives. Fowler breaks down the key questions to help readers determine the best platforms to invest in, how they can streamline approval processes, and other essential strategic steps to create an organic following on social platforms.

“From Intention to Impact: A Practical Guide to Diversity, Equity, and Inclusion” (MIT Press, 2024)
By Malia Lazu, lecturer in the MIT Sloan School of Management

In her new book, Lazu draws on her background as a community organizer, her corporate career as a bank president, and now her experience as a leading consultant to explain what has been holding organizations back and what they can do to become more inclusive and equitable. “From Intention to Impact” goes beyond “feel good” PR-centric actions to showcase the real work that must be done to create true and lasting change.

“The AFIRE Guide to U.S. Real Estate Investing” (Afire and McGraw Hill, 2024)
Chapter by Jacques Gordon, lecturer in the MIT Center for Real Estate

In his chapter, “The Broker and the Investment Advisor: A wide range of options,” Gordon discusses important financial topics including information for lenders and borrowers, joint ventures, loans and debt, comingled funds, bankruptcy, and Islamic finance.

“The Geek Way: The Radical Mindset That Drives Extraordinary Results” (Hachette Book Group, 2023)
By Andrew McAfee, principal research scientist and co-director of the MIT Initiative on the Digital Economy

The geek way of management delivers excellent performance while offering employees a work environment that features high levels of autonomy and empowerment. In what Eric Schmidt calls a “handbook for disruptors,” “The Geek Way” reveals a new way to get big things done. It will change the way readers think about work, teams, projects, and culture, and give them the insight and tools to harness our human superpowers of learning and cooperation.

“Iterate: The Secret to Innovation in Schools” (Teaching Systems Lab, 2023)
By Justin Reich, associate professor in comparative media studies/writing

In “Iterate,” Reich delivers an insightful bridge between contemporary educational research and classroom teaching, showing readers how to leverage the cycle of experiment and experience to create a compelling and engaging learning environment. Readers learn how to employ a process of continuous improvement and tinkering to develop exciting new programs, activities, processes, and designs.

“red helicopter — a parable for our times: lead change with kindness (plus a little math)” (HarperCollins, 2024)
By James Rhee, senior lecturer in the MIT Sloan School of Management

Is it possible to be successful and kind? To lead a company or organization with precision and compassion? To honor who we are in all areas of our lives? While eloquently sharing a story of personal and professional success, Rhee presents a comforting yet bold solution to the dissatisfaction and worry we all feel in a chaotic and sometimes terrifying world.

“Routes to Reform: Education Politics in Latin America” (Oxford University Press, 2024)
By Ben Ross Schneider, the Ford International Professor of Political Science and faculty director of the MIT-Chile Program and MISTI Chile

In “Routes to Reform,” Ben Ross Schneider examines education policy throughout Latin America to show that reforms to improve learning — especially making teacher careers more meritocratic and less political — are possible. He demonstrates that contrary to much established theory, reform outcomes in Latin America depended less on institutions and broad coalitions, and more on micro-level factors like civil society organizations, teacher unions, policy networks, and technocrats.

“Wiring the Winning Organization: Liberating Our Collective Greatness through Slowification, Simplification, and Amplification” (IT Revolution, 2023)
By Steven J. Spear, senior lecturer in system dynamics at the MIT Sloan School of Management, and Gene Kim

Organizations succeed when they design their processes, routines, and procedures to encourage employees to problem-solve and contribute to a common purpose. DevOps, Lean, and Agile got us part of the way. Now with “Wiring the Winning Organization,” Spear and Kim introduce a new theory of organizational management: Organizations win by using three mechanisms to slowify, simplify, and amplify, which systematically moves problem-solving from high-risk danger zones to low-risk winning zones.

“Oxford Research Encyclopedia of Economics and Finance” (Oxford University Press, 2024)
Chapter by Annie Thompson, lecturer in the MIT Center for Real Estate; Walter Torous, senior lecturer at the MIT Center for Real Estate; and William Torous

In their chapter, “What Causes Residential Mortgage Defaults?” the authors assess the voluminous research investigating why households default on their residential mortgages, with a particular focus on critically evaluating the recent application of causal statistical inference to residential mortgage defaults.

“Data Is Everybody’s Business: The Fundamentals of Data Monetization” (MIT Press, 2023)
By Barbara H. Wixom, principal research scientist at the MIT Sloan Center for Information Systems Research (MIT CISR); Leslie Owens, senior lecturer at the MIT Sloan School of Management and former executive director of MIT CISR; and Cynthia M. Beath

In “Data Is Everybody’s Business,” the authors offer a clear and engaging way for people across the entire organization to understand data monetization and make it happen. The authors identify three viable ways to convert data into money — improving work with data, wrapping products with data, and selling information offerings — and explain when to pursue each and how to succeed.

Arts, architecture, planning, and design

“The Routledge Handbook of Museums, Heritage, and Death” (Routledge, 2023)
Chapter by Laura Anderson Barbata, lecturer in MIT’s Program in Art, Culture, and Technology

This book provides an examination of death, dying, and human remains in museums and heritage sites around the world. In her chapter, “Julia Pastrana’s Long Journey Home,” Barbata describes the case of Julia Pastrana (1834-1860), an indigenous Mexican opera singer who suffered from hypertrichosis terminalis and gingival hyperplasia. Due to her appearance, Pastrana was exploited and exhibited for over 150 years, during her lifetime and, in an embalmed state, after her early death. Barbata sheds light on the ways in which the systems that justified Pastrana’s exploitation continue to operate today.

“Emergency INDEX: An Annual Document of Performance Practice, vol. 10” (Ugly Duckling Press, 2023)
Chapter by Gearoid Dolan, staff member in MIT’s Program in Art, Culture, and Technology

This “bible of performance art activity” documents performance projects from around the world. Dolan’s chapter describes “Protest ReEmbodied,” a performance that took place online during the Covid-19 lockdown. The performance was a live version of the ongoing “Protest ReEmbodied” project, an app that individuals can download and run on their computers to perform on camera, inserted into protest footage.

“Land Air Sea: Architecture and Environment in the Early Modern Era” (Brill, 2023)
Chapter by Caroline Murphy, the Clarence H. Blackall Career Development Assistant Professor in the Department of Architecture

“Land Air Sea” positions the long Renaissance and 18th century as being vital for understanding how many of the concerns present in contemporary debates on climate change and sustainability originated in earlier centuries. Murphy’s chapter examines how Girolamo di Pace da Prato, a state engineer in the Duchy of Florence, understood and sought to mitigate the problems of alluvial flooding in the mid-sixteenth century, an era of exceptional aquatic and environmental volatility.

Miscellaneous

“Made Here: Recipes and Reflections From NYC’s Asian Communities” (Send Chinatown Love, 2023)
Chapter by Robin Zhang, postdoc in mathematics, and Diana Le

In their chapter, “Flushing: The Melting Pot’s Melting Pot,” the authors explore how Flushing, New York — whose Chinatown is the largest and fastest-growing in the world — earned the title of the “melting pot’s melting pot” through its cultural history. Readers walk down its streets past snack stalls, fabric stores, language schools, hair salons, churches, and shrines, hearing English interspersed with Korean, several dialects of Chinese, Hindi, Bengali, Urdu, and the hundreds of other fibers that make up Flushing’s complex ethnolinguistic fabric.


How to increase the rate of plastics recycling

A national bottle deposit fee could make a dramatic difference in reducing plastic waste, MIT researchers report.


While recycling systems and bottle deposits have become increasingly widespread in the U.S., actual rates of recycling are “abysmal,” according to a team of MIT researchers who studied the rates for recycling of PET, the plastic commonly used in beverage bottles. However, their findings suggest some ways to change this.

The present rate of recycling for PET, or polyethylene terephthalate, bottles nationwide is about 24 percent and has remained stagnant for a decade, the researchers say. But their study indicates that with a nationwide bottle deposit program, the rates could increase to 82 percent, with nearly two-thirds of all PET bottles being recycled into new bottles, at a net cost of just a penny a bottle when demand is robust. At the same time, they say, policies would be needed to ensure a sufficient demand for the recycled material.

The findings are being published today in the Journal of Industrial Ecology, in a paper by MIT professor of materials science and engineering Elsa Olivetti, graduate students Basuhi Ravi and Karan Bhuwalka, and research scientist Richard Roth.

The team looked at PET bottle collection and recycling rates in different states and in other nations, with and without bottle deposit policies and curbside recycling programs, as well as the inputs and outputs of various recycling companies and methods. The researchers say this study is the first to look in detail at the interplay between public policies and the end-to-end realities of the packaging production and recycling market.

They found that bottle deposit programs are highly effective in the areas where they are in place, but at present there is not nearly enough collection of used bottles to meet the targets set by the packaging industry. Their analysis suggests that a uniform nationwide bottle deposit policy could achieve the levels of recycling that have been mandated by proposed legislation and corporate commitments.

The recycling of PET is highly successful in terms of quality, with new products made from all-recycled material virtually matching the qualities of virgin material. And brands have shown that new bottles can be safely made with 100 percent postconsumer waste. But the team found that collection of the material is a crucial bottleneck that leaves processing plants unable to meet their needs. However, with the right policies in place, “one can be optimistic,” says Olivetti, who is the Jerry McAfee Professor in Engineering and the associate dean of the School of Engineering.

“A message that we have found in a number of cases in the recycling space is that if you do the right work to support policies that think about both the demand but also the supply,” then significant improvements are possible, she says. “You have to think about the response and the behavior of multiple actors in the system holistically to be viable,” she says. “We are optimistic, but there are many ways to be pessimistic if we’re not thinking about that in a holistic way.”

For example, the study found that it is important to consider the needs of existing municipal waste-recovery facilities. While expanded bottle deposit programs are essential to increase recycling rates and provide feedstock to companies recycling PET into new products, the current facilities that process material from curbside recycling programs stand to lose revenue: PET bottles are a relatively high-value product compared to the other materials in the recycled waste stream, and diverting them to deposit programs would leave these facilities with only lower-value mixed plastics.

The researchers developed economic models based on rates of collection found in the states with deposit programs, recycled-content requirements, and other policies, and used these models to extrapolate to the nation as a whole. Overall, they found that the supply needs of packaging producers could be met through a nationwide bottle deposit system with a 10-cent deposit per bottle — at a net cost of about 1 cent per bottle produced when demand is strong. This need not be a federal program, but rather one where the implementation would be left up to the individual states, Olivetti says.
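
To make the shape of such a model concrete, here is a minimal back-of-envelope sketch, in Python, of how a roughly one-cent net cost per bottle can emerge. Only the 10-cent deposit and the 82 percent collection rate come from the study as reported here; the handling cost and the bale value of recovered PET are illustrative assumptions.

```python
# Back-of-envelope deposit-system cost model. The deposit and collection
# rate are the figures reported above; handling cost and bale value are
# illustrative assumptions, not numbers from the study.

def net_cost_per_bottle(deposit=0.10, collection_rate=0.82,
                        handling_cost=0.045, bale_value=0.01):
    """Net system cost in dollars per bottle produced."""
    handling = handling_cost * collection_rate   # collecting and sorting returns
    forfeited = deposit * (1 - collection_rate)  # unredeemed deposits offset costs
    material = bale_value * collection_rate      # revenue from recovered PET
    return handling - forfeited - material

print(f"net cost: {net_cost_per_bottle() * 100:.1f} cents per bottle")
```

Under these assumptions the model lands near one cent per bottle; a weaker market for recycled PET (a lower bale value) pushes the figure up, which is one reason the study stresses demand-side policies.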

Other countries have been much more successful in implementing deposit systems that result in very high participation rates. Several European countries manage to collect more than 90 percent of PET bottles for recycling, for example. But in the U.S., less than 29 percent are collected, and after losses in the recycling chain about 24 percent actually get recycled, the researchers found. Whereas 73 percent of Americans have access to curbside recycling, presently only 10 states have bottle deposit systems in place.

Yet the demand is there so far. “There is a market for this material,” says Olivetti. While bottles collected through mixed-waste collection can still be recycled to some extent, those collected through deposit systems tend to be much cleaner and require less processing, and so are more economical to recycle into new bottles, or into textiles.

To be effective, policies need to not just focus on increasing rates of recycling, but on the whole cycle of supply and demand and the different players involved, Olivetti says. Safeguards would need to be in place to protect existing recycling facilities from the lost revenues they would suffer as a result of bottle deposits, perhaps in the form of subsidies funded by fees on the bottle producers, to avoid putting these essential parts of the processing chain out of business. And other policies may be needed to ensure the continued market for the material that gets collected, including recycled content requirements and extended producer responsibility regulations, the team found.

At this stage, it’s important to focus on the specific waste streams that can most effectively be recycled, and PET, along with many metals, clearly fit that category. “When we start to think about mixed plastic streams, that’s much more challenging from an environmental perspective,” she says. “Recycling systems need to be pursuing extended producers’ responsibility, or specifically thinking about materials designed more effectively toward recycled content,” she says.

It’s also important to address “what the right metrics are to design for sustainably managed materials streams,” she says. “It could be energy use, could be circularity [for example, making old bottles into new bottles], could be around waste reduction, and making sure those are all aligned. That’s another kind of policy coordination that’s needed.”


Pioneering the future of materials extraction

MIT spinout SiTration looks to disrupt industries with a revolutionary process for recovering and extracting critical materials.


The next time you cook pasta, imagine that you are cooking spaghetti, rigatoni, and seven other varieties all together, and they need to be separated onto 10 different plates before serving. A colander can remove the water — but you still have a mound of unsorted noodles.
 
Now imagine that this had to be done for thousands of tons of pasta a day. That gives you an idea of the scale of the problem facing Brendan Smith PhD ’18, co-founder and CEO of SiTration, a startup formed out of MIT’s Department of Materials Science and Engineering (DMSE) in 2020.
 
SiTration, which raised $11.8 million in seed capital led by venture capital firm 2150 earlier this month, is revolutionizing the extraction and refining of copper, cobalt, nickel, lithium, precious metals, and other materials critical to manufacturing clean-energy technologies such as electric motors, wind turbines, and batteries. Its initial target applications are recovering the materials from complex mining feed streams, spent lithium-ion batteries from electric vehicles, and various metals refining processes.
 
The company’s breakthrough lies in a new silicon membrane technology that can be adjusted to efficiently recover disparate materials, providing a more sustainable and economically viable alternative to conventional, chemically intensive processes. Think of a colander with adjustable pores to strain different types of pasta. SiTration’s technology has garnered interest from industry players, including mining giant Rio Tinto.
 
Some observers may question whether targeting such different industries could cause the company to lose focus. “But when you dig into these markets, you discover there is actually a significant overlap in how all of these materials are recovered, making it possible for a single solution to have impact across verticals,” Smith says.

Powering up materials recovery

Conventional methods of extracting critical materials in mining, refining, and recycling lithium-ion batteries involve heavy use of chemicals and heat, which harm the environment. Typically, raw ore from mines or spent batteries are ground into fine particles before being dissolved in acid or incinerated in a furnace. Afterward, they undergo intensive chemical processing to separate and purify the valuable materials.
 
“It requires as much as 10 tons of chemical input to produce one ton of critical material recovered from the mining or battery recycling feedstock,” says Smith. Operators can then sell the recaptured materials back into the supply chain, but suffer from wide swings in profitability due to uncertain market prices. Lithium prices have been the most volatile, having surged more than 400 percent before tumbling back to near-original levels in the past two years. Despite their poor economics and negative environmental impact, these processes remain the state of the art today.
 
By contrast, SiTration is electrifying the critical-materials recovery process, improving efficiency while reducing the use of chemicals and heat and generating less chemical waste. What’s more, the company’s processing technology is built to be highly adaptable, so it can handle all kinds of materials.
 
The core technology is based on work done at MIT to develop a novel type of membrane made from silicon, which is durable enough to withstand harsh chemicals and high temperatures while conducting electricity. It’s also highly tunable, meaning it can be modified or adjusted to suit different conditions or target specific materials.
 
SiTration’s technology also incorporates electro-extraction, a technique that uses electrochemistry to further isolate and extract specific target materials. This powerful combination of methods in a single system makes it more efficient and effective at isolating and recovering valuable materials, Smith says. So depending on what needs to be separated or extracted, the filtration and electro-extraction processes are adjusted accordingly.
 
“We can produce membranes with pore sizes from the molecular scale up to the size of a human hair in diameter, and everything in between. Combined with the ability to electrify the membrane and separate based on a material’s electrochemical properties, this tunability allows us to target a vast array of different operations and separation applications across industrial fields,” says Smith.
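
As a rough illustration of that tunability, the sketch below pairs a pore size with an electro-extraction voltage for each target material. Every name and number is an invented placeholder meant to show the concept of one adjustable platform serving many separations; none are SiTration parameters.

```python
# Conceptual sketch of a tunable membrane plus electro-extraction platform.
# All pore sizes and voltages are invented placeholders, not real settings.
from dataclasses import dataclass

@dataclass
class SeparationConfig:
    pore_size_nm: float  # membrane pore diameter (molecular scale and up)
    cell_voltage: float  # drive voltage for electrochemical extraction

# Hypothetical per-target settings illustrating the adjustable platform.
CONFIGS = {
    "lithium": SeparationConfig(pore_size_nm=1.0, cell_voltage=2.5),
    "cobalt": SeparationConfig(pore_size_nm=5.0, cell_voltage=1.8),
    "copper": SeparationConfig(pore_size_nm=50.0, cell_voltage=1.2),
}

def configure(target: str) -> SeparationConfig:
    """Look up the separation settings for a given target material."""
    return CONFIGS[target]

print(configure("lithium"))
```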
 
Efficient access to materials like lithium, cobalt, and copper — and precious metals like platinum, gold, silver, palladium, and rare-earth elements — is key to unlocking innovation in business and sustainability as the world moves toward electrification and away from fossil fuels.

“This is an era when new materials are critical,” says Professor Jeffrey Grossman, co-founder and chief scientist of SiTration and the Morton and Claire Goulder and Family Professor in Environmental Systems at DMSE. “For so many technologies, they’re both the bottleneck and the opportunity, offering tremendous potential for non-incremental advances. And the role they’re having in commercialization and in entrepreneurship cannot be overstated.”

SiTration’s commercial frontier

Smith became interested in separation technology in 2013 as a PhD student in Grossman’s DMSE research group, which has focused on the design of new membrane materials for a range of applications. The two shared a curiosity about separation of critical materials and a hunger to advance the technology. After years of study under Grossman’s mentorship, and with support from several MIT incubators and foundations including the Deshpande Center for Technological Innovation, the Kavanaugh Fellowship, MIT Sandbox, and Venture Mentoring Service, Smith was ready to officially form SiTration in 2020. Grossman has a seat on the board and plays an active role as a strategic and technical advisor.
 
Grossman is involved in several MIT spinoffs and embraces the different imperatives of research versus commercialization. “At SiTration, we’re driving this technology to work at scale. There’s something super exciting about that goal,” he says. “The challenges that come with scaling are very different than the challenges that come in a university lab.” At the same time, although not every research breakthrough becomes a commercial product, open-ended, curiosity-driven knowledge pursuit holds its own crucial value, he adds.

It has been rewarding for Grossman to see his technically gifted student and colleague develop a host of other skills the role of CEO demands. Early on, getting out to the market and talking about the technology with potential partners, putting together a dynamic team, discovering the challenges facing industry, and drumming up support became the most pressing activities on Smith’s agenda.
 
“What’s most fun to me about being a CEO of an early-stage startup is that there are 100 different factors, most people-oriented, that you have to navigate every day. Each stakeholder has different motivations and objectives. And you basically try to fit that all together, to create value for our partners and customers, the company, and for society,” says Smith. “You start with just an idea, and you have to keep leveraging that to form a more and more tangible product, to multiply and progress commercial relationships, and do it all at an ever-expanding scale.”
 
MIT DNA runs deep in the nine-person company, with DMSE grad and former Grossman student Jatin Patil as director of product; Ahmed Helal, from MIT’s Department of Mechanical Engineering, as vice president of research and development; Daniel Bregante, from the Department of Chemistry, as VP of technology; and Sarah Melvin, from the departments of Physics and Political Science, as VP of strategy and operations. Melvin is the first hire devoted to business development. Smith plans to continue expanding the team following the closing of the company’s seed round.  

Strategic alliances

Being a good communicator was important when it came to securing funding, Smith says. SiTration received $2.35 million in pre-seed funding in 2022 led by Azolla Ventures, which reserves its $239 million in investment capital for startups that would not otherwise easily obtain funding. “We invest only in solution areas that can achieve gigaton-scale climate impact by 2050,” says Matthew Nordan, a general partner at Azolla and now SiTration board member. The MIT-affiliated E14 Fund also contributed to the pre-seed round; Azolla and E14 both participated in the recent seed funding round.
 
“Brendan demonstrated an extraordinary ability to go from being a thoughtful scientist to a business leader and thinker who has punched way above his weight in engaging with customers and recruiting a well-balanced team and navigating tricky markets,” says Nordan.
 
One of SiTration’s first partnerships is with Rio Tinto, one of the largest mining companies in the world. As SiTration evaluated various use cases in its early days, identifying critical materials as its target market, Rio Tinto was looking for partners to recover valuable metals such as cobalt and copper from the wastewater generated at mines. These metals were typically trapped in the water, creating harmful waste and resulting in lost revenue.
 
“We thought this was a great innovation challenge and posted it on our website to scout for companies to partner with who can help us solve this water challenge,” said Nick Gurieff, principal advisor for mine closure, in an interview with MIT’s Industrial Liaison Program in 2023.
 
At SiTration, mining was not yet a market focus, but Smith couldn’t help noticing that Rio Tinto’s needs were in alignment with what his young company offered. SiTration submitted its proposal in August 2022.
 
Gurieff said SiTration’s tunable membrane set it apart. The companies formed a business partnership in June 2023, with SiTration adjusting its membrane to handle mine wastewater and incorporating Rio Tinto feedback to refine the technology. After running tests with water from mine sites, SiTration will begin building a small-scale critical-materials recovery unit, followed by larger-scale systems processing up to 100 cubic meters of water an hour.

SiTration’s focused technology development with Rio Tinto puts it in a good position for future market growth, Smith says. “Every ounce of effort and resource we put into developing our product is geared towards creating real-world value. Having an industry-leading partner constantly validating our progress is a tremendous advantage.”

It has been a long journey from the days when Smith began tinkering with tiny holes in silicon in Grossman’s DMSE lab. Now, they work together as business partners who are scaling up technology to meet a global need. Their joint passion for applying materials innovation to tough problems has served them well. “Materials science and engineering is an engine for a lot of the innovation that is happening today,” Grossman says. “When you look at all of the challenges we face to make the transition to a more sustainable planet, you realize how many of these are materials challenges.”


MIT researchers identify routes to stronger titanium alloys

The new design approach could be used to produce metals with exceptional combinations of strength and ductility, for aerospace and other applications.


Titanium alloys are essential structural materials for a wide variety of applications, from aerospace and energy infrastructure to biomedical equipment. But like most metals, optimizing their properties tends to involve a tradeoff between two key characteristics: strength and ductility. Stronger materials tend to be less deformable, and deformable materials tend to be mechanically weak.

Now, researchers at MIT, collaborating with researchers at ATI Specialty Materials, have discovered an approach for creating new titanium alloys that can exceed this historical tradeoff, yielding alloys with exceptional combinations of strength and ductility that might open up new applications.

The findings are described in the journal Advanced Materials, in a paper by Shaolou Wei ScD ’22, Professor C. Cem Tasan, postdoc Kyung-Shik Kim, and John Foltz from ATI Inc. The improvements, the team says, arise from tailoring the chemical composition and the lattice structure of the alloy, while also adjusting the processing techniques used to produce the material at industrial scale.

Titanium alloys have been important because of their exceptional mechanical properties, corrosion resistance, and light weight compared, for example, to steels. Through careful selection of the alloying elements and their relative proportions, and of the way the material is processed, “you can create various different structures, and this creates a big playground for you to get good property combinations, both for cryogenic and elevated temperatures,” Tasan says.

But that big assortment of possibilities in turn requires a way to guide the selections to produce a material that meets the specific needs of a particular application. The analysis and experimental results described in the new study provide that guidance.

The structure of titanium alloys, all the way down to atomic scale, governs their properties, Tasan explains. And in some titanium alloys, this structure is even more complex, made up of two different intermixed phases, known as the alpha and beta phases.

“The key strategy in this design approach is to take considerations of different scales,” he says. “One scale is the structure of the individual crystal. For example, by choosing the alloying elements carefully, you can have a more ideal crystal structure of the alpha phase that enables particular deformation mechanisms. The other scale is the polycrystal scale, which involves interactions of the alpha and beta phases. So, the approach that’s followed here involves design considerations for both.”

In addition to choosing the right alloying materials and proportions, steps in the processing turned out to play an important role. A technique called cross-rolling is another key to achieving the exceptional combination of strength and ductility, the team found.

Working together with ATI researchers, the team tested a variety of alloys under a scanning electron microscope as they were being deformed, revealing details of how their microstructures respond to external mechanical load. They found that there was a particular set of parameters — of composition, proportions, and processing method — that yielded a structure where the alpha and beta phases shared the deformation uniformly, mitigating the cracking tendency that is likely to occur between the phases when they respond differently. “The phases deform in harmony,” Tasan says. This cooperative response to deformation can yield a superior material, they found.

“We looked at the structure of the material to understand these two phases and their morphologies, and we looked at their chemistries by carrying out local chemical analysis at the atomic scale. We adopted a wide variety of techniques to quantify various properties of the material across multiple length scales,” says Tasan, who is the POSCO Professor of Materials Science and Engineering and an associate professor of metallurgy. “When we look at the overall properties” of the titanium alloys produced according to their system, “the properties are really much better than comparable alloys.”

This was industry-supported academic research aimed at proving design principles for alloys that can be commercially produced at scale, according to Tasan. “What we do in this collaboration is really toward a fundamental understanding of crystal plasticity,” he says. “We show that this design strategy is validated, and we show scientifically how it works,” he adds, noting that there remains significant room for further improvement.

As for potential applications of these findings, he says, “for any aerospace application where an improved combination of strength and ductility is useful, this kind of invention is providing new opportunities.”

The work was supported by ATI Specialty Rolled Products and used facilities of MIT.nano and the Center for Nanoscale Systems at Harvard University.


Implantable microphone could lead to fully internal cochlear implants

This tiny, biocompatible sensor may overcome one of the biggest hurdles that prevent the devices from being completely implanted.


Cochlear implants, tiny electronic devices that can provide a sense of sound to people who are deaf or hard of hearing, have helped improve hearing for more than a million people worldwide, according to the National Institutes of Health.

However, cochlear implants today are only partially implanted, and they rely on external hardware that typically sits on the side of the head. These components restrict users, who can’t, for instance, swim, exercise, or sleep while wearing the external unit, and they may cause others to forgo the implant altogether.

On the way to creating a fully internal cochlear implant, a multidisciplinary team of researchers at MIT, Massachusetts Eye and Ear, Harvard Medical School, and Columbia University has produced an implantable microphone that performs as well as commercial external hearing aid microphones. The microphone has been one of the largest roadblocks to adopting a fully internalized cochlear implant.

This tiny microphone, a sensor produced from a biocompatible piezoelectric material, measures minuscule movements on the underside of the ear drum. Piezoelectric materials generate an electric charge when compressed or stretched. To maximize the device’s performance, the team also developed a low-noise amplifier that enhances the signal while minimizing noise from the electronics.

While many challenges must be overcome before such a microphone could be used with a cochlear implant, the collaborative team looks forward to further refining and testing this prototype, which builds off work begun at MIT and Mass Eye and Ear more than a decade ago.

“It starts with the ear doctors who are with this every day of the week, trying to improve people’s hearing, recognizing a need, and bringing that need to us. If it weren’t for this team collaboration, we wouldn’t be where we are today,” says Jeffrey Lang, the Vitesse Professor of Electrical Engineering, a member of the Research Laboratory of Electronics (RLE), and co-senior author of a paper on the microphone.

Lang’s coauthors include co-lead authors Emma Wawrzynek, an electrical engineering and computer science (EECS) graduate student, and Aaron Yeiser SM ’21; as well as mechanical engineering graduate student John Zhang; Lukas Graf and Christopher McHugh of Mass Eye and Ear; Ioannis Kymissis, the Kenneth Brayer Professor of Electrical Engineering at Columbia; Elizabeth S. Olson, a professor of biomedical engineering and auditory biophysics at Columbia; and co-senior author Hideko Heidi Nakajima, an associate professor of otolaryngology-head and neck surgery at Harvard Medical School and Mass Eye and Ear. The research is published today in the Journal of Micromechanics and Microengineering.

Overcoming an implant impasse

Cochlear implant microphones are usually placed on the side of the head, which means that users can’t take advantage of noise filtering and sound localization cues provided by the structure of the outer ear.

Fully implantable microphones offer many advantages. But most devices currently in development, which sense sound under the skin or the motion of middle ear bones, can struggle to capture soft sounds and a wide range of frequencies.

For the new microphone, the team targeted a part of the middle ear called the umbo. The umbo vibrates along a single axis (inward and outward), making its simple motion easier to sense.

Although the umbo has the largest range of movement of the middle-ear bones, it only moves by a few nanometers. Developing a device to measure such diminutive vibrations presents its own challenges.

On top of that, any implantable sensor must be biocompatible and able to withstand the body’s humid, dynamic environment without causing harm, which limits the materials that can be used.

“Our goal is that a surgeon implants this device at the same time as the cochlear implant and internalized processor, which means optimizing the surgery while working around the internal structures of the ear without disrupting any of the processes that go on in there,” Wawrzynek says.

With careful engineering, the team overcame these challenges.

They created the UmboMic, a triangular, 3-millimeter by 3-millimeter motion sensor composed of two layers of a biocompatible piezoelectric material called polyvinylidene difluoride (PVDF). These PVDF layers are sandwiched on either side of a flexible printed circuit board (PCB), forming a microphone that is about the size of a grain of rice and 200 micrometers thick. (An average human hair is about 100 micrometers thick.)

The narrow tip of the UmboMic would be placed against the umbo. When the umbo vibrates and pushes against the piezoelectric material, the PVDF layers bend and generate electric charges, which are measured by electrodes in the PCB layer.

Amplifying performance

The team used a “PVDF sandwich” design to reduce noise. When the sensor is bent, one layer of PVDF produces a positive charge and the other produces a negative charge. Electrical interference adds to both equally, so taking the difference between the charges cancels out the noise.
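
A small numerical sketch of that differential readout, with invented amplitudes and noise levels: bending drives the two layers with opposite polarity while interference couples into both identically, so subtracting one trace from the other doubles the signal and cancels the common-mode noise.

```python
# Toy model of the differential "PVDF sandwich" readout. Signal and noise
# amplitudes are invented for illustration, not device measurements.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.01, 1000)               # 10 ms window
signal = 1e-3 * np.sin(2 * np.pi * 1000 * t)   # bending signal at 1 kHz

# Power-line hum plus broadband pickup couples into both layers identically.
interference = 5e-3 * np.sin(2 * np.pi * 60 * t) \
    + 1e-3 * rng.standard_normal(t.size)

layer_positive = +signal + interference        # one PVDF layer
layer_negative = -signal + interference        # the oppositely poled layer

differential = layer_positive - layer_negative  # common-mode term cancels
print(np.allclose(differential, 2 * signal))    # True: doubled, noise-free signal
```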

Using PVDF provides many advantages, but the material made fabrication especially difficult. PVDF loses its piezoelectric properties when exposed to temperatures above around 80 degrees Celsius, yet very high temperatures are needed to vaporize and deposit titanium, another biocompatible material, onto the sensor. Wawrzynek worked around this problem by depositing the titanium gradually and employing a heat sink to cool the PVDF.

But developing the sensor was only half the battle — umbo vibrations are so tiny that the team needed to amplify the signal without introducing too much noise. When they couldn’t find a suitable low-noise amplifier that also used very little power, they built their own.

With both prototypes in place, the researchers tested the UmboMic in human ear bones from cadavers and found that it had robust performance within the intensity and frequency range of human speech. The microphone and amplifier together also have a low noise floor, which means they could distinguish very quiet sounds from the overall noise level.

“One thing we saw that was really interesting is that the frequency response of the sensor is influenced by the anatomy of the ear we are experimenting on, because the umbo moves slightly differently in different people’s ears,” Wawrzynek says.

The researchers are preparing to launch live animal studies to further explore this finding. These experiments will also help them determine how the UmboMic responds to being implanted.

In addition, they are studying ways to encapsulate the sensor so it can remain in the body safely for up to 10 years but still be flexible enough to capture vibrations. Implants are often packaged in titanium, which would be too rigid for the UmboMic. They also plan to explore methods for mounting the UmboMic that won’t introduce vibrations.

“The results in this paper show the necessary broad-band response and low noise needed to act as an acoustic sensor. This result is surprising, because the bandwidth and noise floor are so competitive with the commercial hearing aid microphone. This performance shows the promise of the approach, which should inspire others to adopt this concept. I would expect that smaller size sensing elements and lower power electronics would be needed for next generation devices to enhance ease of implantation and battery life issues,” says Karl Grosh, professor of mechanical engineering at the University of Michigan, who was not involved with this work.

This research was funded, in part, by the National Institutes of Health, the National Science Foundation, the Cloetta Foundation in Zurich, Switzerland, and the Research Fund of the University of Basel, Switzerland.


A prosthesis driven by the nervous system helps people with amputation walk naturally

A new surgical procedure gives people more neural feedback from their residual limb. With it, seven patients walked more naturally and navigated obstacles.


State-of-the-art prosthetic limbs can help people with amputations achieve a natural walking gait, but they don’t give the user full neural control over the limb. Instead, they rely on robotic sensors and controllers that move the limb using predefined gait algorithms.

Using a new type of surgical intervention and neuroprosthetic interface, MIT researchers, in collaboration with colleagues from Brigham and Women’s Hospital, have shown that a natural walking gait is achievable using a prosthetic leg fully driven by the body’s own nervous system. The surgical amputation procedure reconnects muscles in the residual limb, which allows patients to receive “proprioceptive” feedback about where their prosthetic limb is in space.

In a study of seven patients who had this surgery, the MIT team found that they were able to walk faster, avoid obstacles, and climb stairs much more naturally than people with a traditional amputation.

“This is the first prosthetic study in history that shows a leg prosthesis under full neural modulation, where a biomimetic gait emerges. No one has been able to show this level of brain control that produces a natural gait, where the human’s nervous system is controlling the movement, not a robotic control algorithm,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Patients also experienced less pain and less muscle atrophy following this surgery, which is known as the agonist-antagonist myoneural interface (AMI). So far, about 60 patients around the world have received this type of surgery, which can also be done for people with arm amputations.

Hyungeun Song, a postdoc in MIT’s Media Lab, is the lead author of the paper, which appears today in Nature Medicine.

Sensory feedback

Most limb movement is controlled by pairs of muscles that take turns stretching and contracting. During a traditional below-the-knee amputation, the interactions of these paired muscles are disrupted. This makes it very difficult for the nervous system to sense the position of a muscle and how fast it’s contracting — sensory information that is critical for the brain to decide how to move the limb.

People with this kind of amputation may have trouble controlling their prosthetic limb because they can’t accurately sense where the limb is in space. Instead, they rely on robotic controllers built into the prosthetic limb. These limbs also include sensors that can detect and adjust to slopes and obstacles.

To try to help people achieve a natural gait under full nervous system control, Herr and his colleagues began developing the AMI surgery several years ago. Instead of severing natural agonist-antagonist muscle interactions, they connect the two ends of the muscles so that they still dynamically communicate with each other within the residual limb. This surgery can be done during a primary amputation, or the muscles can be reconnected after the initial amputation as part of a revision procedure.

“With the AMI amputation procedure, to the greatest extent possible, we attempt to connect native agonists to native antagonists in a physiological way so that after amputation, a person can move their full phantom limb with physiologic levels of proprioception and range of movement,” Herr says.

In a 2021 study, Herr’s lab found that patients who had this surgery were able to more precisely control the muscles of their amputated limb, and that those muscles produced electrical signals similar to those from their intact limb.

After those encouraging results, the researchers set out to explore whether those electrical signals could generate commands for a prosthetic limb and at the same time give the user feedback about the limb’s position in space. The person wearing the prosthetic limb could then use that proprioceptive feedback to volitionally adjust their gait as needed.

In the new Nature Medicine study, the MIT team found this sensory feedback did indeed translate into a smooth, near-natural ability to walk and navigate obstacles.

“Because of the AMI neuroprosthetic interface, we were able to boost that neural signaling, preserving as much as we could. This was able to restore a person's neural capability to continuously and directly control the full gait, across different walking speeds, stairs, slopes, even going over obstacles,” Song says.

A natural gait

For this study, the researchers compared seven people who had the AMI surgery with seven who had traditional below-the-knee amputations. All of the subjects used the same type of bionic limb: a prosthesis with a powered ankle as well as electrodes that can sense electromyography (EMG) signals from the tibialis anterior and gastrocnemius muscles. These signals are fed into a robotic controller that helps the prosthesis calculate how much to bend the ankle, how much torque to apply, or how much power to deliver.
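
As a sketch of the idea rather than the study’s actual control law, the snippet below maps the difference between the two muscles’ normalized EMG activations to an ankle torque command. The simple agonist-antagonist difference rule and the gain are assumptions made for illustration.

```python
# Toy EMG-proportional ankle controller. The agonist-antagonist difference
# rule and the gain are illustrative assumptions, not the study's algorithm.

def ankle_torque_command(emg_gastrocnemius: float,
                         emg_tibialis_anterior: float,
                         gain_nm: float = 90.0) -> float:
    """Map normalized EMG (0 to 1) from the paired muscles to ankle torque.

    Positive torque plantar-flexes the ankle (push-off); negative torque
    dorsiflexes it (lifting the toes, e.g., to clear an obstacle).
    """
    activation = emg_gastrocnemius - emg_tibialis_anterior
    return gain_nm * activation

print(ankle_torque_command(0.8, 0.1))   # strong push-off: 63.0 Nm
print(ankle_torque_command(0.05, 0.6))  # toe lift: -49.5 Nm
```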

The researchers tested the subjects in several different situations: level-ground walking across a 10-meter pathway, walking up a slope, walking down a ramp, walking up and down stairs, and walking on a level surface while avoiding obstacles.

In all of these tasks, the people with the AMI neuroprosthetic interface were able to walk faster — at about the same rate as people without amputations — and navigate around obstacles more easily. They also showed more natural movements, such as pointing the toes of the prosthesis upward while going up stairs or stepping over an obstacle, and they were better able to coordinate the movements of their prosthetic limb and their intact limb. They were also able to push off the ground with the same amount of force as someone without an amputation.

“With the AMI cohort, we saw natural biomimetic behaviors emerge,” Herr says. “The cohort that didn’t have the AMI, they were able to walk, but the prosthetic movements weren’t natural, and their movements were generally slower.”

These natural behaviors emerged even though the amount of sensory feedback provided by the AMI was less than 20 percent of what would normally be received in people without an amputation.

“One of the main findings here is that a small increase in neural feedback from your amputated limb can restore significant bionic neural controllability, to a point where you allow people to directly neurally control the speed of walking, adapt to different terrain, and avoid obstacles,” Song says.

“This work represents yet another step in us demonstrating what is possible in terms of restoring function in patients who suffer from severe limb injury. It is through collaborative efforts such as this that we are able to make transformational progress in patient care,” says Matthew Carty, a surgeon at Brigham and Women’s Hospital and associate professor at Harvard Medical School, who is also an author of the paper.

Enabling neural control by the person using the limb is a step toward Herr’s lab’s goal of “rebuilding human bodies,” rather than having people rely on ever more sophisticated robotic controllers and sensors — tools that are powerful but do not feel like part of the user’s body.

“The problem with that long-term approach is that the user would never feel embodied with their prosthesis. They would never view the prosthesis as part of their body, part of self,” Herr says. “The approach we’re taking is trying to comprehensively connect the brain of the human to the electromechanics.”

The research was funded by the MIT K. Lisa Yang Center for Bionics and the Eunice Kennedy Shriver National Institute of Child Health and Human Development.


“Rollerama” roller rink opens in Kendall Square

A summertime installation by MIT’s real estate group features free roller-skating and fun activities for the broader community.


The former U.S. Department of Transportation (DOT) Volpe Center site — now named “Kendall Common” in anticipation of its transformation into a vibrant mixed-use development — is now activated and open to all this summer. “Rollerama at Kendall Common” offers free roller-skating and roller skate rentals, community programming, and family-friendly events through September.

“We are extremely excited to bring Kendall Common to life in a way that is inviting and authentically Cambridge, while channeling MIT’s spirit of innovation throughout the project,” says Patrick Rowe, senior vice president, MIT Investment Management Co. “This parcel of land — right in the heart of Kendall Square — has been closed off to local residents and visitors for far too long, and we look forward to opening it up and making it accessible for all to utilize and enjoy.”

Located at the corner of Broadway and Third Street, Rollerama offers specialty themed skating nights and live entertainment, as well as food and beverages from local restaurants for purchase. Optional skate rental donations will be directed to local nonprofits. A highlight of the space is a new 7,000-square-foot mural by Boston-based artist Massiel Grullón featuring retro-inspired shapes.

The first of two opening weekends took place June 28-30; the next one will be July 5-7 from 2-8 p.m. on Fridays, and 11 a.m. to 8 p.m. on Saturdays and Sundays. From July 10 through Sept. 29, Rollerama will be open Wednesdays, Thursdays, and Fridays from 2-8 p.m., and on Saturdays and Sundays from 11 a.m. to 8 p.m.

“We’re delighted to see this underutilized space activated with vibrant and playful programming,” says Jess Smith, director of MIT Open Space Programming. “Rollerama will add to the energy of Kendall Square and provide yet another compelling reason for employees, residents, students, and visitors to mix and mingle here. With food and drink available from Cambridge partners and voluntary donations going to Cambridge nonprofits, these activities in Kendall Common will contribute significantly to the sense of community in Kendall.”

The activation of Kendall Common will complement other new additions MIT has recently brought to the Kendall Square neighborhood, including Ripple Café, Row 34, Life Alive Café, Locke Bar, and Flat Top Johnny’s, along with the MIT Museum and MIT Press Book Store.

MIT assumed ownership of 10 acres of the former U.S. DOT Volpe Center site in Kendall Square earlier this year, and will commence infrastructure and site preparation for the redevelopment this fall. Over the coming years, MIT aims to transform Kendall Common into a vibrant, mixed-use development that will strengthen connections in the Cambridge community through new open green spaces, housing, retail offerings, restaurants, a community center, and science and innovation facilities.

Kendall Common will eventually include four residential buildings, four commercial buildings, four parks, and a community center. Designed to be an inclusive and equitable urban environment with a focus on sustainability, the development is intended to nurture and inspire the local community.

For more information visit the Kendall Common website, Instagram page, and Facebook page.


Scientists observe record-setting electron mobility in a new crystal film

The newly synthesized material could be the basis for wearable thermoelectric and spintronic devices.


A material with a high electron mobility is like a highway without traffic. Any electrons that flow into the material experience a commuter’s dream, breezing through without any obstacles or congestion to slow or scatter them off their path.

The higher a material’s electron mobility, the more efficient its electrical conductivity, and the less energy is lost or wasted as electrons zip through. Advanced materials that exhibit high electron mobility will be essential for more efficient and sustainable electronic devices that can do more work with less power.

Now, physicists at MIT, the Army Research Lab, and elsewhere have achieved a record-setting level of electron mobility in a thin film of ternary tetradymite — a class of mineral that is naturally found in deep hydrothermal deposits of gold and quartz.

For this study, the scientists grew pure, ultrathin films of the material, in a way that minimized defects in its crystalline structure. They found that this nearly perfect film — much thinner than a human hair — exhibits the highest electron mobility in its class.

The team was able to estimate the material’s electron mobility by detecting quantum oscillations when electric current passes through. These oscillations are a signature of the quantum mechanical behavior of electrons in a material. The researchers detected a particular rhythm of oscillations that is characteristic of high electron mobility — higher than in any ternary thin film of this class to date.
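
To give a rough sense of how such oscillations translate into a mobility estimate: oscillations of this kind are periodic in the inverse of the magnetic field and become visible roughly once the product of mobility and field exceeds one, so the lowest field at which they appear bounds the mobility from below. The sketch below illustrates this on a synthetic trace; every number in it is an assumption, not data from the study.

```python
# Synthetic illustration: quantum oscillations become visible roughly when
# mobility * field > 1, so the onset field bounds the mobility from below.
# All numbers are assumptions, not measurements from the study.
import numpy as np

B = np.linspace(0.2, 9.0, 5000)     # magnetic field sweep (tesla)
mu = 2.0                            # assumed mobility, m^2/(V s)
f_osc = 50.0                        # oscillation frequency (tesla)

envelope = np.exp(-1.0 / (mu * B))  # amplitude grows once mu * B exceeds ~1
dR = envelope * np.cos(2 * np.pi * f_osc / B)  # oscillations periodic in 1/B

# First field at which the oscillations clear a visibility threshold:
onset = B[np.argmax(np.abs(dR) > 0.5 * np.abs(dR).max())]
print(f"onset ~ {onset:.2f} T -> mobility bound ~ {1 / onset:.1f} m^2/(V s)")
```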

“Before, what people had achieved in terms of electron mobility in these systems was like traffic on a road under construction — you’re backed up, you can’t drive, it’s dusty, and it’s a mess,” says Jagadeesh Moodera, a senior research scientist in MIT’s Department of Physics. “In this newly optimized material, it’s like driving on the Mass Pike with no traffic.”

The team’s results, which appear today in the journal Materials Today Physics, point to ternary tetradymite thin films as a promising material for future electronics, such as wearable thermoelectric devices that efficiently convert waste heat into electricity. (Tetradymites are the active materials that cause the cooling effect in commercial thermoelectric coolers.) The material could also be the basis for spintronic devices, which process information using an electron’s spin, using far less power than conventional silicon-based devices.

The study also uses quantum oscillations as a highly effective tool for measuring a material’s electronic performance.

“We are using this oscillation as a rapid test kit,” says study author Hang Chi, a former research scientist at MIT who is now at the University of Ottawa. “By studying this delicate quantum dance of electrons, scientists can start to understand and identify new materials for the next generation of technologies that will power our world.”

Chi and Moodera’s co-authors include Patrick Taylor, formerly of MIT Lincoln Laboratory, along with Owen Vail and Harry Hier of the Army Research Lab, and Brandi Wooten and Joseph Heremans of Ohio State University.

Beam down

The name “tetradymite” derives from the Greek “tetra” for “four,” and “dymite,” meaning “twin.” Both terms describe the mineral’s crystal structure, which consists of rhombohedral crystals that are “twinned” in groups of four — i.e., they have identical crystal structures that share a side.

Tetradymites comprise combinations of bismuth, antimony, tellurium, sulfur, and selenium. In the 1950s, scientists found that tetradymites exhibit semiconducting properties that could be ideal for thermoelectric applications: The mineral in its bulk crystal form was able to passively convert heat into electricity.

Then, in the 1990s, the late Institute Professor Mildred Dresselhaus proposed that the mineral’s thermoelectric properties might be significantly enhanced, not in its bulk form but within its microscopic, nanometer-scale surface, where the interactions of electrons are more pronounced. (Heremans happened to work in Dresselhaus’ group at the time.)

“It became clear that when you look at this material long enough and close enough, new things will happen,” Chi says. “This material was identified as a topological insulator, where scientists could see very interesting phenomena on its surface. But to keep uncovering new things, we have to master the material growth.”

To grow thin films of pure crystal, the researchers employed molecular beam epitaxy — a method by which a beam of molecules is fired at a substrate, typically in a vacuum, and with precisely controlled temperatures. When the molecules deposit on the substrate, they condense and build up slowly, one atomic layer at a time. By controlling the timing and type of molecules deposited, scientists can grow ultrathin crystal films in exact configurations, with few if any defects.

“Normally, bismuth and tellurium can interchange their position, which creates defects in the crystal,” co-author Taylor explains. “The system we used to grow these films came down with me from MIT Lincoln Laboratory, where we use high purity materials to minimize impurities to undetectable limits. It is the perfect tool to explore this research.”

Free flow

The team grew thin films of ternary tetradymite, each about 100 nanometers thin. They then tested the film’s electronic properties by looking for Shubnikov-de Haas quantum oscillations — a phenomenon that was discovered by physicists Lev Shubnikov and Wander de Haas, who found that a material’s electrical conductivity can oscillate when exposed to a strong magnetic field at low temperatures. This effect occurs because the material’s electrons fill up specific energy levels that shift as the magnetic field changes.
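Because the oscillations are periodic in 1/B, their frequency can be read off with a Fourier transform and related to the Fermi surface through the Onsager relation. The sketch below illustrates the idea on synthetic data; it is not the study’s analysis code, and every number in it is made up.

```python
import numpy as np

# Shubnikov-de Haas toy analysis: resample the resistance on a uniform
# 1/B grid, Fourier-transform it, and read off the oscillation frequency.
e, hbar = 1.602e-19, 1.055e-34

B = np.linspace(2.0, 12.0, 4000)                   # field sweep (tesla)
F_true = 35.0                                      # oscillation frequency (T), made up
rho = 1.0 + 0.05 * np.cos(2 * np.pi * F_true / B)  # toy resistance trace

inv_B = np.linspace(1 / B.max(), 1 / B.min(), B.size)       # uniform 1/B grid
rho_u = np.interp(inv_B, 1 / B[::-1], rho[::-1])            # resampled trace

spectrum = np.abs(np.fft.rfft(rho_u - rho_u.mean()))
freqs = np.fft.rfftfreq(inv_B.size, d=inv_B[1] - inv_B[0])  # in tesla
F_est = freqs[np.argmax(spectrum)]

# Onsager relation: F = (hbar / 2*pi*e) * A_F, where A_F is the extremal
# Fermi-surface cross-section perpendicular to the field.
A_F = 2 * np.pi * e * F_est / hbar
print(f"estimated SdH frequency: {F_est:.1f} T; A_F ~ {A_F:.2e} m^-2")
```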

Such quantum oscillations serve as a signature of a material’s electronic structure and of the ways in which its electrons behave and interact. Most notably for the MIT team, the oscillations reveal a material’s electron mobility: the oscillations can only appear if the electrons are able to orbit in the magnetic field without scattering, which implies that they are mobile and flow easily.

The team looked for signs of quantum oscillations in their new films, by first exposing them to ultracold temperatures and a strong magnetic field, then running an electric current through the film and measuring the voltage along its path, as they tuned the magnetic field up and down.

“It turns out, to our great joy and excitement, that the material’s electrical resistance oscillates,” Chi says. “Immediately, that tells you that this has very high electron mobility.”

Specifically, the team estimates that the ternary tetradymite thin film exhibits an electron mobility of 10,000 cm²/V·s — the highest mobility of any ternary tetradymite film yet measured. The team suspects that the film’s record mobility has something to do with its low defects and impurities, which they were able to minimize with their precise growth strategies. The fewer a material’s defects, the fewer obstacles an electron encounters, and the more freely it can flow.
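As a rough consistency check (a rule of thumb, not the paper’s analysis), quantum oscillations become resolvable only once the product of mobility and field approaches unity, that is, once an electron completes a cyclotron orbit before scattering. A mobility of 10,000 cm²/V·s therefore implies oscillations should appear at fields on the order of 1 tesla:

```python
# Rule-of-thumb check (not from the paper): oscillations require mu * B >~ 1.
mu_cm2 = 10_000          # reported mobility, cm^2/(V*s)
mu_SI = mu_cm2 * 1e-4    # 1 cm^2 = 1e-4 m^2  ->  1.0 m^2/(V*s)
B_onset = 1 / mu_SI      # field scale where oscillations become visible
print(f"mu = {mu_SI:.1f} m^2/Vs -> oscillation onset near {B_onset:.1f} T")
```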

“This is showing it’s possible to go a giant step further, when properly controlling these complex systems,” Moodera says. “This tells us we’re in the right direction, and we have the right system to proceed further, to keep perfecting this material down to even much thinner films and proximity coupling for use in future spintronics and wearable thermoelectric devices.”

This research was supported in part by the Army Research Office, National Science Foundation, Office of Naval Research, Canada Research Chairs Program and Natural Sciences and Engineering Research Council of Canada.


Study reveals why AI models that analyze medical images can be biased

These models, which can predict a patient’s race, gender, and age, seem to use those traits as shortcuts when making medical diagnoses.


Artificial intelligence models often play a role in medical diagnoses, especially when it comes to analyzing images such as X-rays. However, studies have found that these models don’t always perform well across all demographic groups, usually faring worse on women and people of color.

These models have also been shown to develop some surprising abilities. In 2022, MIT researchers reported that AI models can make accurate predictions about a patient’s race from their chest X-rays — something that the most skilled radiologists can’t do.

That research team has now found that the models that are most accurate at making demographic predictions also show the biggest “fairness gaps” — that is, discrepancies in their ability to accurately diagnose images of people of different races or genders. The findings suggest that these models may be using “demographic shortcuts” when making their diagnostic evaluations, which lead to incorrect results for women, Black people, and other groups, the researchers say.

“It’s well-established that high-capacity machine-learning models are good predictors of human demographics such as self-reported race or sex or age. This paper re-demonstrates that capacity, and then links that capacity to the lack of performance across different groups, which has never been done,” says Marzyeh Ghassemi, an MIT associate professor of electrical engineering and computer science, a member of MIT’s Institute for Medical Engineering and Science, and the senior author of the study.

The researchers also found that they could retrain the models in a way that improves their fairness. However, their approaches to “debiasing” worked best when the models were tested on the same types of patients they were trained on, such as patients from the same hospital. When these models were applied to patients from different hospitals, the fairness gaps reappeared.

“I think the main takeaways are, first, you should thoroughly evaluate any external models on your own data because any fairness guarantees that model developers provide on their training data may not transfer to your population. Second, whenever sufficient data is available, you should train models on your own data,” says Haoran Zhang, an MIT graduate student and one of the lead authors of the new paper. MIT graduate student Yuzhe Yang is also a lead author of the paper, which appears today in Nature Medicine. Judy Gichoya, an associate professor of radiology and imaging sciences at Emory University School of Medicine, and Dina Katabi, the Thuan and Nicole Pham Professor of Electrical Engineering and Computer Science at MIT, are also authors of the paper.

Removing bias

As of May 2024, the FDA has approved 882 AI-enabled medical devices, with 671 of them designed to be used in radiology. Since 2022, when Ghassemi and her colleagues showed that these diagnostic models can accurately predict race, they and other researchers have shown that such models are also very good at predicting gender and age, even though the models are not trained on those tasks.

“Many popular machine learning models have superhuman demographic prediction capacity — radiologists cannot detect self-reported race from a chest X-ray,” Ghassemi says. “These are models that are good at predicting disease, but during training are learning to predict other things that may not be desirable.”

In this study, the researchers set out to explore why these models don’t work as well for certain groups. In particular, they wanted to see if the models were using demographic shortcuts to make predictions that ended up being less accurate for some groups. These shortcuts can arise in AI models when they use demographic attributes to determine whether a medical condition is present, instead of relying on other features of the images.

Using publicly available chest X-ray datasets from Beth Israel Deaconess Medical Center in Boston, the researchers trained models to predict whether patients had one of three different medical conditions: fluid buildup in the lungs, collapsed lung, or enlargement of the heart. Then, they tested the models on X-rays that were held out from the training data.

Overall, the models performed well, but most of them displayed “fairness gaps” — that is, discrepancies between accuracy rates for men and women, and for white and Black patients.

The models were also able to predict the gender, race, and age of the X-ray subjects. Additionally, there was a significant correlation between each model’s accuracy in making demographic predictions and the size of its fairness gap. This suggests that the models may be using demographic categorizations as a shortcut to make their disease predictions.
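A fairness gap of this kind is straightforward to quantify in code. The sketch below uses synthetic labels and predictions and measures the per-group accuracy difference; it illustrates the concept rather than reproducing the paper’s exact metrics.

```python
import numpy as np

# Illustrative fairness-gap computation on synthetic data. A toy model
# is made slightly less accurate on group 1; the gap is the difference
# in per-group accuracy. (Hypothetical numbers throughout.)
rng = np.random.default_rng(0)

n = 10_000
group = rng.integers(0, 2, size=n)     # two demographic groups, 0 and 1
y_true = rng.integers(0, 2, size=n)    # disease present / absent

flip = rng.random(n) < np.where(group == 0, 0.10, 0.18)  # unequal error rates
y_pred = np.where(flip, 1 - y_true, y_true)

acc = [np.mean(y_pred[group == g] == y_true[group == g]) for g in (0, 1)]
print(f"group accuracies: {acc[0]:.3f} vs {acc[1]:.3f}")
print(f"fairness gap: {abs(acc[0] - acc[1]):.3f}")
```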

The researchers then tried to reduce the fairness gaps using two types of strategies. They trained one set of models to optimize “subgroup robustness,” meaning that the models are rewarded for improving performance on the subgroup where they perform worst, and penalized if the error rate for one group is higher than for the others.

In another set of models, the researchers forced them to remove any demographic information from the images, using “group adversarial” approaches. Both strategies worked fairly well, the researchers found.

“For in-distribution data, you can use existing state-of-the-art methods to reduce fairness gaps without making significant trade-offs in overall performance,” Ghassemi says. “Subgroup robustness methods force models to be sensitive to mispredicting a specific group, and group adversarial methods try to remove group information completely.”
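Schematically, a subgroup-robustness objective replaces the average loss with the worst group’s loss, while a group-adversarial setup trains the model to defeat an auxiliary network that tries to recover the group label from the model’s features. The fragment below sketches the first idea on made-up inputs; it is not the authors’ implementation.

```python
import numpy as np

# Worst-group objective in the spirit of subgroup-robustness methods:
# optimize the loss of the worst-performing demographic subgroup rather
# than the average loss over all samples.
def worst_group_loss(per_sample_loss: np.ndarray, group: np.ndarray) -> float:
    group_means = [per_sample_loss[group == g].mean() for g in np.unique(group)]
    return max(group_means)  # training would take gradient steps on this term

losses = np.array([0.2, 0.9, 0.4, 0.8])  # made-up per-sample losses
groups = np.array([0, 1, 0, 1])          # made-up group labels
print(worst_group_loss(losses, groups))  # 0.85: group 1 dominates the objective
```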

Not always fairer

However, those approaches only worked when the models were tested on data from the same types of patients that they were trained on — for example, only patients from the Beth Israel Deaconess Medical Center dataset.

When the researchers tested the models that had been “debiased” using the BIDMC data to analyze patients from five other hospital datasets, they found that the models’ overall accuracy remained high, but some of them exhibited large fairness gaps.

“If you debias the model in one set of patients, that fairness does not necessarily hold as you move to a new set of patients from a different hospital in a different location,” Zhang says.

This is worrisome because in many cases, hospitals use models that have been developed on data from other hospitals, especially in cases where an off-the-shelf model is purchased, the researchers say.

“We found that even state-of-the-art models which are optimally performant in data similar to their training sets are not optimal — that is, they do not make the best trade-off between overall and subgroup performance — in novel settings,” Ghassemi says. “Unfortunately, this is actually how a model is likely to be deployed. Most models are trained and validated with data from one hospital, or one source, and then deployed widely.”

The researchers found that the models that were debiased using group adversarial approaches showed slightly more fairness when tested on new patient groups than those debiased with subgroup robustness methods. They now plan to try to develop and test additional methods to see if they can create models that do a better job of making fair predictions on new datasets.

The findings suggest that hospitals that use these types of AI models should evaluate them on their own patient population before beginning to use them, to make sure they aren’t giving inaccurate results for certain groups.

The research was funded by a Google Research Scholar Award, the Robert Wood Johnson Foundation Harold Amos Medical Faculty Development Program, RSNA Health Disparities, the Lacuna Fund, the Gordon and Betty Moore Foundation, the National Institute of Biomedical Imaging and Bioengineering, and the National Heart, Lung, and Blood Institute.


Leaning into the immune system’s complexity

By designing new tools that can analyze huge libraries of immune cells and their targets, Michael Birnbaum hopes to generate better T cell therapies for cancer and other diseases.


At any given time, millions of T cells circulate throughout the human body, looking for potential invaders. Each of those T cells sports a different T cell receptor, which is specialized to recognize a foreign antigen.

To make it easier to understand how that army of T cells recognizes its targets, MIT Associate Professor Michael Birnbaum has developed tools that can be used to study huge numbers of these interactions at the same time.

Deciphering those interactions could eventually help researchers find new ways to reprogram T cells to target specific antigens, such as mutations found in a cancer patient’s tumor.

“T cells are so diverse in terms of what they recognize and what they do, and there’s been incredible progress in understanding this on an example-by-example basis. Now, we want to be able to understand the entirety of this process with some of the same level of sophistication that we understand the individual pieces. And we think that once we have that understanding, then we can be much better at manipulating it to positively affect disease,” Birnbaum says.

This approach could lead to improvements in immunotherapy to treat cancer, as well as potential new treatments for autoimmune disorders such as type 1 diabetes, or infections such as HIV and Covid-19.

Tackling difficult problems

Birnbaum’s interest in immunology developed early, when he was a high school student in Philadelphia. His school offered a program allowing students to work in research labs in the area, so starting in tenth grade, he did research in an immunology lab at Fox Chase Cancer Center.

“I got exposed to some of the same things I study now, actually, and so that really set me on the path of realizing that this is what I wanted to do,” Birnbaum says.

As an undergraduate at Harvard University, he enrolled in a newly established major known as chemical and physical biology. During an introductory immunology course, Birnbaum was captivated by the complexity and beauty of the immune system. He went on to earn a PhD in immunology at Stanford University, where he began to study how T cells recognize their target antigens.

T cell receptors are protein complexes found on the surfaces of T cells. These receptors are made of gene segments that can be mixed and matched to form up to 10¹⁵ different sequences. When a T cell receptor finds a foreign antigen that it recognizes, it signals the T cell to multiply and begin the process of eliminating the cells that display that antigen.
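A back-of-the-envelope calculation shows how a number that large arises from combinatorics. The segment counts below are approximate textbook figures, and the junctional factor, the extra variation created where segments are imprecisely joined, is only an order-of-magnitude placeholder:

```python
# Rough illustration of T cell receptor diversity (approximate segment
# counts; the junctional factor is an order-of-magnitude placeholder).
beta = 48 * 2 * 13       # V x D x J segment choices for the beta chain
alpha = 45 * 50          # V x J segment choices for the alpha chain
pairings = beta * alpha  # independent alpha/beta chain pairing
junctional = 1e8         # imprecise joining multiplies the diversity
print(f"segment combinations alone: ~{pairings:.0e}")               # ~3e+06
print(f"with junctional diversity:  ~{pairings * junctional:.0e}")  # ~3e+14
```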

As a graduate student, Birnbaum worked on building tools to study interactions between antigens and T cells at large scales. After finishing his PhD, he spent a year doing a postdoc in a neuroscience lab at Stanford, but quickly realized he wanted to get back to immunology.

In 2016, Birnbaum was hired as a faculty member in MIT’s Department of Biological Engineering and the Koch Institute for Integrative Cancer Research. He was drawn to MIT, he says, by the willingness of scientists and engineers at the Institute to work together to take on difficult, important problems.

“There’s a fearlessness to how people were willing to do that,” he says. “And the community, particularly the immunology community here, was second to none, both in terms of its quality, but also in terms of how supportive it was.”

Billions of targets

At MIT, Birnbaum’s lab focuses on T cell-antigen interactions, with the hope of eventually being able to reprogram those interactions to help fight diseases such as cancer. In 2022, he reported a new technique for analyzing these interactions at large scales.

Until then, most existing tools for studying the immune system were designed to allow for the study of a large pool of antigens exposed to one T cell (or B cell), or a large pool of immune cells encountering a small number of antigens. Birnbaum’s new method uses engineered viruses to present many different antigens to huge populations of immune cells, allowing researchers to screen huge libraries of both antigens and immune cells at the same time.

“The immune system works with millions of unique T cell receptors in each of us, and billions of possible antigen targets,” Birnbaum says. “In order to be able to really understand the immune system at scale, we spend a lot of time trying to build tools that can work at similar scales.”

This approach could enable researchers to eventually screen thousands of antigens against an entire population of B cells and T cells from an individual, which could reveal why some people naturally fight off certain viruses, such as HIV, better than others.

Using this method, Birnbaum also hopes to develop ways to reprogram T cells inside a patient’s body. Currently, T cell reprogramming requires T cells to be removed from a patient, genetically altered, and then reinfused into the patient. All of these steps could be skipped if instead the T cells were reprogrammed using the same viruses that Birnbaum’s screening technology uses. A company called Kelonia, co-founded by Birnbaum, is also working toward this goal.

To model T cell interactions at even larger scales, Birnbaum is now working with collaborators around the world to use artificial intelligence to make computational predictions of T cell-antigen interactions. The research team, which Birnbaum is leading, includes 12 labs from five countries, funded by Cancer Grand Challenges. The researchers hope to build predictive models that may help them design engineered T cells that could help treat many different diseases.

“The program is put together with a focus on whether these types of predictions are possible, but if they are, it could lead to much better understanding of what immunotherapies may work with different people. It could lead to personalized vaccine design, and it could lead to personalized T cell therapy design,” Birnbaum says.


Scientists use computational modeling to guide a difficult chemical synthesis

Using this new approach, researchers could develop drug compounds with unique pharmaceutical properties.


Researchers from MIT and the University of Michigan have discovered a new way to drive chemical reactions that could generate a wide variety of compounds with desirable pharmaceutical properties.

These compounds, known as azetidines, are characterized by four-membered rings that include nitrogen. Azetidines have traditionally been much more difficult to synthesize than five-membered nitrogen-containing rings, which are found in many FDA-approved drugs.

The reaction that the researchers used to create azetidines is driven by a photocatalyst that excites the molecules from their ground energy state. Using computational models that they developed, the researchers were able to predict compounds that can react with each other to form azetidines using this kind of catalysis.

“Going forward, rather than using a trial-and-error process, people can prescreen compounds and know beforehand which substrates will work and which ones won’t,” says Heather Kulik, an associate professor of chemistry and chemical engineering at MIT.

Kulik and Corinna Schindler, a professor of chemistry at the University of Michigan, are the senior authors of the study, which appears today in Science. Emily Wearing, until recently a graduate student at the University of Michigan, is the lead author of the paper. Other authors include University of Michigan postdoc Yu-Cheng Yeh, MIT graduate student Gianmarco Terrones, University of Michigan graduate student Seren Parikh, and MIT postdoc Ilia Kevlishvili.

Light-driven synthesis

Many naturally occurring molecules, including vitamins, nucleic acids, enzymes, and hormones, contain five-membered nitrogen-containing rings, also known as nitrogen heterocycles. These rings are also found in more than half of all FDA-approved small-molecule drugs, including many antibiotics and cancer drugs.

Four-membered nitrogen heterocycles, which are rarely found in nature, also hold potential as drug compounds. However, only a handful of existing drugs, including penicillin, contain four-membered heterocycles, in part because these four-membered rings are much more difficult to synthesize than five-membered heterocycles.

In recent years, Schindler’s lab has been working on synthesizing azetidines using light to drive a reaction that combines two precursors, an alkene and an oxime. These reactions require a photocatalyst, which absorbs light and passes the energy to the reactants, making it possible for them to react with each other.

“The catalyst can transfer that energy to another molecule, which moves the molecules into excited states and makes them more reactive. This is a tool that people are starting to use to make it possible to make certain reactions occur that wouldn’t normally occur,” Kulik says.

Schindler’s lab found that while this reaction sometimes worked well, other times it did not, depending on which reactants were used. They enlisted Kulik, an expert in developing computational approaches to modeling chemical reactions, to help them figure out how to predict when these reactions will occur.

The two labs hypothesized that whether a particular alkene and oxime will react together in a photocatalyzed reaction depends on a property known as the frontier orbital energy match. Electrons that surround the nucleus of an atom exist in orbitals, and quantum mechanics can be used to predict the shape and energies of these orbitals. For chemical reactions, the most important electrons are those in the outermost, highest energy (“frontier”) orbitals, which are available to react with other molecules.

Kulik and her students used density functional theory, which uses the Schrödinger equation to predict where electrons could be and how much energy they have, to calculate the orbital energy of these outermost electrons.

These energy levels are also affected by other groups of atoms attached to the molecule, which can change the properties of the electrons in the outermost orbitals.

Once those energy levels are calculated, the researchers can identify reactants that have similar energy levels when the photocatalyst boosts them into an excited state. When the excited states of an alkene and an oxime are closely matched, less energy is required to boost the reaction to its transition state — the point at which the reaction has enough energy to go forward to form products.
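In effect, the model turns this physical criterion into a fast screen: compute each reactant’s excited-state frontier-orbital energy once, then flag pairs whose energies lie within some tolerance. The sketch below shows that screening logic with invented energies and an assumed threshold; in the study, the energies come from density functional theory.

```python
# Schematic frontier-orbital energy-match screen. The energies (in eV)
# are invented for illustration, and the matching threshold is assumed.
alkenes = {"alkene_A": 2.9, "alkene_B": 3.4}
oximes = {"oxime_X": 3.0, "oxime_Y": 3.9}
THRESHOLD_EV = 0.3

for a, ea in alkenes.items():
    for o, eo in oximes.items():
        de = abs(ea - eo)
        verdict = "candidate pair" if de <= THRESHOLD_EV else "poor match"
        print(f"{a} + {o}: |dE| = {de:.1f} eV -> {verdict}")
```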

Accurate predictions

After calculating the frontier orbital energies for 16 different alkenes and nine oximes, the researchers used their computational model to predict whether 18 different alkene-oxime pairs would react together to form an azetidine. With the calculations in hand, these predictions can be made in a matter of seconds.

The researchers also modeled a factor that influences the overall yield of the reaction: a measure of how available the carbon atoms in the oxime are to participate in chemical reactions.

The model’s predictions suggested that some of these 18 reactions would not occur or would not give a high enough yield. However, the study also showed that a significant number of the reactions were correctly predicted to work.

“Based on our model, there’s a much wider range of substrates for this azetidine synthesis than people thought before. People didn’t really think that all of this was accessible,” Kulik says.

Of the 27 combinations that they studied computationally, the researchers tested 18 reactions experimentally, and they found that most of their predictions were accurate. Among the compounds they synthesized were derivatives of two drug compounds that are currently FDA-approved: amoxapine, an antidepressant, and indomethacin, a pain reliever used to treat arthritis.

This computational approach could help pharmaceutical companies predict molecules that will react together to form potentially useful compounds, before spending a lot of money to develop a synthesis that might not work, Kulik says. She and Schindler are continuing to work together on other kinds of novel syntheses, including the formation of compounds with three-membered rings.

“Using photocatalysts to excite substrates is a very active and hot area of development, because people have exhausted what you can do on the ground state or with radical chemistry,” Kulik says. “I think this approach is going to have a lot more applications to make molecules that are normally thought of as really challenging to make.”


Wireless receiver blocks interference for better mobile device performance

This novel circuit architecture cancels out unwanted signals at the earliest opportunity.


The growing prevalence of high-speed wireless communication devices, from 5G mobile phones to sensors for autonomous vehicles, is leading to increasingly crowded airwaves. This makes the ability to block interfering signals that can hamper device performance an even more important — and more challenging — problem.

With these and other emerging applications in mind, MIT researchers demonstrated a new millimeter-wave multiple-input-multiple-output (MIMO) wireless receiver architecture that can handle stronger spatial interference than previous designs. MIMO systems have multiple antennas, enabling them to transmit and receive signals from different directions. The researchers’ receiver senses and blocks spatial interference at the earliest opportunity, before unwanted signals have been amplified, which improves performance.

Key to this MIMO receiver architecture is a special circuit that can target and cancel out unwanted signals, known as a nonreciprocal phase shifter. By making a novel phase shifter structure that is reconfigurable, low-power, and compact, the researchers show how it can be used to cancel out interference earlier in the receiver chain.

Their receiver can block up to four times more interference than some similar devices. In addition, the interference-blocking components can be switched on and off as needed to conserve energy.

In a mobile phone, such a receiver could help mitigate signal quality issues that can lead to slow and choppy Zoom calling or video streaming.

“There is already a lot of utilization happening in the frequency ranges we are trying to use for new 5G and 6G systems. So, anything new we are trying to add should already have these interference-mitigation systems installed. Here, we’ve shown that using a nonreciprocal phase shifter in this new architecture gives us better performance. This is quite significant, especially since we are using the same integrated platform as everyone else,” says Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the Microsystems Technology Laboratories and Research Laboratory of Electronics (RLE), and the senior author of a paper on this receiver.

Reiskarimian wrote the paper with EECS graduate students Shahabeddin Mohin, who is the lead author, Soroush Araei, and Mohammad Barzgari, an RLE postdoc. The work was recently presented at the IEEE Radio Frequency Circuits Symposium and received the Best Student Paper Award.

Blocking interference

Digital MIMO systems have an analog and a digital portion. The analog portion uses antennas to receive signals, which are amplified, down-converted, and passed through an analog-to-digital converter before being processed in the digital domain of the device. In this case, digital beamforming is required to retrieve the desired signal.

But if a strong, interfering signal coming from a different direction hits the receiver at the same time as a desired signal, it can saturate the amplifier so the desired signal is drowned out. Digital MIMOs can filter out unwanted signals, but this filtering occurs later in the receiver chain. If the interference is amplified along with the desired signal, it is more difficult to filter out later.

“The output of the initial low-noise amplifier is the first place you can do this filtering with minimal penalty, so that is exactly what we are doing with our approach,” Reiskarimian says.

The researchers built and installed four nonreciprocal phase shifters immediately at the output of the first amplifier in each receiver chain, all connected to the same node. These phase shifters can pass signal in both directions and sense the angle of an incoming interfering signal. The devices can adjust their phase until they cancel out the interference.

The phase of these devices can be precisely tuned, so they can sense and cancel an unwanted signal before it passes to the rest of the receiver, blocking interference before it affects any other parts of the receiver. In addition, the phase shifters can follow signals to continue blocking interference if it changes location.
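The cancellation trick can be illustrated with a toy two-antenna model: an interferer arriving from a given angle reaches the two antennas with a known relative phase, so counter-rotating one path by that phase and subtracting nulls it, while a desired signal from another angle survives. This is a conceptual sketch only; the actual receiver performs the operation in analog circuitry with nonreciprocal phase shifters.

```python
import numpy as np

# Toy two-antenna null-steering model of phase-based interference
# cancellation (conceptual; all signals and angles are made up).
lam = 1.0                 # wavelength (arbitrary units)
d = lam / 2               # antenna spacing
t = np.linspace(0, 1, 1000)

def steering_phase(angle_deg: float) -> float:
    """Inter-antenna phase shift for a plane wave arriving at angle_deg."""
    return 2 * np.pi * d * np.sin(np.deg2rad(angle_deg)) / lam

interferer = np.exp(1j * 2 * np.pi * 5 * t)  # unwanted tone, from 30 degrees
desired = np.exp(1j * 2 * np.pi * 2 * t)     # wanted tone, from -10 degrees
phi_int = steering_phase(30)                 # phase the shifter locks onto

def cancel(signal, arrival_deg):
    rx0 = signal                                             # antenna 0
    rx1 = signal * np.exp(1j * steering_phase(arrival_deg))  # antenna 1
    return rx0 - rx1 * np.exp(-1j * phi_int)  # counter-rotate and subtract

print(f"residual interferer power: {np.mean(np.abs(cancel(interferer, 30))**2):.2e}")
print(f"surviving desired power:   {np.mean(np.abs(cancel(desired, -10))**2):.2f}")
```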

“If you start getting disconnected or your signal quality goes down, you can turn this on and mitigate that interference on the fly. Because ours is a parallel approach, you can turn it on and off with minimal effect on the performance of the receiver itself,” Reiskarimian adds.

A compact device

In addition to making their novel phase shifter architecture tunable, the researchers designed the phase shifters to take up less space on the chip and consume less power than typical nonreciprocal phase shifters.

Once the researchers had done the analysis to show their idea would work, their biggest challenge was translating the theory into a circuit that achieved their performance goals. At the same time, the receiver had to meet strict size restrictions and a tight power budget, or it wouldn’t be useful in real-world devices.

In the end, the team demonstrated a compact MIMO architecture on a 3.2-square-millimeter chip that could block signals that were up to four times stronger than what other devices could handle. Simpler than typical designs, their phase shifter architecture is also more energy efficient.

Moving forward, the researchers want to scale up their device to larger systems, as well as enable it to perform in the new frequency ranges utilized by 6G wireless devices. These frequency ranges are prone to powerful interference from satellites. In addition, they would like to adapt nonreciprocal phase shifters to other applications.

This research was supported, in part, by the MIT Center for Integrated Circuits and Systems.


Melissa Choi named director of MIT Lincoln Laboratory

With decades of experience working across the laboratory’s R&D areas, Choi brings a focus on collaboration, technical excellence, and unity.


Melissa Choi has been named the next director of MIT Lincoln Laboratory, effective July 1. Currently assistant director of the laboratory, Choi succeeds Eric Evans, who will step down on June 30 after 18 years as director.

Sharing the news in a letter to MIT faculty and staff today, Vice President for Research Ian Waitz noted Choi’s 25-year career of “outstanding technical and advisory leadership,” both at MIT and in service to the defense community.

“Melissa has a marvelous technical breadth as well as excellent leadership and management skills, and she has presented a compelling strategic vision for the Laboratory,” Waitz wrote. “She is a thoughtful, intuitive leader who prioritizes communication, collaboration, mentoring, and professional development as foundations for an organizational culture that advances her vision for Lab-wide excellence in service to the nation.”

Choi’s appointment marks a new chapter in Lincoln Laboratory’s storied history working to keep the nation safe and secure. As a federally funded research and development center operated by MIT for the Department of Defense, the laboratory has provided the government an independent perspective on critical science and technology issues of national interest for more than 70 years. Distinctive among national R&D labs, the laboratory specializes in both long-term system development and rapid demonstration of operational prototypes, to protect and defend the nation against advanced threats. In tandem with its role in developing technology for national security, the laboratory’s integral relationship with the MIT campus community enables impactful partnerships on fundamental research, teaching, and workforce development in critical science and technology areas.

“In a time of great global instability and fast-evolving threats, the mission of Lincoln Laboratory has never been more important to the nation,” says MIT President Sally Kornbluth. “It is also vital that the laboratory apply government-funded, cutting-edge technologies to solve critical problems in fields from space exploration to climate change. With her depth and breadth of experience, keen vision, and straightforward style, Melissa Choi has earned enormous trust and respect across the Lincoln and MIT communities. As Eric Evans steps down, we could not ask for a finer successor.”

Choi has served as assistant director of Lincoln Laboratory since 2019, with oversight of five of the Lab’s nine technical divisions: Biotechnology and Human Systems, Homeland Protection and Air Traffic Control, Cyber Security and Information Sciences, Communication Systems, and ISR and Tactical Systems. Engaging deeply with the needs of the broader defense community, Choi served for six years on the Air Force Scientific Advisory Board, with a term as vice chair, and was appointed to the DoD’s Threat Reduction Advisory Committee. She is currently a member of the national Defense Science Board’s Permanent Subcommittee on Threat Reduction.

Having dedicated her entire career to Lincoln Laboratory, Choi says her long tenure reflects a commitment to the lab’s work and community.

“Through my career, I have been fortunate to have had incredibly innovative and motivated people to collaborate with as we solve critical national security challenges,” Choi says. “Continuing to work with such a strong, laboratory-wide team as director is one of the most exciting aspects of the job for me.”

Success through collaboration

Choi came to Lincoln Laboratory as a technical staff member in 1999, with a doctoral degree in applied mathematics. As she progressed to lead research teams, including the Systems and Analysis Group and then the Active Optical Systems Group, Choi learned the value of pooling expertise from researchers across the laboratory.

“I was able to shift between a lot of different projects very early on in my career, from radar systems to sensor networks. Because I wasn’t an expert at the time in any one of those fields, I learned to reach out to the many different experts at the laboratory,” Choi says.

Choi maintained that mindset through all of her roles at the laboratory, including as head of the Homeland Protection and Air Traffic Control Division, which she led from 2014 to 2019. In that role, she helped bring together diverse technology and human systems expertise to establish the Humanitarian Assistance and Disaster Relief Group. Among other achievements, the group provided support to FEMA and other emergency response agencies after the 2017 hurricane season caused unprecedented flooding and destruction across swaths of Texas, Florida, the Caribbean, and Puerto Rico.

“We were able to rapidly prototype and field multiple technologies to help with the recovery efforts,” Choi says. “It was an amazing example of how we can apply our national security focus to other critical national problems.”

Outside of her technical and advisory achievements, Choi has made an impact at Lincoln Laboratory through her commitments to an inclusive workplace. In 2020, she co-led the study “Preventing Discrimination and Harassment and Promoting an Inclusive Culture at MIT Lincoln Laboratory.” The work was part of a longstanding commitment to supporting colleagues in the workplace through extensive mentoring and participation in employee resource groups.

“I have felt a sense of belonging at the laboratory since the minute I came here, and I’ve had the benefit of support from leaders, mentors, and advocates since then. Improving support systems is very important to me,” says Choi, who will be the first woman to lead Lincoln Laboratory. “Everyone should be able to feel that they belong and can thrive.”

When the Covid-19 pandemic hit, Choi helped the laboratory navigate the disruptions — with its operations deemed essential — which she says taught her a lot about leading through adversity.

“We solve hard problems at the laboratory all the time, but to get thrown into a problem that we had never seen before was a learning experience,” Choi says. “We saw the entire lab come together, from leadership to each of the divisions and departments.”

That synergy has also helped Choi form strategic partnerships within and outside of the laboratory to enhance its mission. Drawing on her knowledge of the laboratory’s capabilities and its history of developing impactful systems for NASA and NOAA, Choi recently led the formation of a new Civil Space Systems and Technology Office.

“We were seeing this convergence between Department of Defense and civilian space initiatives, as going to the Moon, Mars, and the cislunar area [between the Earth and the Moon] has become a big emphasis for the entire country generally,” Choi explains. “It seemed like a good time for us to pull those two sides together and grow our NASA portfolio. It gives us a great opportunity to collaborate with MIT centrally, and it ties in with our other strategic directions.”

Building on success

Choi believes her trajectory through the technical ranks of Lincoln Laboratory will help her lead it now.

“That experience gives me a view into what it’s like at multiple levels of the laboratory,” Choi says. “I’ve seen what’s worked and what hasn’t worked, and I’ve learned from different perspectives and leadership styles. Strong leaders are crucial, but it’s important to recognize that the bulk of the work gets done by the technical, support, and administrative employees across our divisions, departments, and offices. Remembering being an early staff member helps you understand how hard and exciting the work is, and also how critical those contributions are for our mission.”

Choi says she is also looking forward to expanding the laboratory’s collaboration with MIT’s main campus.

“So many areas, from AI to climate to space, have opportunity for us to come together,” Choi says. “We also have some great models of progress, like the Beaver Works Center or the Department of the Air Force – MIT Artificial Intelligence Accelerator program, that we can build from. Everyone here is very excited about doing that, and it will absolutely be a priority for me.”

Ultimately, Choi plans to lead Lincoln Laboratory using the approach that’s proven successful throughout her career.

“I believe very much that I should not be the smartest person in the room, and I rely on the smart people working with me,” Choi says. “I’m part of a team and I work with a team to lead. That has always been my style: Set a vision and goals, and empower and support the people I work with to make decisions and build on that strategy.”


A home away from a homeland

Erica Caple James’ new book examines the rise and struggles of a community organization helping Haitians settle in Boston.


When the Haitian Multi-Service Center opened in the Dorchester neighborhood of Boston in 1978, it quickly became a valued resource. Haitian immigrants likened it to Ellis Island, Plymouth Rock, and Haiti’s own Citadel, a prominent fort. The center, originally located in an old Victorian convent house in St. Leo Parish, provided health care, adult education, counseling, immigration and employment services, and more.

Such services require substantial funding. Before long, Boston’s Cardinal Bernard Francis Law merged the Haitian Multi-Service Center into the Greater Boston Catholic Charities network, whose deeper pockets kept the center intact. But Law required that Catholic welfare programs promote church doctrine: Catholic HIV/AIDS prevention programs, for instance, began emphasizing only abstinence, not contraception. Meanwhile, the center also received state and federal funding that required grantees to promote medical “best practices” that conflicted with church doctrine.

In short, even while the center served as a community beacon, there were tensions around its funding and function — which in turn reflect bigger tensions about our civic fabric.

“These conflicts are about what the role of government is and where the line is, if there is a line, between public and private, and who ultimately is responsible for the health and well-being of individuals, families, and larger populations,” says MIT scholar Erica Caple James, who has long studied nongovernmental programs.

Now James has written a new book on the subject, “Life at the Center: Haitians and Corporate Catholicism in Boston,” published this spring by the University of California Press and offering a meticulous study of the Haitian Multi-Service Center that illuminates several issues at once.

In it, James, the Professor of Medical Anthropology and Urban Studies in MIT’s Department of Urban Studies and Planning, carefully examines the relationship between the Haitian community, the Catholic Church, and the state, analyzing how the church’s “pastoral power” is exercised and to whose benefit. The book also chronicles the work of the center’s staff, revealing how everyday actions are connected to big-picture matters of power and values. And the book explores larger questions about community, belonging, and finding meaning in work and life — things not unique to Boston’s Haitian Americans but made visible in this study.

Who makes the rules?

Trained as a psychiatric anthropologist, James has studied Haiti since the 1990s; her 2010 book “Democratic Insecurities” examined post-trauma aid programs in Haiti. James was asked to join the Haitian Multi-Service Center’s board in 2005, and served until 2010. She developed the new book as a study of a community in which she was participating.

Over several decades, Boston’s Haitian American population has become one of the city’s most significant immigrant communities. Haitians fleeing violence and insecurity often gained a foothold in the city, especially in the Dorchester and Mattapan neighborhoods as well as some suburbs. The Haitian Multi-Service Center became integral to the lives of many people trying to gain stability and prosperity. And, from residential clergy to those in need of emergency shelter, people were always at the site.

As James writes, the center “literally was a home for many stakeholders, and for others, a home away from a homeland left behind.”

Church support for the center worked partly because many Haitians felt aligned with the church, attending services and Catholic schools; in turn the church provided uniquely substantial support for the Haitian American community.

That also meant some high-profile issues were resolved according to church doctrine. For example, the center’s education efforts about HIV/AIDS transmission did not include contraception, due to the church’s emphasis on abstinence — which many workers considered less effective. Some staff members would even step outside the center to distribute condoms to community members, thus not violating policy.

“We started as a grassroots organization. … Now we have a church making decisions for the community,” said the former director of the center’s HIV/AIDS prevention programming. In 1996, the center’s adult literacy staff resigned en masse over policy differences, asserting in a memo that the church “has assumed a proprietary role over our work in the Haitian community.”

Coalition, not consensus

Another policy tension surrounding Catholic charities emerged after same-sex marriage became legal in Massachusetts in 2004. In 2005, a reporter revealed that over the previous 18 years, the church had facilitated 13 adoptions of difficult-to-place children with gay couples in the state. After the practice was publicized, the church announced in 2006 that its century of adoption work would end, so as not to violate either church or state laws.

Ultimately, James says, “There are structural dimensions that were baked in, which almost inevitably produced tensions at the institutional or organizational level.”

And yet, as James chronicles attentively, there was hardly consensus about the church’s role in the center. The center’s Haitian American community members were a coalition, not a bloc; some welcomed the church’s presence at the center for spiritual or practical reasons, or both.

“Many Haitians felt there was value from [the center] being independent, but there are others who felt it would be difficult to maintain otherwise,” James says.

Some of the community members even expressed lingering respect for Boston’s Cardinal Law, a central figure of the Catholic Church abuse scandal that emerged in 2002; Law had personally championed the charitable work the church had been performing for Haitians in Boston. In this light, another question emerging from the book, James says, is, “What encourages people to remain loyal to an imperfect institution?”

Keepers of the flame

Some of the people most loyal to the Haitian Multi-Service Center were its staff, whose work James carefully details. Some staff had themselves previously benefited from the center’s services. The institution’s loyal workers, James writes, served as “keepers of the flame,” understanding its history, building community connections, and extending their own identities through good works for others.

For these kinds of institutions, James notes, “They seem most successful when there is transparency, solidarity, a strong sense of purpose. … It [shows] why we do our jobs and what we do to find meaning.”

“Life at the Center” has generated positive feedback from other scholars. As Linda Barnes, a professor at the Boston University School of Medicine, has stated, “One could read ‘Life at the Center’ multiple times and, with each reading, encounter new dimensions. Erica Caple James’s work is exceptional.”

What of the Haitian Multi-Service Center today? In 2006, it was moved and is now housed in Catholic Charities’ Yawkey Center, along with other entities. Some of the workers and community members, James notes in the book, consider the center to have died over the years, compared to its stand-alone self. Others simply consider it transformed. Many have strong feelings, one way or another, about the place that helped orient them as they forged new lives.

As James writes, “It has been difficult to reconcile the intense emotions shared by many of the Center’s stakeholders — confusion, anger, disbelief, and frustration, still expressed with intensity even decades later — alongside reminiscences of love, joy, laughter, and care in rendering service to Haitians and others in need.”

As “Life at the Center” makes clear, that intensity stems from the shared mission many people had, of finding their way in a new and unfamiliar country, in the company of others. And as James writes, in concluding the book, “fulfillment of a mission is never solely about single acts of individuals, but rather the communal striving toward aiding, educating, empowering, and instilling hope in others.”


Owen Coté, military technology expert and longtime associate director of the Security Studies Program, dies at 63

An influential national expert on undersea warfare, Coté is remembered as “the heart and soul of SSP.”


Owen Coté PhD ’96, a principal research scientist with the MIT Security Studies Program (SSP), passed away on June 8 after battling cancer. He joined SSP in 1997 as associate director, a role he held for the rest of his life. He guided the program through the tenures of three directors, each of whom profited from his wise counsel, leadership skills, and sense of responsibility.

“Owen was an indomitable scholar and leader of the field of security studies,” says M. Taylor Fravel, the Arthur and Ruth Sloan Professor of Political Science and the director of SSP. “Owen was the heart and soul of SSP and a one-of-a-kind scholar, colleague, and friend. He will be greatly missed by us all.”

Having earned his doctorate in political science at MIT, Coté embodied the program’s professional and scholarly values. Through his research and his teaching, he nurtured three of the program’s core interests — the study of nuclear weapons and strategy, the study of the relationship between technological change and military practice, and the application of organization theory to understanding the behavior of military institutions.

He was the author of “The Third Battle: Innovation in the U.S. Navy’s Silent Cold War Struggle with Soviet Submarines,” a book analyzing the sources of the U.S. Navy’s success in its Cold War antisubmarine warfare effort, and a co-author of “Avoiding Nuclear Anarchy: Containing the Threat of Loose Russian Nuclear Weapons and Fissile Material.” He also wrote on the future of naval doctrine, nuclear force structure issues, and the threat of weapons of mass destruction terrorism.

He was an influential national expert on undersea warfare. According to Ford International Professor of Political Science Barry Posen, Coté’s colleague for several decades who served as SSP director from 2006 to 2019, “Owen is credited, among others, with helping the U.S. Navy see the wisdom of transforming four ‘surplus’ Ohio-class ballistic missile submarines into cruise missile platforms that serve the Navy and the country to this day.”

Coté’s principal interest in recent years was maritime “war in three dimensions” — surface, air, and subsurface — and how they interacted and changed with advancing technology. He recently completed a book manuscript on this complex history. At the time of his death, he was also preparing a manuscript that analyzed the sources of innovative military doctrine, using cases that compared U.S. Navy responses to moments in the Cold War when U.S. leaders worried about the vulnerability of land-based missiles to Soviet attack.

“No one in our field was as knowledgeable about military organizations and operations, the politics that drives security policy, and relevant theories of international relations as Owen,” according to Harvey Sapolsky, MIT Professor of Public Policy and Organization, Emeritus, and SSP director from 1989 to 2006. “And no one was more willing to share that knowledge to help others in their work.”

This broad portfolio of expertise served him well as co-editor and ultimately editor of the journal International Security, the longtime flagship journal of the security studies subfield. His colleague and editor-in-chief of International Security Steven Miller reflects that “Owen combined a brilliant analytic mind, a mischievous sense of humor, and a passion for his work. His contribution to International Security was immense and will be missed, as I relied on his judgement with total confidence.”

Coté believed in sharing his scholarly findings with the policy community. With Cindy Williams, a principal research scientist at SSP, he helped organize and run a series of national security simulations for military officers and Department of Defense (DoD) civilians in the national security studies program at the Elliott School of International Affairs at George Washington University. He regularly produced major conferences at MIT, with several on the U.S. nuclear attack submarine force perhaps the most influential.

He was passionate about nurturing younger scholars. In recent years, he led programs for visiting fellows at SSP: the Nuclear Security Fellows Program and the Grand Strategy, Security, and Statecraft Fellows Program.

Caitlin Talmage PhD ’11, one of his former students and now an associate professor of political science at MIT, describes Coté as “a devoted mentor and teacher. His classes sparked many dissertations, and he engaged deeply with students and their research, providing detailed feedback, often over steak dinners. Despite his towering expertise in the field of security studies, Owen was always patient, generous, and respectful toward his students. He continued to advise many even after graduation as they launched their careers, myself included. He will be profoundly missed.”

Phil Haun PhD ’10, also one of Coté’s students and now professor and director of the Rosenberg Deterrence Institute at the Naval War College, describes Coté as “a mentor, colleague, and friend to a generation of MIT SSP graduate students,” noting that “arguably his greatest achievement and legacy are the scholars he nurtured and loved.” 

As Haun notes, “Owen’s expertise, with a near encyclopedic knowledge of innovations in military technology, coupled with a gregarious personality and willingness to share his time and talent, attracted dozens of students to join in a journey to study important issues of international security. Owen’s passion for his work and his eagerness to share a meal and a drink with those with similar interests encouraged those around him. The degree to which so many MIT SSP alums have remained connected to the program is testament to the caring community of scholars that Owen helped create.”

Posen describes Coté as a “larger-than-life figure and the most courageous and determined human being I have ever met. He could light up a room when he was among people he liked, and he liked most people. He was in the office suite nearly every day of the week, including weekends, and his door was usually open. Professors, fellows, and graduate students would drop by to seek his counsel on issues of every kind, and it was not uncommon for an expected 10-minute interlude to turn into a one-hour seminar. He had a truly unique ability to understand the interaction of technology and military operations. I have never met anyone who could match him in this ability. He also knew how to really enjoy life. It is an incredible loss on many, many levels.”

As Miller notes, “I got to know Owen while serving as supervisor of his senior thesis at Harvard College in 1981–82. That was the beginning of a lifelong friendship and happily our careers remained entangled for the remainder of his life. I will miss the wonderful, decent human being, the dear friend, the warm and committed colleague. He was a brave soul, suffering much, overcoming much, and contributing much. It is deeply painful to lose such a friend.”

“Owen was kind and generous, and though he endured much, he never complained,” says Sapolsky. “He gave wonderfully organized and insightful talks, improved the writing of others with his editing, and always gave sound advice to those who were wise enough to seek it.”

After graduating from Harvard College in 1982 and before returning to graduate school, Coté worked at the Hudson Institute and the Center for Naval Analyses. He received his PhD in 1996 from MIT, where he specialized in U.S. defense policy and international security affairs.

Before joining SSP in 1997, he served as assistant director of the International Security Program at Harvard's Center for Science and International Affairs (now the Belfer Center). 

He was the son of Ann F. Coté and the late Owen R. Coté Sr. His family wrote in his obituary that at home, he was always up for a good discussion about Star Wars or Harry Potter movies. Motorcycle magazines were a lifelong passion. He was a devoted uncle to his nieces Eliza Coté, Sofia Coté, and Livia Coté, as well as his self-proclaimed “fake” niece and nephew, Sam and Nina Harrison.

In addition to his mother and his nieces, he is survived by his siblings: Mark T. Coté of Blacksburg, Virginia; Peter H. Coté and his wife Nina of Topsfield, Massachusetts; and Suzanne Coté Curtiss and her husband Robin of Cape Neddick, Maine.


What happens during the first moments of butterfly scale formation

New findings could help engineers design materials for light and heat management.


A butterfly’s wing is covered in hundreds of thousands of tiny scales like miniature shingles on a paper-thin roof. A single scale is as small as a speck of dust yet surprisingly complex, with a corrugated surface of ridges that help to wick away water, manage heat, and reflect light to give a butterfly its signature shimmer.

MIT researchers have now captured the initial moments during a butterfly’s metamorphosis, as an individual scale begins to develop this ridged pattern. The researchers used advanced imaging techniques to observe the microscopic features on a developing wing, while the butterfly transformed in its chrysalis.

The team continuously imaged individual scales as they grew out from the wing’s membrane. These images reveal for the first time how a scale’s initially smooth surface begins to wrinkle to form microscopic, parallel undulations. The ripple-like structures eventually grow into finely patterned ridges, which define the functions of an adult scale.

The researchers found that the scale’s transition to a corrugated surface is likely a result of “buckling” — a general mechanism that describes how a smooth surface wrinkles as it grows within a confined space.

“Buckling is an instability, something that we usually don’t want to happen as engineers,” says Mathias Kolle, associate professor of mechanical engineering at MIT. “But in this context, the organism uses buckling to initiate the growth of these intricate, functional structures.”

The team is working to visualize more stages of butterfly wing growth in hopes of revealing clues that could guide the design of advanced functional materials in the future.

“Given the multifunctionality of butterfly scales, we hope to understand and emulate these processes, with the aim of sustainably designing and fabricating new functional materials. These materials would exhibit tailored optical, thermal, chemical, and mechanical properties for textiles, building surfaces, vehicles — really, for generally any surface that needs to exhibit characteristics that depend on its micro- and nanoscale structure,” Kolle adds.

The team has published their results in a study appearing today in the journal Cell Reports Physical Science. The study’s co-authors include first author and former MIT postdoc Jan Totz, joint first author and postdoc Anthony McDougal, graduate student Leonie Wagner, former postdoc Sungsam Kang, professor of mechanical engineering and biomedical engineering Peter So, professor of mathematics Jörn Dunkel, and professor of materials physics and chemistry Bodo Wilts of the University of Salzburg.

A live transformation

In 2021, McDougal, Kolle, and their colleagues developed an approach to continuously capture microscopic details of wing growth in a butterfly during its metamorphosis. Their method involved carefully cutting through the insect’s paper-thin chrysalis and peeling away a small square of cuticle to reveal the wing’s growing membrane. They placed a small glass slide over the exposed area, then used a microscope technique developed by team member Peter So to capture continuous images of scales as they grew out of the wing membrane.

They applied the method to observe Vanessa cardui, a butterfly commonly known as a Painted Lady, chosen because its scale architecture is common to most lepidopteran species. They observed that Painted Lady scales grew along a wing membrane in precise, overlapping rows, like shingles on a rooftop. Those images provided scientists with the most continuous visualization of live butterfly wing scale growth at the microscale to date.

Four images show the butterfly, its wing scales, the ridges of a single scale, and an extreme close-up of a few ridges.

In their new study, the team used the same approach to focus on a specific time window during scale development, to capture the initial formation of the finely structured ridges that run along a single scale in a living butterfly. Scientists know that these ridges, which run parallel to each other along the length of a single scale, like stripes in a patch of corduroy, enable many of the functions of the wing scales.

Since little is known about how these ridges are formed, the MIT team aimed to record the continuous formation of ridges in a live, developing butterfly, and decipher the organism’s ridge formation mechanisms.

“We watched the wing develop over 10 days, and got thousands of measurements of how the surfaces of scales changed on a single butterfly,” McDougal says. “We could see that early on, the surface is quite flat. As the butterfly grows, the surface begins to pop up a little bit, and then at around 41 percent of development, we see this very regular pattern of completely popped up protoridges. This whole process happens over about five hours and lays the structural foundation for the subsequent expression of patterned ridges.”

Pinned down

What might be causing the initial ridges to pop up in precise alignment? The researchers suspected that buckling might be at play. Buckling is a mechanical process by which a material bows in on itself as it is subjected to compressive forces. For instance, an empty soda can buckles when squeezed from the top down. A material can also buckle as it grows, if it is constrained, or pinned in place.

Scientists have noted that, as the cell membrane of a butterfly’s scale grows, it is effectively pinned in certain places by actin bundles — long filaments that run under the growing membrane and act as a scaffold to support the scale as it takes shape. Scientists have hypothesized that actin bundles constrain a growing membrane, similar to ropes around an inflating hot air balloon. As the butterfly’s wing scale grows, they proposed, it would bulge out between the underlying actin filaments, buckling in a way that forms a scale’s initial, parallel ridges.

To test this idea, the MIT team looked to a theoretical model that describes the general mechanics of buckling. They incorporated image data into the model, such as measurements of a scale membrane’s height at various early stages of development, and various spacings of actin bundles across a growing membrane. They then ran the model forward in time to see whether its underlying principles of mechanical buckling would produce the same ridge patterns that the team observed in the actual butterfly.

“With this modeling, we showed that we could go from a flat surface to a more undulating surface,” Kolle says. “In terms of mechanics, this indicates that buckling of the membrane is very likely what’s initiating the formation of these amazingly ordered ridges.”

“We want to learn from nature, not only how these materials function, but also how they’re formed,” McDougal says. “If you want to, for instance, make a wrinkled surface, which is useful for a variety of applications, this gives you two really easy knobs to tune, to tailor how those surfaces are wrinkled. You could either change the spacing of where that material is pinned, or you could change the amount of material that you grow between the pinned sections. And we saw that the butterfly is using both of these strategies.”
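For readers who want a feel for the mechanics, here is a back-of-the-envelope sketch of that picture (our illustration, using generic thin-film buckling theory rather than the study’s actual model). Treat the stretch of membrane between two neighboring actin bundles, a distance $d$ apart, as a thin strip whose ends stay pinned while it grows to an arc length $d(1+\varepsilon)$. Approximating the buckled shape as a half sine wave $w(x) = A\sin(\pi x/d)$, the extra material is absorbed by out-of-plane deflection:

$$d\,(1+\varepsilon) \approx \int_0^d \sqrt{1 + w'(x)^2}\,dx \approx d\left(1 + \frac{\pi^2 A^2}{4d^2}\right) \quad\Longrightarrow\quad A \approx \frac{2d}{\pi}\sqrt{\varepsilon}.$$

The two “knobs” McDougal describes appear directly in this toy calculation: the pin spacing $d$ sets the ripple wavelength, while the amount of material grown between pins, $\varepsilon$, sets how tall the ripples become.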

This research was supported, in part, by the International Human Frontier Science Program Organization, the National Science Foundation, the Humboldt Foundation, and the Alfred P. Sloan Foundation.


Startup aims to transform the power grid with superconducting transmission lines

VEIR, founded by alumnus Tim Heidel, has developed technology that can move more power over long distances, with the same footprint as traditional lines.


Last year in Woburn, Massachusetts, a power line was deployed across a 100-foot stretch of land. Passersby wouldn’t have found much of interest in the installation: The line was supported by standard utility poles, the likes of which most of us have driven past countless times. In fact, the familiarity of the sight is a key part of the technology’s promise.

The lines are designed to transport five to 10 times the power of conventional transmission lines, using essentially the same footprint and voltage level. That capability will be key to overcoming the regulatory hurdles and community opposition that have made adding transmission capacity nearly impossible across large swaths of the globe, particularly in America and Europe, where new transmission plays a vital role in the shift to renewable energy and in the resilience of the grid.

The lines are the product of years of work by the startup VEIR, which was co-founded by Tim Heidel ’05, SM ’06, SM ’09, PhD ’10. They use superconducting cables and a proprietary cooling system that will enable initial transmission capacity of up to 400 megawatts and, in future versions, up to several gigawatts.

“We can deploy much higher power levels at much lower voltage, and so we can deploy the same high power but with a footprint and visual impact that is far less intrusive, and therefore can overcome a lot of the public opposition as well as siting and permitting barriers,” Heidel says.

VEIR’s solution comes at a time when more than 10,000 renewable energy projects at various stages of development are seeking permission to connect to U.S. grids. The White House has said the U.S. must more than double existing regional transmission capacity in order to reach 2035 decarbonization goals.

All of this comes as electricity demand is skyrocketing amid the increasing use of data centers and AI, and the electrification of everything from passenger vehicles to home heating systems.

Despite those trends, building high-power transmission lines remains stubbornly difficult.

“Building high-power transmission infrastructure can take a decade or more, and there’s been quite a few examples of projects that folks have had to abandon because they realize that there’s just so much opposition, or there’s too much complexity to pull it off cost effectively,” Heidel says. “We can drop down in voltage but carry the same amount of power because we can build systems that operate at much higher current levels, and that’s how our lines are able to melt into the background and avoid the same opposition.”

Heidel says VEIR has built a pipeline of interested customers including utilities, data center operators, industrial companies, and renewable energy developers. VEIR is aiming to complete its first commercial-scale pilot carrying high power in 2026.

A career in energy

Over more than a decade at MIT, Heidel went from learning about the fundamentals of electrical engineering to studying the electric grid and the power sector more broadly. That journey included earning a bachelor’s, master’s, and PhD from MIT’s Department of Electrical Engineering and Computer Science as well as a master’s in MIT’s Technology and Policy Program, which he earned while working toward his PhD.

“I got the energy bug and started to focus exclusively on energy and climate in graduate school,” Heidel says.

Following his PhD, Heidel was named research director of MIT’s Future of the Electric Grid study, which was completed in 2011.

“That was a fantastic opportunity at the outset of my career to survey the entire landscape and understand challenges facing the power grid and the power sector more broadly,” Heidel says. “It gave me a good foundation for understanding the grid, how it works, who’s involved, how decisions get made, how expansion works, and it looked out over the next 30 years.”

After leaving MIT, Heidel worked at the Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E) and then at Bill Gates’ Breakthrough Energy Ventures (BEV) investment firm, where he continued studying transmission.

“Just about every single decarbonization scenario and study that’s been published in the last two decades concludes that to achieve aggressive greenhouse gas emissions reductions, we’re going to have to double or triple the scale of power grids around the world,” Heidel says. “But when we looked at the data on how fast grids were being expanded, the ease with which transmission lines could be built, the cost of building new transmission, just about every indicator was heading in the wrong direction. Transmission was getting more expensive over time and taking longer to build. We desperately need to find a new solution.”

Unlike traditional transmission lines made from steel and aluminum, VEIR’s transmission lines leverage decades of progress in the development of high-temperature superconducting tapes and other materials. Some of that progress has been driven by the nuclear fusion industry, which incorporates superconducting materials into some of its reactor designs.

But the core innovation at VEIR is the cooling system. VEIR co-founder and advisor Steve Ashworth developed the rough idea for the cooling system more than 15 years ago at Los Alamos National Laboratory as part of a larger Department of Energy-funded research project. When the project was shut down, the idea was largely forgotten.

Heidel and others at Breakthrough Energy Ventures became aware of the innovation in 2019 while researching transmission. Today, VEIR’s system is passively cooled with nitrogen, which runs through a vacuum-insulated pipe that surrounds a superconducting cable; heat exchange units on some transmission towers help reject the absorbed heat. Liquid nitrogen boils at roughly minus 196 degrees Celsius, cold enough to hold high-temperature superconducting tapes below the critical temperature at which they conduct with zero electrical resistance.

Heidel says transmission lines designed to carry that much power are typically far bigger than VEIR’s design, and other attempts at shrinking the footprint of high-power lines were limited to short distances underground.

“High power requires high voltage, and high voltage requires tall towers and wide right of ways, and those tall towers and those wide right of ways are deeply unpopular,” Heidel says. “That is a universal truth across just about the entire world.”

Moving power around the world

VEIR’s first alternating-current (AC) overhead product line supports transmission capacities of up to 400 megawatts at voltages of up to 69 kilovolts, and the company plans to scale to higher-voltage, higher-power products in the future, including direct-current (DC) lines.
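To get a rough sense of what those numbers imply (our arithmetic, assuming 69 kilovolts is the conventional line-to-line rating of a three-phase AC line), power, voltage, and current are related by $P = \sqrt{3}\,V_{LL}\,I$, so carrying 400 megawatts at 69 kilovolts requires a current on the order of

$$I = \frac{P}{\sqrt{3}\,V_{LL}} = \frac{400 \times 10^6\ \mathrm{W}}{\sqrt{3} \times 69 \times 10^3\ \mathrm{V}} \approx 3{,}300\ \mathrm{A}$$

per phase, several times the few hundred amperes a conventional 69-kilovolt conductor typically carries. That current gap is what the superconducting cable is designed to close.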

VEIR will sell its equipment to the companies installing transmission lines, with a primary focus on the U.S. market.

More broadly, Heidel believes VEIR’s technology is needed as soon as possible to meet rising electricity demand and to connect new renewable energy projects around the globe.