General news from MIT (Massachusetts Institute of Technology)

Here you will find the recent daily general news from MIT (Massachusetts Institute of Technology).

MIT News
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
New materials could boost the energy efficiency of microelectronics

By stacking multiple active components based on new materials on the back end of a computer chip, this new approach reduces the amount of energy wasted during computation.


MIT researchers have developed a new fabrication method that could enable the production of more energy-efficient electronics by stacking multiple functional components on top of an existing circuit.

In traditional circuits, logic devices that perform computation, like transistors, and memory devices that store data are built as separate components, forcing data to travel back and forth between them, which wastes energy.

This new electronics integration platform allows scientists to fabricate transistors and memory devices in one compact stack on a semiconductor chip. This eliminates much of that wasted energy while boosting the speed of computation.

Key to this advance is a newly developed material with unique properties and a more precise fabrication approach that reduces the number of defects in the material. This allows the researchers to make extremely tiny transistors with built-in memory that can perform faster than state-of-the-art devices while consuming less electricity than similar transistors.

By improving the energy efficiency of electronic devices, this new approach could help reduce the burgeoning electricity consumption of computation, especially for demanding applications like generative AI, deep learning, and computer vision tasks.

“We have to minimize the amount of energy we use for AI and other data-centric computation in the future because it is simply not sustainable. We will need new technology like this integration platform to continue that progress,” says Yanjie Shao, an MIT postdoc and lead author of two papers on these new transistors.

The new technique is described in two papers (one invited) that were presented at the IEEE International Electron Devices Meeting. Shao is joined on the papers by senior authors Jesús del Alamo, the Donner Professor of Engineering in the MIT Department of Electrical Engineering and Computer Science (EECS); Dimitri Antoniadis, the Ray and Maria Stata Professor of Electrical Engineering and Computer Science at MIT; as well as others at MIT, the University of Waterloo, and Samsung Electronics.

Flipping the problem

Standard CMOS (complementary metal-oxide semiconductor) chips traditionally have a front end, where the active components like transistors and capacitors are fabricated, and a back end that includes wires called interconnects and other metal bonds that connect components of the chip.

But some energy is lost when data travel between these bonds, and slight misalignments can hamper performance. Stacking active components would reduce the distance data must travel and improve a chip’s energy efficiency.

Typically, it is difficult to stack silicon transistors on a CMOS chip because the high temperature required to fabricate additional devices on the front end would destroy the existing transistors underneath.

The MIT researchers turned this problem on its head, developing an integration technique to stack active components on the back end of the chip instead.

“If we can use this back-end platform to put in additional active layers of transistors, not just interconnects, that would make the integration density of the chip much higher and improve its energy efficiency,” Shao explains.

The researchers accomplished this using a new material, amorphous indium oxide, as the active channel layer of their back-end transistor. The active channel layer is where the transistor’s essential functions take place.

Due to the unique properties of indium oxide, they can “grow” an extremely thin layer of this material at a temperature of only about 150 degrees Celsius on the back end of an existing circuit without damaging the device on the front end.

Perfecting the process

The researchers carefully optimized the fabrication process to minimize the number of defects in a layer of indium oxide that is only about 2 nanometers thick.

A few defects, known as oxygen vacancies, are necessary for the transistor to switch on, but with too many defects it won’t work properly. This optimized fabrication process allows the researchers to produce an extremely tiny transistor that operates rapidly and cleanly, eliminating much of the additional energy required to switch a transistor between off and on.

Building on this approach, they also fabricated back-end transistors with integrated memory that are only about 20 nanometers in size. To do this, they added a layer of material called ferroelectric hafnium-zirconium-oxide as the memory component.

These compact memory transistors demonstrated switching speeds of only 10 nanoseconds, hitting the limit of the team’s measurement instruments. This switching also requires much lower voltage than similar devices, reducing electricity consumption.

And because the memory transistors are so tiny, the researchers can use them as a platform to study the fundamental physics of individual units of ferroelectric hafnium-zirconium-oxide.

“If we can better understand the physics, we can use this material for many new applications. The energy it uses is very minimal, and it gives us a lot of flexibility in how we can design devices. It really could open up many new avenues for the future,” Shao says.

The researchers also worked with a team at the University of Waterloo to develop a model of the performance of the back-end transistors, which is an important step before the devices can be integrated into larger circuits and electronic systems.

In the future, they want to build upon these demonstrations by integrating back-end memory transistors onto a single circuit. They also want to enhance the performance of the transistors and study how to more finely control the properties of ferroelectric hafnium-zirconium-oxide.

“Now, we can build a platform of versatile electronics on the back end of a chip that enable us to achieve high energy efficiency and many different functionalities in very small devices. We have a good device architecture and material to work with, but we need to keep innovating to uncover the ultimate performance limits,” Shao says.

This work is supported, in part, by Semiconductor Research Corporation (SRC) and Intel. Fabrication was carried out at the MIT Microsystems Technology Laboratories and MIT.nano facilities. 


PKG Center and the MIT Club of Princeton collaborate on food insecurity hackathon

The PKG Center is commemorating 25 years of the IDEAS Social Innovation Challenge with regional student-alumni hackathons for social impact.


On Nov. 8, the MIT Priscilla King Gray Public Service Center (MIT PKG Center) collaborated with the MIT Club of Princeton, New Jersey, and the Trenton Area Soup Kitchen (TASK) to prototype tech-driven interventions addressing the growing challenge of food insecurity in the Trenton, New Jersey, region.

Twelve undergraduates traveled to Trenton for a one-day social impact hackathon, working in teams with alumni active in the MIT Club of Princeton to address technical challenges posed by TASK. These included predicting the number of daily meals based on historical data for an organization serving over 12,000 meals each week, and gathering real-time feedback from hundreds of patrons with limited access to technology. 

The day culminated in a pitch session judged by MIT alumni and TASK leadership. The winning solution, developed by a cross-generational team of MIT alumni and students, addressed one of TASK’s most pressing challenges with a blend of technical ingenuity and human-centered design. Drawing on TASK datasets and external data such as weather and holidays, the team proposed a predictive dashboard that impressed judges with its practical utility, enabling the kitchen to reduce waste and distribute the appropriate number of meals to varied locations. TASK also appreciated several elements of solutions proposed to gather real-time feedback from patrons, and plans to experiment with them. 
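The winning team's actual model isn't described in detail, but its core idea — forecasting daily demand from historical counts plus external signals like weather and holidays — can be sketched in a few lines. Everything below (the field names, the sample records, and the adjustment factors) is an invented illustration, not TASK data or the team's real dashboard.

```python
from statistics import mean

# Hypothetical historical records: (weekday, was_raining, was_holiday, meals_served).
# Values are invented for illustration only.
history = [
    ("Mon", False, False, 1650),
    ("Mon", True,  False, 1800),
    ("Tue", False, False, 1500),
    ("Tue", True,  False, 1620),
]

def predict_meals(weekday, raining, holiday, records):
    """Baseline = mean of past counts for that weekday, then scaled by
    crude rain/holiday multipliers (assumed values, for illustration)."""
    same_day = [m for d, r, h, m in records if d == weekday]
    baseline = mean(same_day) if same_day else mean(m for *_, m in records)
    if raining:
        baseline *= 1.08   # assumption: bad weather raises demand slightly
    if holiday:
        baseline *= 0.90   # assumption: some patrons eat elsewhere on holidays
    return round(baseline)

print(predict_meals("Mon", True, False, history))   # rainy Monday forecast
```

A production version would replace the hand-set multipliers with coefficients fitted to real data, but even a weekday-average baseline like this can help a kitchen match meal counts to expected demand.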

“The last few weeks have shown how quickly the need for food can escalate in a place like Trenton, where so many people are living below or close to the federal poverty line,” says TASK CEO Amy Flynn. “The issues we are facing are complex and unprecedented, and the hackathon was an opportunity to think about our challenges, and their solutions, in modern and innovative ways. TASK is very excited to be partnering with MIT, the PKG Center for Social Impact, and the local MIT Club of Princeton for this event, particularly at this critical time.”

Students will implement the winning intervention through the PKG Center’s Social Impact Internship Program during MIT’s Independent Activities Period (IAP) in January 2026. Alumni from the MIT Club of Princeton will also serve as mentors to students during their internship. 

Alumni connections

The PKG Center recently completed a new strategic plan, and heard through the process that alumni and students passionate about making a positive impact want more opportunities to interact with and learn from each other.

“A hackathon seemed like an ideal way to connect students and alumni, generating mentoring relationships while making a tangible impact,” says Alison Badgett, associate dean and director of the PKG Center. “We’re grateful to the MIT Club of Princeton and the Trenton Area Soup Kitchen for enabling us to pilot what we hope will be a regular event.”

The idea for a regional hackathon came from the Friends of the PKG Center, the center’s alumni advisory board, which grew 25 percent this year with the addition of several young alumni. Princeton-based alumni Eberhard Wunderlich SM ’75, PhD ’78 and Shahla Wunderlich PhD ’78 offered to help make the idea a reality by connecting PKG with local partners. 

"We have been longtime friends of the PKG Center and have observed over the years that MIT students are uniquely positioned to make a real impact. We were eager to connect the PKG Center with the MIT Club of Princeton and TASK because we knew this collaboration would be meaningful not only for students, alumni, and families, but also for many people in need within our community," said the Wunderlichs. “It was a wonderful experience working with such talented students. We were happy to participate and look forward to the project enhancing the operation of TASK, which provides meals and develops skills for independence for those in need in Mercer County, New Jersey.”

A legacy of innovation and impact

The hackathon was facilitated by Lauren Tyger, the PKG Center’s assistant dean for social innovation, who leads a growing suite of social innovation and entrepreneurship programming for the PKG Center. Tyger recruited the 12 undergraduate participants from PKG’s Social Innovation Exploration first-year pre-orientation program (FPOP), an intensive five-day hackathon exploring food insecurity through the lens of sustainability at MIT and in Cambridge, Massachusetts. 

“For students, the regional alumni-student hackathon was an opportunity to implement what they learned through PKG’s FPOP to a real-world challenge with TASK,” says Tyger. “We hope students will not only be inspired to implement their winning interventions through an IAP internship, but also to explore social enterprise solutions to food insecurity through our IDEAS Social Innovation Incubator, now in its 25th year.”

With the success of this event, the PKG Center is exploring opportunities to host more alumni-student hackathons with regional MIT clubs, as a way to celebrate the 25th anniversary of the IDEAS Social Innovation Challenge, which has invested $1.3 million in nearly 300 social enterprises since its inception in 2001. 

“Getting to work with TASK was amazing because it allowed me to put the skills I learned in PKG’s SIE FPOP to a real-world application that could help people,” says Vivian Dinh, a student who participated in the hackathon. “It was a great feeling to put together things that we learned in SIE like ideation strategies, interviewing skills, and prototyping into a product, and then see that TASK truly believed in our ideas. Overall, it was a very empowering experience, knowing that my skills and ideas could help a community.”


MIT study shows how vision can be rebooted in adults with amblyopia

Temporarily anesthetizing the retina reverts the activity of the visual system to that observed in early development and enables growth of responses to the amblyopic (“lazy”) eye.


In the vision disorder amblyopia (commonly known as “lazy eye”), impaired vision in one eye during development causes neural connections in the brain’s visual system to shift toward supporting the other eye, leaving the amblyopic eye less capable even after the original impairment is corrected. Current interventions are only effective during infancy and early childhood, while the neural connections are still being formed. 

Now a study in mice by neuroscientists in The Picower Institute for Learning and Memory at MIT shows that if the retina of the amblyopic eye is temporarily and reversibly anesthetized just for a couple of days, the brain’s visual response to the eye can be restored, even in adulthood.

The open-access findings, published Nov. 25 in Cell Reports, may improve the clinical potential of the idea of temporarily anesthetizing a retina to restore the strength of the amblyopic eye’s neural connections. 

In 2021, the lab of Picower Professor Mark Bear and collaborators showed that anesthetizing the non-amblyopic eye could improve vision in the amblyopic one — an approach analogous in that way to the treatment used in childhood of patching the unimpaired eye. Those 2021 findings have now been replicated in adults of multiple species. But the new evidence on how inactivation works suggests that the proposed treatment also could be effective when applied directly to the amblyopic eye, Bear says, though a key next step will be to again show that it works in additional species and, ultimately, people.

“If it does, it’s a pretty substantial step forward, because it would be reassuring to know that vision in the good eye would not have to be interrupted by treatment,” says Bear, a faculty member in MIT’s Department of Brain and Cognitive Sciences. “The amblyopic eye, which is not doing much, could be inactivated and ‘brought back to life’ instead. Still, I think that especially with any invasive treatment, it’s extremely important to confirm the results in higher species with visual systems closer to our own.”

Madison Echavarri-Leet PhD ’25, whose doctoral thesis included this research, is the lead author of the study, which also demonstrates the underlying process in the brain that makes the potential treatment work.

A beneficial burst

Bear’s lab has been studying the science underlying amblyopia for decades, for instance by working to understand the molecular mechanisms that enable neural circuits to change their connections in response to visual experience or deprivation. The research has produced ideas about how to address amblyopia in adulthood. In a 2016 study with collaborators at Dalhousie University, they showed that temporarily anesthetizing both retinas could restore vision loss in amblyopia. Then, five years later, they published the study showing that anesthetizing just the non-amblyopic eye produced visual recovery for the amblyopic eye.

Throughout that time, the lab weighed multiple hypotheses to explain how retinal inactivation works its magic. Lingering in the lab’s archive of results, Bear says, was an unexplored finding in the lateral geniculate nucleus (LGN), the brain region that relays information from the eyes to the visual cortex, where vision is processed: back in 2008, they had found that blocking inputs from a retina to neurons in the LGN caused those neurons to fire synchronous “bursts” of electrical signals to downstream neurons in the visual cortex. Similar patterns of activity occur in the visual system before birth and guide early synaptic development.

The new study tested whether those bursts might have a role in the potential amblyopia treatments the lab was reporting. To get started, Leet and Bear’s team used a single injection of tetrodotoxin (TTX) to anesthetize retinas in the lab animals. They found that the bursting occurred not only in LGN neurons that received input from the anesthetized eye, but also in LGN neurons that received input from the unaffected eye.

From there, they showed that the bursting response depended on a particular “T-type” calcium channel in the LGN neurons. This was important because knowing the channel gave the scientists a way to turn the bursting off, which in turn let them test whether doing so prevented TTX from having a therapeutic effect in mice with amblyopia.

Sure enough, when the researchers genetically knocked out the channels and disrupted the bursting, they found that anesthetizing the non-amblyopic eye could no longer help amblyopic mice. That showed the bursting is necessary for the treatment to work.

Aiding amblyopia

Given their finding that bursting occurs when either retina is anesthetized, the scientists hypothesized it might be enough to just do it in the amblyopic eye. To test this, they ran an experiment in which some mice modeling amblyopia received TTX in their amblyopic eye and some did not. The injection took the retina offline for two days. After a week, the scientists then measured activity in neurons in the visual cortex to calculate a ratio of input from each eye. They found that the ratio was much more even in mice that received the treatment versus those left untreated, indicating that after the amblyopic eye was anesthetized, its input in the brain rose to be at parity with input from the non-amblyopic one.
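The article doesn't specify the exact metric the team used for this per-neuron ratio, but a standard way to express eye-specific drive in visual cortex is an ocular dominance index. The formula and the response values below are illustrative assumptions, not numbers from the paper.

```python
# Illustrative sketch: an ocular dominance index (ODI) summarizes, per
# neuron, how strongly each eye drives the response. The metric and the
# example response magnitudes here are assumptions for illustration.
def odi(resp_nonamblyopic, resp_amblyopic):
    """+1 = driven only by the non-amblyopic eye,
       -1 = driven only by the amblyopic eye,
        0 = balanced input from both eyes."""
    total = resp_nonamblyopic + resp_amblyopic
    if total == 0:
        return 0.0
    return (resp_nonamblyopic - resp_amblyopic) / total

# Untreated amblyopic mouse: cortex dominated by the non-amblyopic eye.
print(odi(8.0, 2.0))
# After anesthetizing the amblyopic eye: inputs closer to parity.
print(odi(5.5, 4.5))
```

Averaged over a population of neurons, a shift of this index toward zero is exactly the "much more even" ratio the treated mice showed.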

Further testing is needed, Bear notes, but the team wrote in the study that the results were encouraging.

“We are cautiously optimistic that these findings may lead to a new treatment approach for human amblyopia, particularly given the discovery that silencing the amblyopic eye is effective,” the scientists wrote.

In addition to Leet and Bear, the paper’s authors are Tushar Chauhan, Teresa Cramer, and Ming-fai Fong.

The National Institutes of Health, the Swiss National Science Foundation, the Severin Hacker Vision Research Fund, and the Freedom Together Foundation supported the study.


Vine-inspired robotic gripper gently lifts heavy and fragile objects

The new design could be adapted to assist the elderly, sort warehouse products, or unload heavy cargo.


In the horticultural world, some vines are especially grabby. As they grow, the woody tendrils can wrap around obstacles with enough force to pull down entire fences and trees.

Inspired by vines’ twisty tenacity, engineers at MIT and Stanford University have developed a robotic gripper that can snake around and lift a variety of objects, including a glass vase and a watermelon, offering a gentler approach compared to conventional gripper designs. A larger version of the robo-tendrils can also safely lift a human out of bed.

The new bot consists of a pressurized box, positioned near the target object, from which long, vine-like tubes inflate and grow, like socks being turned inside out. As they extend, the vines twist and coil around the object before continuing back toward the box, where they are automatically clamped in place and mechanically wound back up to gently lift the object in a soft, sling-like grasp.

The researchers demonstrated that the vine robot can safely and stably lift a variety of heavy and fragile objects. The robot can also squeeze through tight quarters and push through clutter to reach and grasp a desired object.

The team envisions that this type of robot gripper could be used in a wide range of scenarios, from agricultural harvesting to loading and unloading heavy cargo. In the near term, the group is exploring applications in eldercare settings, where soft inflatable robotic vines could help to gently lift a person out of bed.

“Transferring a person out of bed is one of the most physically strenuous tasks that a caregiver carries out,” says Kentaro Barhydt, a PhD candidate in MIT’s Department of Mechanical Engineering. “This kind of robot can help relieve the caretaker, and can be gentler and more comfortable for the patient.”

Barhydt, along with his co-first author from Stanford, O. Godson Osele, and their colleagues, present the new robotic design today in the journal Science Advances. The study’s co-authors are Harry Asada, the Ford Professor of Engineering at MIT, and Allison Okamura, the Richard W. Weiland Professor of Engineering at Stanford University, along with Sreela Kodali and Cosmia du Pasquier at Stanford University, and former MIT graduate student Chase Hartquist, now at the University of Florida, Gainesville.

Open and closed

Three photos with overlaid arrows show the direction of the vines as they pick up a glass vase.


The team’s Stanford collaborators, led by Okamura, pioneered the development of soft, vine-inspired robots that grow outward from their tips. These designs are largely built from thin yet sturdy pneumatic tubes that grow and inflate with controlled air pressure. As they grow, the tubes can twist, bend, and snake their way through the environment, and squeeze through tight and cluttered spaces.

Researchers have mostly explored vine robots for use in safety inspections and search and rescue operations. But at MIT, Barhydt and Asada, whose group has developed robotic aides for the elderly, wondered whether such vine-inspired robots could address certain challenges in eldercare — specifically, the challenge of safely lifting a person out of bed. Often in nursing and rehabilitation settings, this transfer process is done with a patient lift, operated by a caretaker who must first physically move a patient onto their side, then back onto a hammock-like sheet. The caretaker straps the sheet around the patient and hooks it onto the mechanical lift, which then can gently hoist the patient out of bed, similar to suspending a hammock or sling.

The MIT and Stanford team imagined that as an alternative, a vine-like robot could gently snake under and around a patient to create its own sort of sling, without a caretaker having to physically maneuver the patient. But in order to lift the sling, the researchers realized they would have to add an element that was missing in existing vine robot designs: Essentially, they would have to close the loop.

Most vine-inspired robots are designed as “open-loop” systems, meaning they act as open-ended strings that can extend and bend in different configurations, but they are not designed to secure themselves to anything to form a closed loop. If a vine robot could be made to transform from an open loop to a closed loop, Barhydt surmised that it could make itself into a sling around the object and pull itself up, along with whatever, or whomever, it might hold.

For their new study, Barhydt, Osele, and their colleagues outline the design for a new vine-inspired robotic gripper that combines both open- and closed-loop actions. In an open-loop configuration, a robotic vine can grow and twist around an object to create a firm grasp. It can even burrow under a human lying on a bed. Once a grasp is made, the vine can continue to grow back toward and attach to its source, creating a closed loop that can then be retracted to retrieve the object.

“People might assume that in order to grab something, you just reach out and grab it,” Barhydt says. “But there are different stages, such as positioning and holding. By transforming between open and closed loops, we can achieve new levels of performance by leveraging the advantages of both forms for their respective stages.”

Gentle suspension

As a demonstration of their new open- and closed-loop concept, the team built a large-scale robotic system designed to safely lift a person up from a bed. The system comprises a set of pressurized boxes attached on either end of an overhead bar. An air pump inside the boxes slowly inflates and unfurls thin vine-like tubes that extend down toward the head and foot of a bed. The air pressure can be controlled to gently work the tubes under and around a person, before stretching back up to their respective boxes. The vines then thread through a clamping mechanism that secures the vines to each box. A winch winds the vines back up toward the boxes, gently lifting the person up in the process.

“Heavy but fragile objects, such as a human body, are difficult to grasp with the robotic hands that are available today,” Asada says. “We have developed a vine-like, growing robot gripper that can wrap around an object and suspend it gently and securely.”

"There’s an entire design space we hope this work inspires our colleagues to continue to explore,” says co-lead author Osele. “I especially look forward to the implications for patient transfer applications in health care.”

“I am very excited about future work to use robots like these for physically assisting people with mobility challenges,” adds co-author Okamura. “Soft robots can be relatively safe, low-cost, and optimally designed for specific human needs, in contrast to other approaches like humanoid robots.”

While the team’s design was motivated by challenges in eldercare, the researchers realized the new design could also be adapted to perform other grasping tasks. In addition to their large-scale system, they have built a smaller version that can attach to a commercial robotic arm. With this version, the team has shown that the vine robot can grasp and lift a variety of heavy and fragile objects, including a watermelon, a glass vase, a kettle bell, a stack of metal rods, and a playground ball. The vines can also snake through a cluttered bin to pull out a desired object.

“We think this kind of robot design can be adapted to many applications,” Barhydt says. “We are also thinking about applying this to heavy industry, and things like automating the operation of cranes at ports and warehouses.”

This work was supported, in part, by the National Science Foundation and the Ford Foundation.


When it comes to language, context matters

MIT researchers identified three cognitive skills that we use to infer what someone really means.


In everyday conversation, it’s critical to understand not just the words that are spoken, but the context in which they are said. If it’s pouring rain and someone remarks on the “lovely weather,” you won’t understand their meaning unless you realize that they’re being sarcastic.

Making inferences about what someone really means when it doesn’t match the literal meaning of their words is a skill known as pragmatic language ability. This includes not only interpreting sarcasm but also understanding metaphors and white lies, among many other conversational subtleties.

“Pragmatics is trying to reason about why somebody might say something, and what is the message they’re trying to convey given that they put it in this particular way,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

New research from Fedorenko and her colleagues has revealed that these abilities can be grouped together based on what types of inferences they require. In a study of 800 people, the researchers identified three clusters of pragmatic skills that are based on the same kinds of inferences and may have similar underlying neural processes.

One of these clusters includes inferences that are based on our knowledge of social conventions and rules. Another depends on knowledge of how the physical world works, while the last requires the ability to interpret differences in tone, which can indicate emphasis or emotion.

Fedorenko and Edward Gibson, an MIT professor of brain and cognitive sciences, are the senior authors of the study, which appears today in the Proceedings of the National Academy of Sciences. The paper’s lead authors are Sammy Floyd, a former MIT postdoc who is now an assistant professor of psychology at Sarah Lawrence College, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor of cognitive science at Carleton University.

The importance of context

Much past research on how people understand language has focused on processing the literal meanings of words and how they fit together. To really understand what someone is saying, however, we need to interpret those meanings based on context.

“Language is about getting meanings across, and that often requires taking into account many different kinds of information — such as the social context, the visual context, or the present topic of the conversation,” Fedorenko says.

As one example, the phrase “people are leaving” can mean different things depending on the context, Gibson points out. If it’s late at night and someone asks you how a party is going, you may say “people are leaving,” to convey that the party is ending and everyone’s going home.

“However, if it’s early, and I say ‘people are leaving,’ then the implication is that the party isn’t very good,” Gibson says. “When you say a sentence, there’s a literal meaning to it, but how you interpret that literal meaning depends on the context.”

About 10 years ago, with support from the Simons Center for the Social Brain at MIT, Fedorenko and Gibson decided to explore whether it might be possible to precisely distinguish the types of processing that go into pragmatic language skills.

One way that neuroscientists can approach a question like this is to use functional magnetic resonance imaging (fMRI) to scan the brains of participants as they perform different tasks. This allows them to link brain activity in different locations to different functions. However, the tasks that the researchers designed for this study didn’t easily lend themselves to being performed in a scanner, so they took an alternative approach.

This approach, known as “individual differences,” involves studying a large number of people as they perform a variety of tasks. This technique allows researchers to determine whether the same underlying brain processes may be responsible for performance on different tasks.

To do this, the researchers evaluate whether each participant tends to perform similarly on certain groups of tasks. For example, some people might perform well on tasks that require an understanding of social conventions, such as interpreting indirect requests and irony. The same people might do only so-so on tasks that require understanding how the physical world works, and poorly on tasks that require distinguishing meanings based on changes in intonation — the melody of speech. This would suggest that separate brain processes are being recruited for each set of tasks.
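As a rough sketch of that logic, one can correlate participants' scores across tasks and group together tasks whose scores rise and fall together across people. The actual study used more sophisticated statistics, and the scores below are invented for illustration.

```python
from statistics import mean, stdev

# Invented scores: each list holds one score per participant, aligned by
# position. Task names and values are illustrative assumptions.
scores = {
    "irony":      [9, 7, 4, 8, 3],   # social-convention task
    "indirect":   [8, 7, 5, 9, 2],   # social-convention task
    "intonation": [3, 9, 6, 2, 8],   # prosody task
}

def pearson(xs, ys):
    """Sample Pearson correlation between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

def correlated_pairs(data, threshold=0.7):
    """Return task pairs whose scores covary strongly across people."""
    names = sorted(data)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if pearson(data[a], data[b]) >= threshold]

print(correlated_pairs(scores))
```

Here the two social-convention tasks correlate strongly with each other but not with the intonation task, which is the pattern that suggests they share an underlying process while intonation relies on a different one.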

The first phase of the study was led by Jouravlev, who assembled existing tasks that require pragmatic skills and created many more, for a total of 20. These included tasks that require people to understand humor and sarcasm, as well as tasks where changes in intonation can affect the meaning of a sentence. For example, someone who says “I wanted blue and black socks,” with emphasis on the word “black,” is implying that the black socks were forgotten.

“People really find ways to communicate creatively and indirectly and non-literally, and this battery of tasks captures that,” Floyd says.

Components of pragmatic ability

The researchers recruited study participants from an online crowdsourcing platform to perform the tasks, which took about eight hours to complete. From this first set of 400 participants, the researchers found that the tasks formed three clusters, related to social context, general knowledge of the world, and intonation. To test the robustness of the findings, the researchers continued the study with another set of 400 participants, with this second half run by Floyd after Jouravlev had left MIT.

With the second set of participants, the researchers found that tasks clustered into the same three groups. They also confirmed that differences in general intelligence, or in auditory processing ability (which is important for the processing of intonation), did not affect the outcomes that they observed.

In future work, the researchers hope to use brain imaging to explore whether the pragmatic components they identified are correlated with activity in different brain regions. Previous work has found that brain imaging often mirrors the distinctions identified in individual difference studies, but can also help link the relevant abilities to specific neural systems, such as the core language system or the theory of mind system.

This set of tests could also be used to study people with autism, who sometimes have difficulty understanding certain social cues. Such studies could determine more precisely the nature and extent of these difficulties. Another possibility could be studying people who were raised in different cultures, which may have different norms around speaking directly or indirectly.

“In Russian, which happens to be my native language, people are more direct. So perhaps there might be some differences in how native speakers of Russian process indirect requests compared to speakers of English,” Jouravlev says.

The research was funded by the Simons Center for the Social Brain at MIT, the National Institutes of Health, and the National Science Foundation. 


MIT takes manufacturing education across the country

The new TechAMP program teaches production principles to workers, helping them advance their careers and identify savings at their firms.


MIT has long bolstered U.S. manufacturing by developing key innovations and production technologies, and training entrepreneurs. This fall, the Institute introduced a new tool for U.S. manufacturing: an education program for workers, held at collaborating institutions, which teaches core principles of production, helping employees and firms alike.

The new effort, the Technologist Advanced Manufacturing Program, or TechAMP, developed with U.S. Department of Defense funding, features a mix of in-person lab instruction at participating institutions, online lectures by MIT faculty and staff, and interactive simulations. There are also capstone projects, in which employees study manufacturing issues with the aim of saving their firms money.

Ultimately, TechAMP is a 12-month certificate program aimed at making the concept of the accredited “technologist” a vital part of the manufacturing enterprise. That could help workers advance in their careers. And it could help firms develop a more skilled workforce.

“We think there’s a gap between the traditional worker categories of engineer and technician, and this technologist training fills it,” says John Liu, a principal research scientist in MIT’s Department of Mechanical Engineering and co-principal investigator of the TechAMP program. “We’re very interested in creating new career pathways and allowing the manufacturing workforce to have a different kind of perspective. We want to formalize the path to becoming a technologist.”

Liu, who is also the principal investigator of the MIT Learning Engineering and Practice Group (LEAP), adds that the MIT program “is a pathway to leadership. No longer should a technician just think about one piece of equipment. They can think about the whole system, the whole operation, and help with decision-making.”

TechAMP launched this fall, in collaboration with multiple institutions, including the University of Massachusetts at Lowell, Cape Cod Community College, Ohio State University, the Community College of Rhode Island, the Connecticut Center for Advanced Technology, and the Berkshire Innovation Center in Pittsfield, Massachusetts. More than 70 people are in the initial cohort of students.

“MIT has embraced the idea that we’re reaching this new type of learner,” says Julie Diop, executive director of MIT’s Initiative for New Manufacturing (INM). TechAMP forms a key part of the education arm of that initiative, a campus-wide effort to reinvigorate U.S. manufacturing that was announced in May 2025. INM also collaborates with several industry firms embracing innovative approaches to manufacturing.

“Through TechAMP and other programs, we’re excited to reach beyond MIT’s traditional realm of manufacturing education and collaborate with companies of all sizes alongside our community college partners,” says John Hart, the Class of 1922 Professor of Mechanical Engineering, head of the Department of Mechanical Engineering at MIT, and faculty co-director of INM. “We hope that the program equips manufacturing technologists to be innovators and problem-solvers in their organizations, and to effectively deploy new technologies that can improve manufacturing productivity.”

INM is one of the key Institute-wide initiatives prioritized by MIT President Sally A. Kornbluth.

“Helping America build a future of new manufacturing is a perfect job for MIT,” Kornbluth said at the INM launch event in May. She continued: “I’m convinced that there is no more important work we can do to meet the moment and serve the nation now.”

A “confidence booster” for workers

TechAMP has been supported by two Department of Defense grants enabling the program’s development. MIT scholars collaborated with colleagues at Clemson University and Ohio State University to develop a number of the interactive simulations used in the course.

The coursework is built around a “hub-and-spoke” model that includes segments on core principles of manufacturing — that’s the hub — as well as six areas, or spokes, where companies have advised MIT that workers need more training.

The four parts of the hub comprise manufacturing process controls and their statistical analysis; understanding manufacturing systems, including workflow and efficiency; leadership skills; and operations management, from factory analysis to supply chain issues. These are also the core topics covered in MIT’s online MicroMasters program in manufacturing.

The six spokes may change or expand over time but currently consist of mechatronics, automation programming, robotics, machining, digital manufacturing, and design and manufacturing fundamentals.

Having the TechAMP curriculum revolve around concepts common to all manufacturing industries helps technologists-in-training better understand how their companies are trying to function and how their own work relates to those principles.

“The hub concepts are what defines manufacturing,” Liu says. “We need to teach this undervalued set of principles to the workforce, including people without university degrees. If we do that, it means they have a timeless set of ideas. We can adapt ourselves to add industries like biomanufacturing, but we’re starting with the fundamentals.”

Students say they are enjoying the program.

“It’s been a confidence booster,” says Nicole Swan, an employee at the manufacturing firm Proterial, who is taking the TechAMP class at the Community College of Rhode Island campus in Westerly, Rhode Island. “This has really shown me so many different opportunities [for] what I could do in the future, and different avenues that are available.”

Direct value capture possible for firms

The TechAMP certificate program also involves a capstone project, in which students analyze issues or challenges within their own firms. If those projects lead to savings or add value, that could make it well worthwhile for manufacturing companies to pay for their students to attend the TechAMP program, which requires about 10 to 14 hours of work per week over the year.

“That could be a form of impact — direct value capture for the firm,” Diop says.

Some firms are already pleased with the development of TechAMP.

“There are so many manufacturing jobs that don’t need a four-year degree, but do require a very high skill level and good communications skills,” says Michael Trotta, CEO of Crystal Engineering, a versatile, 45-employee manufacturer in Newburyport, Massachusetts, whose products range from medical devices to aerospace and defense items. “I see TechAMP as a next logical step in developing a sustainable workforce.”

Trotta and three of his employees worked with MIT on the TechAMP project last spring, studying the curriculum material and providing feedback about it to the program leaders, in an effort to make the coursework as useful as possible.

"What we want workers to do is progress to a point where they become that technologist making not $20 an hour, but $40 or $50 an hour, because they have that skill set to run a lot more than just one piece of the process,” Trotta explains. “They’re able to communicate effectively with the engineers, with operations, to identify strengths and weaknesses, to help the firm drive success."

And while the position of “technologist” may not yet be in every manufacturer’s vocabulary, the MIT program leaders think it makes eminent sense as a way of further equipping workers who are currently regarded as technicians or machinists.

By analogy, Diop observes, “The role of nurse practitioner bridges the gap between nurse and doctor, and has changed how medicine is delivered.” Manufacturing, she adds, “has had a reputation for dead-end jobs, but if MIT can help break that image by providing a real pathway, I think that would be meaningful, especially for those without university degrees.”

Intriguingly — as shown by research from Ben Armstrong, executive director and a research scientist at MIT’s Industrial Performance Center — about 10 to 15 percent of titled engineers in manufacturing industries do not have engineering degrees, either. For that portion of the workforce as well, more formal training and credentials may prove useful over time.

TechAMP is new, evolving — and likely to be expanding soon. Diop and Liu are in talks with interested education networks in multiple manufacturing-heavy states, to see if they would like to partner with MIT. There is also new interest from more manufacturers, including some of the partners in MIT’s Initiative for New Manufacturing. Given that the initiative just launched in May, TechAMP has hit the ground running.

“There’s been a lot of excitement so far, we think,” Liu says. “And it’s coming from organizations and people who are eager to learn more.”  


Jennifer Lewis ScD ’91: “Can we make tissues that are made from you, for you?”

In the 2025 Dresselhaus Lecture, the materials scientist describes her work 3D printing soft materials ranging from robots to human tissues.


“Can we make tissues that are made from you, for you?” asked Jennifer Lewis ScD ’91 at the 2025 Mildred S. Dresselhaus Lecture, organized by MIT.nano, on Nov. 3. “The grand challenge goal is to create these tissues for therapeutic use and, ultimately, at the whole organ scale.”

Lewis, the Hansjörg Wyss Professor of Biologically Inspired Engineering at Harvard University, is pursuing that challenge through advances in 3D printing. In her talk presented to a combined in-person and virtual audience of over 500 attendees, Lewis shared work from her lab that focuses on enhanced function in 3D printed components for use in soft electronics, robotics, and life sciences.

“How you make a material affects its structure, and it affects its properties,” said Lewis. “This perspective was a light bulb moment for me, to think about 3D printing beyond just prototyping and making shapes, but really being able to control local composition, structure, and properties across multiple scales.”

A trained materials scientist, Lewis reflected on learning to speak the language of biologists when she joined Harvard to start her own lab focused on bioprinting and biological engineering. How does one compare particles and polymers to stem cells and extracellular matrices? A key commonality, she explained, is the need for a material that can be embedded and then erased, leaving behind open channels. To meet this need, Lewis’ lab developed new 3D printing methods, sophisticated printhead designs, and viscoelastic inks — meaning the ink can go back and forth between liquid and solid form.

Displaying a video of a moving robot octopus named Octobot, Lewis showed how her group engineered two sacrificial inks that change from fluid to solid upon either warming or cooling. The concept draws inspiration from nature — plants that dynamically change in response to touch, light, heat, and hydration. For Octobot, Lewis’ team used sacrificial ink and an embedded printing process that enables free-form printing in three dimensions, rather than layer-by-layer, to create a fully soft autonomous robot. An oscillating circuit in the center guides the fuel (hydrogen peroxide), making the arms move up and down as they inflate and deflate.

From robots to whole organ engineering

“How can we leverage shape morphing in tissue engineering?” asked Lewis. “Just like our blood continuously flows through our body, we could have continuous supply of healing.”

Lewis’ lab is now working on building human tissues, primarily cardiac, kidney, and cerebral tissue, using patient-specific cells. The motivation, Lewis explained, is not only the need for human organs for people with diseases, but the fact that receiving a donated organ means taking immunosuppressants the rest of your life. If, instead, the tissue could be made from your own cells, it would be a stronger match to your own body.

“Just like we did to engineer viscoelastic matrices for embedded printing of functional and structural materials,” said Lewis, “we can take stem cells and then use our sacrificial writing method to write in perfusable vasculature.” The process uses a technique Lewis calls SWIFT — sacrificial writing into functional tissue. Sharing lab results, Lewis showed how the stem cells, differentiated into cardiac building blocks, are initially beating individually, but after being packed into a tighter space that will support SWIFT, these building blocks fuse together and become one tissue that beats synchronously.

Then, her team uses a gelatin ink that solidifies or liquefies with temperature changes to print the complex design of human vessels, flushing away the ink to leave behind open lumens. The channel remains open, mimicking a blood vessel network that could have fluid actively, continuously flowing through it. “Where we’re going is to expand this not only to different tissue types, but also building in mechanisms by which we can build multi-scale vasculature,” said Lewis.

Honoring Mildred S. Dresselhaus

In closing, Lewis reflected on Dresselhaus’ positive impact on her own career. “I want to dedicate this [talk] to Millie Dresselhaus,” said Lewis. She pointed to a quote by Millie: “The best thing about having a lady professor on campus is that it tells women students that they can do it, too.” Lewis, who arrived at MIT as a materials science and engineering graduate student in the late 1980s, a time when there were very few women with engineering doctorates, noted that “just seeing someone of her stature was really an inspiration for me. I thank her very much for all that she’s done, for her amazing inspiration both as a student, as a faculty member, and even now, today.”

After the lecture, Lewis was joined by Ritu Raman, the Eugene Bell Career Development Assistant Professor of Tissue Engineering in the MIT Department of Mechanical Engineering, for a question-and-answer session. Their discussion included ideas on 3D printing hardware and software, tissue repair and regeneration, and bioprinting in space. 

“Both Mildred Dresselhaus and Jennifer Lewis have made incredible contributions to science and served as inspiring role models to many in the MIT community and beyond, including myself,” said Raman. “In my own career as a tissue engineer, the tools and techniques developed by Professor Lewis and her team have critically informed and enabled the research my lab is pursuing.”

This was the seventh Dresselhaus Lecture, named in honor of the late MIT Institute Professor Mildred Dresselhaus, known to many as the “Queen of Carbon Science.” The annual event honors a significant figure in science and engineering from anywhere in the world whose leadership and impact echo Dresselhaus’ life, accomplishments, and values.

“Professor Lewis exemplifies, in so many ways, the spirit of Millie Dresselhaus,” said MIT.nano Director Vladimir Bulović. “Millie’s groundbreaking work, indeed, is well known; and the groundbreaking work of Professor Lewis in 3D printing and bio-inspired materials continues that legacy.”


MIT’s Science Policy Initiative holds 15th annual Executive Visit Days

Students and postdocs traveled to Washington to learn about federal science and technology policymaking.


"To really understand science policy, you have to step outside the lab and see it in action," says Jack Fletcher, an MIT PhD student in nuclear science and engineering and chair of the 15th annual Executive Visit Days (ExVD). 

Inspired by this mindset, ExVD — jointly organized by the MIT Science Policy Initiative (SPI) and the MIT Washington Office — convened a delegation of 21 MIT affiliates, including undergraduates, graduate students, and postdocs, in Washington Oct. 27-28. 

Although the government shutdown prevented the delegation’s usual visits to executive agencies, participants met with experts across the federal science and technology policy ecosystem. These discussions built connections in the nation’s capital, illustrated how evidence interacts with political realities, and demonstrated how scientists, engineers, and business leaders can pursue impactful careers in public service.

A recurring theme across meetings was that political realities and institutional constraints, not just evidence and analysis, shape policy outcomes. As Mykyta Kliapets, a PhD student at KU Leuven (Belgium) and a visiting student at the MIT Kavli Institute for Astrophysics and Space Research, reflected, “It was really helpful to hear how rarely straightforward policy environments are — sometimes, a solution that makes the most sense technically is not always politically feasible.” 

The group also heard how political forces directly impact science, from disruptions during government shutdowns to recent reductions in federal research support. Speakers underscored that effective science policy requires combined fluency in evidence, systems, and incentives.

For the first time, ExVD visited the Delegation of the European Union to the United States to meet with Francesco Maria Graziani, climate and energy counselor. He described E.U.-U.S. cooperation on energy and climate as “active and vital, but complex,” noting that the E.U. can struggle to navigate a diverse, multilevel, and variable U.S. policy landscape. “The E.U. and the U.S. share many goals, but we often operate on different timelines and with different tools,” said Graziani. He identified nuclear power, geothermal energy, and supply chain security as areas of continued E.U. and U.S. collaboration. 

Graziani also discussed ongoing collaborations like the Destination Earth project, which improves global climate models using U.S. state-level data. “As a European, hearing differences in how the U.S. navigates science policy gave me a new lens on how two advanced democracies balance innovation, regulation, and the urgency of scientific challenges,” said Sofia Karagianni, an MBA student at the MIT Sloan School of Management.

The ExVD delegation also met with three MIT alumni at the Science and Technology Policy Institute (STPI). A federally funded research and development center, STPI provides technical and analytical support on science and technology issues to inform policy decisions by the White House Office of Science and Technology Policy (OSTP) and other federal sponsors. Recently, STPI’s research reports have focused on topics including quantum computing, biotechnology, and artificial intelligence. The discussion at STPI emphasized the importance of conducting objective analyses that have relevance for policymakers. Director Asha Balakrishnan explained how it is often useful to provide “options” in their reports, rather than “recommendations,” because policymakers benefit from understanding the advantages and disadvantages of potential policy actions.

Participants found the speakers’ reflections on career development and fellowships particularly valuable. Several speakers discussed their experiences with the AAAS Science and Technology Policy Fellowship, which places scientists and engineers in federal agencies and congressional offices for a year. 

“In speaking with former fellows, I learned just how transformative these fellowships can be for scientists seeking to apply their academic research backgrounds to a wide range of careers at the intersection of science and policy,” said Amanda Hornick, a recent doctoral graduate of the Harvard-MIT Program in Health Sciences and Technology. Eli Duggan, a graduate student in MIT's Technology and Policy Program, added that “seeing how the speakers’ work makes a real impact got me excited to apply my technical and policy background for the public good.”

The lessons from these conversations reflect the broader mission of the MIT Science Policy Initiative: to help the MIT community understand and engage with the policymaking process. SPI is a student- and postdoc-led organization dedicated to strengthening dialogue between MIT and the broader policy ecosystem. Each year, SPI organizes multiple trips to Washington, giving members the chance to meet directly with federal agencies and policymakers while exploring careers at the intersection of science, technology, and policy. These trips also spark connections and conversations that participants bring back to campus, enriching policy dialogue within the MIT community. 

SPI is grateful to the individuals and organizations who shared their time and insights at this year’s ExVD, giving participants a foundation to draw on as they explore career opportunities and the many ways technical expertise can shape public decision-making.


Resurrecting an MIT “learning by doing” tradition: NEET scholars install solar-powered charging station

The project was designed and built with novel “bio-composite” materials developed by the student team.


Students enrolled in MIT’s New Engineering Education Transformation (NEET) program recently collaborated across academic disciplines to design and construct a solar-powered charging station. Positioned in a quiet campus courtyard, the station provides the MIT community with climate-friendly power for phones, laptops, and tablets.

Its installation marked the “first time a cross-departmental team of undergraduates designed, created, and installed on campus a green technology artifact for the public good, as part of a class they took for credit,” says Amitava “Babi” Mitra, NEET founding executive director.

The project was very on-brand for the NEET program, which centers interdisciplinary, cross-departmental, and project-centric scholarship with experiential learning at its core. Launched in 2017 as an effort to reimagine undergraduate engineering education at MIT, NEET seeks to empower students to tackle complex societal challenges that straddle disciplines.

The solar-powered charging station project class is an integral part of NEET’s decarbonization-focused Climate and Sustainability Systems (CSS) “thread,” one of four pathways of study offered by the program. The class, 22.03/3.0061 (Introduction to Design Thinking and Rapid Prototyping), teaches the design and fabrication techniques used to create the station, such as laser cutting, 3D printing, computer-aided design (CAD), electronics prototyping, microcontroller programming, and composites manufacturing.

The project team included students majoring in chemical engineering, materials science and engineering, mechanical engineering, and nuclear science and engineering.

“What I really liked about this project was, at the beginning, it was really about ideation, about design, about brainstorming in ways that I haven’t seen before,” says NEET CSS student Aaron De Leon, a nuclear science and engineering major focused on clean energy development. 

During these brainstorming sessions, the team considered how their subjective design choices for the charging station would shape user experience, something De Leon, who enrolled in the class as a sophomore, says is often overlooked in engineering classes.

The team’s forest-inspired station design — complete with “tree trunks,” oyster mushroom-shaped desk space, and four solar panels curved to mimic the undulation of the forest canopy — was intended to evoke a sense of organic connectivity. The tree trunks were crafted from novel flax fiber-based composite layups the team developed through experiments designed to identify more sustainable alternatives to traditional composites.

The group also discussed how a dearth of device charging options made it difficult for students to work outside, according to NEET CSS student Celestina Pint, who enrolled in the class as a sophomore. The desk space was added to help MIT students work comfortably outdoors while also charging their devices with renewable energy.

Pint joined NEET because she wanted to “keep an open approach to climate and sustainability,” as opposed to relying on her materials science and engineering major alone, she says. “I like the interdisciplinary aspect.”

The project class presented abundant interdisciplinary learning opportunities that couldn’t be replicated in a purely theory-based curriculum, says Nathan Melenbrink, NEET lecturer, who teaches the project class and is the lead instructor for the NEET CSS thread.

For example, the team got a crash course in navigating real-world bureaucracy when they discovered that the installation of their charging station had to be approved by more than a dozen entities, including campus police, MIT’s insurance provider, and the campus facilities department.

The team also gained valuable experience with troubleshooting unanticipated design implementation challenges during the project’s fabrication phase.

“Adjustments had to be made,” Pint says. Once the station was installed, “it was interesting to see what was the same and what was different” from the team’s initial design.

This underscores a unique value of the project, according to NEET CSS student Tyler Ea, a fifth-year mechanical engineering major who joined the project team last year and is now a teaching assistant for the class.

Students “are able to take ownership of something physical, like a physical embodiment of their ideas, and something that they can point towards and say, ‘here’s something that I thought about, and this is how I went about building it, and then here’s the final result,’” he says.

While students only become eligible to join NEET in their second year, first-year students interested in the program were also able to learn from the solar-powered charging station project in the first-year discovery class SP.248 (The NEET Experience). After learning fundamental concepts in systems engineering, the class analyzed the station and suggested changes they thought would improve its design.

Melenbrink says student-built campus installations were once a hallmark of MIT’s academic culture, and he sees the NEET CSS solar-powered charging station project as an opportunity to help revive this tradition.

“What I hear from the old guard is that there was always somebody … lugging some giant, odd-looking prototype of something across campus,” Melenbrink says.

More collaborative, hands-on, student-led climate projects would also help the Institute meet its commitment to become a leading source of meaningful climate solutions, according to Elsa Olivetti, the Jerry McAfee (1940) Professor of Materials Science and Engineering and strategic advisor to the MIT Climate and Sustainability Consortium (MCSC).

“This local renewable energy project demonstrates that our campus community can learn through solution development,” she says. “Students don’t have to wait until they graduate or enter the job market to make a contribution.”

Students enrolled in this year’s Introduction to Design Thinking and Rapid Prototyping class will fabricate and install a new solar-powered charging station with a unique design. De Leon says he appreciates the latitude NEET students have to make the project their own.

“There was never the case of a professor saying, ‘We need to do it this way,’” he says. “I really liked that ability to learn as many things as you wanted to, and also have the autonomy to make your own design decisions along the way.”


Too sick to socialize: How the brain and immune system promote staying in bed

MIT researchers discover how an immune system molecule triggers neurons to shut down social behavior in mice modeling infection.


“I just can’t make it tonight. You have fun without me.” Across much of the animal kingdom, when infection strikes, social contact shuts down. A new study details how the immune and central nervous systems implement this sickness behavior.

It makes perfect sense that when we’re battling an infection, we lose our desire to be around others. That protects others from getting sick and lets us get much-needed rest. What hasn’t been as clear is how this behavior change happens.

In new research published Nov. 25 in Cell, scientists at MIT’s Picower Institute for Learning and Memory and collaborators used multiple methods to demonstrate causally that when the immune system cytokine interleukin-1 beta (IL-1β) reaches the IL-1 receptor 1 (IL-1R1) on neurons in a brain region called the dorsal raphe nucleus, it activates connections with the intermediate lateral septum to shut down social behavior.

“Our findings show that social isolation following immune challenge is self-imposed and driven by an active neural process, rather than a secondary consequence of physiological symptoms of sickness, such as lethargy,” says study co-senior author Gloria Choi, associate professor in MIT’s Department of Brain and Cognitive Sciences and a member of the Picower Institute.

Jun Huh, Harvard Medical School associate professor of immunology, is the paper’s co-senior author. The lead author is Liu Yang, a research scientist in Choi’s lab.

A molecule and its receptor

Choi and Huh’s long collaboration has identified other cytokines that affect social behavior by latching on to their receptors in the brain, so in this study their team hypothesized that the same kind of dynamic might cause social withdrawal during infection. But which cytokine? And what brain circuits might be affected?

To get started, Yang and her colleagues injected 21 different cytokines into the brains of mice, one by one, to see if any triggered social withdrawal the same way that giving mice LPS (a standard way of simulating infection) did. Only IL-1β injection fully recapitulated the same social withdrawal behavior as LPS. That said, IL-1β also made the mice more sluggish.

IL-1β affects cells when it hooks up with the IL-1R1, so the team next went looking across the brain for where the receptor is expressed. They identified several regions and examined individual neurons in each. The dorsal raphe nucleus (DRN) stood out among regions, both because it is known to modulate social behavior and because it is situated next to the cerebral aqueduct, which would give it plenty of exposure to incoming cytokines in cerebrospinal fluid. The experiments identified populations of DRN neurons that express IL-1R1, including many involved in making the crucial neuromodulatory chemical serotonin.

From there, Yang and the team demonstrated that IL-1β activates those neurons, and that activating the neurons promotes social withdrawal. Moreover, they showed that inhibiting that neural activity prevented social withdrawal in mice treated with IL-1β, and they showed that shutting down the IL-1R1 in the DRN neurons also prevented social withdrawal behavior after IL-1β injection or LPS exposure. Notably, these experiments did not change the lethargy that followed IL-1β or LPS, helping to demonstrate that social withdrawal and lethargy occur through different means.

“Our findings implicate IL-1β as a primary effector driving social withdrawal during systemic immune activation,” the researchers wrote in Cell.

Tracing the circuit

With the DRN identified as the site where neurons receiving IL-1β drove social withdrawal, the next question was which circuit carried that change in behavior. The team traced where the neurons project and found several regions with a known role in social behavior. Using optogenetics, a technology that engineers cells to become controllable with flashes of light, the scientists activated the DRN neurons’ connections with each downstream region in turn. Only activating the DRN’s connections with the intermediate lateral septum caused the social withdrawal behaviors seen with IL-1β injection or LPS exposure.

In a final test, they replicated their results by exposing some mice to salmonella.

“Collectively, these results reveal a role for IL-1R1-expressing DRN neurons in mediating social withdrawal in response to IL-1β during systemic immune challenge,” the researchers wrote.

Although the study revealed the cytokine, neurons, and circuit responsible for social withdrawal in mice in detail and with demonstrations of causality, the results still inspire new questions. One is whether IL-1R1 neurons affect other sickness behaviors. Another is whether serotonin has a role in social withdrawal or other sickness behaviors.

In addition to Yang, Choi, and Huh, the paper’s other authors are Matias Andina, Mario Witkowski, Hunter King, and Ian Wickersham.

Funding for the research came from the National Institute of Mental Health, the National Research Foundation of Korea, the Denis A. and Eugene W. Chinery Fund for Neurodevelopmental Research, the Jeongho Kim Neurodevelopmental Research Fund, Perry Ha, the Simons Center for the Social Brain, the Simons Foundation Autism Research Initiative, The Picower Institute for Learning and Memory, and The Freedom Together Foundation.


Pompeii offers insights into ancient Roman building technology

MIT researchers analyzed a recently discovered ancient construction site to shed new light on a material that has endured for thousands of years.


Concrete was the foundation of the ancient Roman empire. It enabled Rome’s storied architectural revolution as well as the construction of buildings, bridges, and aqueducts, many of which are still used some 2,000 years after their creation.

In 2023, MIT Associate Professor Admir Masic and his collaborators published a paper describing the manufacturing process that gave Roman concrete its longevity: Lime fragments were mixed with volcanic ash and other dry ingredients before the addition of water. Once water is added to this dry mix, heat is produced. As the concrete sets, this “hot-mixing” process traps and preserves the highly reactive lime as small, white, gravel-like features. When cracks form in the concrete, the lime clasts redissolve and fill the cracks, giving the concrete self-healing properties.

There was only one problem: The process Masic’s team described was different from the one described by the famed ancient Roman architect Vitruvius. Vitruvius literally wrote the book on ancient architecture. His highly influential work, “De architectura,” written in the 1st century B.C.E., is the first known book on architectural theory. In it, Vitruvius says that Romans added water to lime to create a paste-like material before mixing it with other ingredients.

“Having a lot of respect for Vitruvius, it was difficult to suggest that his description may be inaccurate,” Masic says. “The writings of Vitruvius played a critical role in stimulating my interest in ancient Roman architecture, and the results from my research contradicted these important historical texts.”

Now, Masic and his collaborators have confirmed that hot-mixing was indeed used by the Romans, a conclusion he reached by studying a newly discovered ancient construction site in Pompeii that was exquisitely preserved by the volcanic eruption of Mount Vesuvius in the year 79 C.E. They also characterized the volcanic ash material the Romans mixed with the lime, finding a surprisingly diverse array of reactive minerals that further added to the concrete’s ability to repair itself many years after these monumental structures were built.

“There is the historic importance of this material, and then there is the scientific and technological importance of understanding it,” Masic explains. “This material can heal itself over thousands of years, it is reactive, and it is highly dynamic. It has survived earthquakes and volcanoes. It has endured under the sea and survived degradation from the elements. We don’t want to completely copy Roman concrete today. We just want to translate a few sentences from this book of knowledge into our modern construction practices.”

The findings are described today in Nature Communications. Joining Masic on the paper are first authors Ellie Vaserman ’25 and Principal Research Scientist James Weaver, along with Associate Professor Kristin Bergmann, PhD candidate Claire Hayhow, and six other Italian collaborators.

Uncovering ancient secrets

Masic has spent close to a decade studying the chemical composition of the concrete that allowed Rome’s famous structures to endure for so much longer than their modern counterparts. His 2023 paper analyzed the material’s chemical composition to deduce how it was made.

That paper used samples from a city wall in Priverno in central Italy, which was conquered by the Romans in the 4th century B.C.E. But there was a question as to whether this wall was representative of other concrete structures built throughout the Roman empire.

The recent discovery by archaeologists of an active ancient construction site in Pompeii (complete with raw material piles and tools) therefore offered an unprecedented opportunity.

For the study, the researchers analyzed samples from these pre-mixed dry material piles, a wall that was in the process of being built, completed buttress and structural walls, and mortar repairs in an existing wall.

“We were blessed to be able to open this time capsule of a construction site and find piles of material ready to be used for the wall,” Masic says. “With this paper, we wanted to clearly define a technology and associate it with the Roman period in the year 79 C.E.”

The site offered the clearest evidence yet that the Romans used hot-mixing in concrete production. Not only did the concrete samples contain the lime clasts described in Masic’s previous paper, but the team also discovered intact quicklime fragments pre-mixed with other ingredients in a dry raw material pile, a critical first step in the preparation of hot-mixed concrete.

Bergmann, an associate professor of earth and planetary sciences, helped develop tools for differentiating the materials at the site.

“Through these stable isotope studies, we could follow these critical carbonation reactions over time, allowing us to distinguish hot-mixed lime from the slaked lime originally described by Vitruvius,” Masic says. “These results revealed that the Romans prepared their binding material by taking calcined limestone (quicklime), grinding it to a certain size, mixing it dry with volcanic ash, and then eventually adding water to create a cementing matrix.”

The researchers also analyzed the volcanic ingredients in the cement, including a type of volcanic ash called pumice. They found that the pumice particles chemically reacted with the surrounding pore solution over time, creating new mineral deposits that further strengthened the concrete.

Rewriting history

Masic says the archaeologists listed as co-authors on the paper were indispensable to the study. When Masic first entered the Pompeii site, as he inspected the perfectly preserved work area, tears came to his eyes.

“I expected to see Roman workers walking between the piles with their tools,” Masic says. “It was so vivid, you felt like you were transported in time. So yes, I got emotional looking at a pile of dirt. The archaeologists made some jokes.”

Masic notes that calcium is a key component in both ancient and modern concretes, so understanding how it reacts over time also sheds light on dynamic processes in modern cement. Toward that end, Masic has started a company, DMAT, that uses lessons from ancient Roman concrete to create long-lasting modern concretes.

“This is relevant because Roman cement is durable, it heals itself, and it’s a dynamic system,” Masic says. “The way these pores in volcanic ingredients can be filled through recrystallization is a dream process we want to translate into our modern materials. We want materials that regenerate themselves.”

As for Vitruvius, Masic guesses that he may have been misinterpreted. He points out that Vitruvius also mentions latent heat during the cement mixing process, which could suggest hot-mixing after all.

The work was supported, in part, by the MIT Research Support Committee (RSC) and the MIT Concrete Sustainability Hub.


Astrocyte diversity across space and time

A new atlas charts the diversity of an influential cell type in the brains of mice and marmosets.


When it comes to brain function, neurons get a lot of the glory. But healthy brains depend on the cooperation of many kinds of cells. The most abundant of the brain’s non-neuronal cells are astrocytes, star-shaped cells with a lot of responsibilities. Astrocytes help shape neural circuits, participate in information processing, and provide nutrient and metabolic support to neurons. Individual cells can take on new roles throughout their lifetimes, and at any given time, the astrocytes in one part of the brain will look and behave differently than the astrocytes somewhere else.

After an extensive analysis by researchers at MIT, neuroscientists now have an atlas detailing astrocytes’ dynamic diversity. Its maps depict the regional specialization of astrocytes across the brains of both mice and marmosets — two powerful models for neuroscience research — and show how their populations shift as brains develop, mature, and age. 

The open-access study, reported in the Nov. 20 issue of the journal Neuron, was led by Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT. This work was supported by the Hock E. Tan and K. Lisa Yang Center for Autism Research, part of the Yang Tan Collective at MIT, and the National Institutes of Health’s BRAIN Initiative.

“It’s really important for us to pay attention to non-neuronal cells’ role in health and disease,” says Feng, who is also the associate director of the McGovern Institute for Brain Research and the director of the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT. And indeed, these cells — once seen as mere supporting players — have gained more of the spotlight in recent years. Astrocytes are known to play vital roles in the brain’s development and function, and their dysfunction seems to contribute to many psychiatric disorders and neurodegenerative diseases. “But compared to neurons, we know a lot less — especially during development,” Feng adds.

Probing the unknown

Feng and Margaret Schroeder, a former graduate student in his lab, thought it was important to understand astrocyte diversity across three axes: space, time, and species. They knew from earlier work in the lab, done in collaboration with Steve McCarroll’s lab at Harvard University and led by Fenna Krienen in his group, that in adult animals, different parts of the brain have distinctive sets of astrocytes.

“The natural question was, how early in development do we think this regional patterning of astrocytes starts?” Schroeder says.

To find out, she and her colleagues collected brain cells from mice and marmosets at six stages of life, spanning embryonic development to old age. For each animal, they sampled cells from four different brain regions: the prefrontal cortex, the motor cortex, the striatum, and the thalamus.

Then, working with Krienen, who is now an assistant professor at Princeton University, they analyzed the molecular contents of those cells, creating a profile of genetic activity for each one. That profile was based on the mRNA copies of genes found inside the cell, which are known collectively as the cell’s transcriptome. Determining which genes a cell is using, and how active those genes are, gives researchers insight into a cell’s function and is one way of defining its identity.
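As a toy illustration of what comparing transcriptomes can involve (this is not the team’s actual pipeline, and the gene counts below are invented), a simple Pearson correlation between expression profiles can show that two cells from the same region look more alike than cells from different regions:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented mRNA counts for five genes in three hypothetical astrocytes
striatal_a = [120, 5, 40, 0, 33]
striatal_b = [100, 8, 35, 2, 30]
thalamic = [10, 90, 3, 60, 7]

same_region = pearson(striatal_a, striatal_b)   # near 1: similar profiles
cross_region = pearson(striatal_a, thalamic)    # negative: dissimilar profiles
```

Real single-cell analyses work with tens of thousands of genes and use clustering and dimensionality reduction rather than pairwise correlation alone, but the underlying idea of grouping cells by profile similarity is the same.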

Dynamic diversity

After assessing the transcriptomes of about 1.4 million brain cells, the group focused on the astrocytes, analyzing and comparing their patterns of gene expression. At every life stage, from before birth to old age, the team found regional specialization: astrocytes from different brain regions had similar patterns of gene expression, which were distinct from those of astrocytes in other brain regions.

This regional specialization was also apparent in the distinct shapes of astrocytes in different parts of the brain, which the team was able to see with expansion microscopy, a high-resolution imaging method developed by McGovern colleague Edward Boyden that reveals fine cellular features.

Notably, the astrocytes in each region changed as animals matured. “When we looked at our late embryonic time point, the astrocytes were already regionally patterned. But when we compare that to the adult profiles, they had completely shifted again,” Schroeder says. “So there’s something happening over postnatal development.” The most dramatic changes the team detected occurred between birth and early adolescence, a period during which brains rapidly rewire as animals begin to interact with the world and learn from their experiences.

Feng and Schroeder suspect that the changes they observed may be driven by the neural circuits that are sculpted and refined as the brain matures. “What we think they’re doing is kind of adapting to their local neuronal niche,” Schroeder says. “The types of genes that they are up-regulating and changing during development point to their interaction with neurons.” Feng adds that astrocytes may change their genetic programs in response to nearby neurons, or alternatively, they might help direct the development or function of local circuits as they adopt identities best suited to support particular neurons.

Both mouse and marmoset brains exhibited regional specialization of astrocytes and changes in those populations over time. But when the researchers looked at the specific genes whose activity defined various astrocyte populations, the data from the two species diverged. Schroeder calls this a note of caution for scientists who study astrocytes in animal models, and adds that the new atlas will help researchers assess the potential relevance of findings across species.

Beyond astrocytes

With a new understanding of astrocyte diversity, Feng says his team will pay close attention to how these cells are impacted by the disease-related genes they study and how those effects change during development. He also notes that the gene expression data in the atlas can be used to predict interactions between astrocytes and neurons. “This will really guide future experiments: how these cells’ interactions can shift with changes in the neurons or changes in the astrocytes,” he says.

The Feng lab is eager for other researchers to take advantage of the massive amounts of data they generated as they produced their atlas. Schroeder points out that the team analyzed the transcriptomes of all kinds of cells in the brain regions they studied, not just astrocytes. They are sharing their findings so researchers can use them to understand when and where specific genes are used in the brain, or dig in more deeply to further explore the brain’s cellular diversity.


Prognostic tool could help clinicians identify high-risk cancer patients

Using a versatile problem-solving framework, researchers show how early relapse in lymphoma patients influences their chance for survival.


Aggressive T-cell lymphoma is a rare and devastating form of blood cancer with a very low five-year survival rate. Patients often relapse after receiving initial therapy, making it especially challenging for clinicians to keep this destructive disease in check.

In a new study, researchers from MIT, in collaboration with researchers involved in the PETAL consortium at Massachusetts General Hospital, identified a practical and powerful prognostic marker that could help clinicians identify high-risk patients early, and potentially tailor treatment strategies to improve survival.

The team found that, when patients relapse within 12 months of initial therapy, their chances of survival decline dramatically. For these patients, targeted therapies might improve their chances for survival, compared to traditional chemotherapy, the researchers say.

According to their analysis, which used data collected from thousands of patients all over the world, the finding holds true across patient subgroups, regardless of the patient’s initial therapy or their score in a commonly used prognostic index.

A causal inference framework called Synthetic Survival Controls (SSC), developed as part of MIT graduate student Jessy (Xinyi) Han’s thesis, was central to this analysis. This versatile framework helps to answer “when-if” questions — to estimate how the timing of outcomes would shift under different interventions — while overcoming the limitations of inconsistent and biased data.

The identification of novel risk groups could guide clinicians as they select therapies to improve overall survival. For instance, a clinician might prioritize early-phase clinical trials over canonical therapies for this cohort of patients. The results could inform inclusion criteria for some clinical trials, according to the researchers.

The causal inference framework for survival analysis can also be applied more broadly. For instance, the MIT researchers have used it in areas like criminal justice to study how structural factors drive recidivism.

“Often we don’t only care about what will happen, but when the target event will happen. These when-if problems have remained under the radar for a long time, but they are common in a lot of domains. We’ve shown here that, to answer these questions with data, you need domain experts to provide insight and good causal inference methods to close the loop,” says Devavrat Shah, the Andrew and Erna Viterbi Professor in Electrical Engineering and Computer Science at MIT, a member of the Institute for Data, Systems, and Society (IDSS) and of the Laboratory for Information and Decision Systems (LIDS), and co-author of the study.

Shah is joined on the paper by many co-authors, including Han, who is co-advised by Shah and Fotini Christia, the Ford International Professor of the Social Sciences in the Department of Political Science and director of IDSS; and corresponding authors Mark N. Sorial, a clinical pharmacist and investigator at the Dana-Farber Cancer Institute, and Salvia Jain, a clinician-investigator at the Massachusetts General Hospital Cancer Center, founder of the global PETAL consortium, and an assistant professor of medicine at Harvard Medical School. The research appears today in the journal Blood.

Estimating outcomes

The MIT researchers have spent the past few years developing the Synthetic Survival Control causal inference framework, which enables them to answer complex “when-if” questions even when the available data are statistically challenging to work with. Their approach estimates when a target event will happen if a certain intervention is used.

In this paper, the researchers investigated an aggressive cancer called nodal mature T-cell lymphoma, and whether a certain prognostic marker led to worse outcomes. The marker, TTR12, signifies that a patient relapsed within 12 months of initial therapy.

They applied their framework to estimate when a patient would die if they had TTR12, and how their survival trajectory would differ if they did not have this prognostic marker.

“No experiment can answer that question because we are asking about two outcomes for the same patient. We have to borrow information from other patients to estimate, counterfactually, what a patient’s survival outcome would have been,” Han explains.

Answering these types of questions is notoriously difficult due to biases in the available observational data. Plus, patient data gathered from an international cohort bring their own unique challenges. For instance, a clinical dataset often contains some historical data about a patient, but at some point the patient may stop treatment, leading to incomplete records.

In addition, if a patient receives a specific treatment, that might impact how long they will survive, adding to the complexity of the data. Plus, for each patient, the researchers only observe one outcome on how long the patient survives — limiting the amount of data available.

Such issues lead to suboptimal performance of many classical methods.

The Synthetic Survival Control framework can overcome these challenges. Even though the researchers don’t know all the details for each patient, their method stitches information from multiple other patients together in such a way that it can estimate survival outcomes.

Importantly, their method is robust to specific modeling assumptions, making it broadly applicable in practice. 
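To give a concrete sense of the survival curves that underlie this kind of analysis, the sketch below implements the classical Kaplan-Meier estimator, which handles censored records (patients whose follow-up ends before an event is observed). This is a standard baseline, not the Synthetic Survival Controls framework itself, and the follow-up times are invented:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times:  follow-up duration for each patient (e.g., months)
    events: 1 if death was observed, 0 if the record was censored
    Returns a list of (time, survival probability) at each event time.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths, n = 0, at_risk  # n = patients still at risk just before t
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            at_risk -= 1
            i += 1
        if deaths:  # censored-only times leave the curve unchanged
            surv *= 1 - deaths / n
            curve.append((t, surv))
    return curve

# Five invented patients: deaths at 6, 10, and 15 months; two censored
curve = kaplan_meier([6, 10, 10, 15, 20], [1, 0, 1, 1, 0])
```

A counterfactual method like SSC goes further: instead of just plotting the curve for each group, it borrows information across patients to estimate what an individual patient’s curve would have looked like under the other condition.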

The power of prognostication

The researchers’ analysis revealed that TTR12 patients consistently had much greater risk of death within five years of initial therapy than patients without the marker. This was true no matter the initial therapy the patients received or which subgroup they fell into.

“This tells us that early relapse is a very important prognostic marker. This acts as a signal to clinicians so they can think about tailored therapies for these patients that can overcome resistance in the second or third line of treatment,” Han says.

Moving forward, the researchers are looking to expand this analysis to include high-dimensional genomics data. This information could be used to develop bespoke treatments that can avoid relapse within 12 months.

“Based on our work, there is already a risk calculation tool being used by clinicians. With more information, we can make it a richer tool that can provide more prognostic details,” Shah says.

They are also applying the framework to other domains.

For instance, in a paper recently presented at the Conference on Neural Information Processing Systems, the researchers identified a dramatic difference in the recidivism rate among prisoners of different races that begins about seven months after release. A possible explanation is the different access to long-term support by different racial groups. They are also investigating individuals’ decisions to leave insurance companies, while exploring other domains where the framework could generate actionable insights.

“Partnering with domain experts is crucial because we want to demonstrate that our methods are of value in the real world. We hope these tools can be used to positively impact individuals across society,” Han says.

This work was funded, in part, by Daiichi Sankyo, Secure Bio, Inc., Acrotech Biopharma, Kyowa Kirin, the Center for Lymphoma Research, the National Cancer Institute, Massachusetts General Hospital, the Reid Fund for Lymphoma Research, the American Cancer Society, and the Scarlet Foundation.


NIH Director Jay Bhattacharya visits MIT

In a conversation with Rep. Jake Auchincloss, Bhattacharya focused on the agency’s policy goals and funding practices.


National Institutes of Health (NIH) Director Jay Bhattacharya visited MIT on Friday, engaging in a wide-ranging discussion about policy issues and research aims at an event also featuring Rep. Jake Auchincloss MBA ’16 of Massachusetts.

The forum consisted of a dialogue between Auchincloss and Bhattacharya, followed by a question-and-answer session with an audience that included researchers from the greater Boston area. The event was part of a daylong series of stops Bhattacharya and Auchincloss made around Boston, a world-leading hub of biomedical research.

“I was joking with Dr. Bhattacharya that when the NIH director comes to Massachusetts, he gets treated like a celebrity, because we do science, and we take science very seriously here,” Auchincloss quipped at the outset.

Bhattacharya said he was “delighted” to be visiting, and credited the thousands of scientists who participate in peer review for the NIH. “The reason why the NIH succeeds is the willingness and engagement of the scientific community,” he said.

In response to an audience question, Bhattacharya also outlined his overall vision of the NIH’s portfolio of projects.

“You both need investments in ideas that are not tested, just to see if something works. You don’t know in advance,” he said. “And at the same time, you need an ecosystem that tests those ideas rigorously and winnows those ideas to the ones that actually work, that are replicable. A successful portfolio will have both elements in it.”

MIT President Sally A. Kornbluth gave opening remarks at the event, welcoming Bhattacharya and Auchincloss to campus and noting that the Institute’s earliest known NIH grant on record dates to 1948. In recent decades, biomedical research at MIT has boomed, expanding across a wide range of frontier fields.

Indeed, Kornbluth noted, MIT research projects federally funded during U.S. President Trump’s first term included a method for making anesthesia safer, especially for children and the elderly; a new type of expanding heart valve for children that eliminates the need for repeated surgeries; and a noninvasive Alzheimer’s treatment using sound and light stimulation, which is currently in clinical trials.

“Today, researchers across our campus pursue pioneering science on behalf of the American people, with profoundly important results,” Kornbluth said.

“The hospitals, universities, startups, investors, and companies represented here today have made greater Boston an extraordinary magnet for talent,” Kornbluth added. “Both as a force for progress in human health and an engine of economic growth, this community of talent is a precious national asset. We look forward to working with Dr. Bhattacharya to build on its strengths.”

The discussion occurred amid uncertainty about future science funding levels and pending changes in the NIH’s grant-review processes. The NIH has announced a “unified strategy” for reviewing grant applications that may lead to more direct involvement in grant decisions by directors of the 27 NIH institutes and centers, along with other changes that could shift the types of awards being made.

Auchincloss asked multiple questions about the ongoing NIH changes; about 10 audience members from a variety of institutions also posed a range of questions to Bhattacharya, often about the new grant-review process and the aims of the changes.

“The unified funding strategy is a way to allow institute directors to look at the full range of scoring, including scores on innovation, and pick projects that look like they are promising,” Bhattacharya said in response to one of Auchincloss’ queries.

One audience member also emphasized concerns about the long-term effects of funding uncertainties on younger scientists in the U.S.

“The future success of the American biomedical enterprise depends on us training the next generation of scientists,” Bhattacharya acknowledged.

Bhattacharya is the 18th director of the NIH, having been confirmed by the U.S. Senate in March. He has served as a faculty member at Stanford University, where he received his BA, MA, MD, and PhD, and is currently a professor emeritus. During his career, Bhattacharya’s work has often examined the economics of health care, though his research has ranged broadly across topics, in over 170 published papers. He has also served as director of the Center on the Demography and Economics of Health and Aging at Stanford University.

Auchincloss is in his third term as the U.S. Representative to Congress from the 4th district in Massachusetts, having first been elected in 2020. He is also a major in the Marine Corps Reserve, and received his MBA from the MIT Sloan School of Management.

Ian Waitz, MIT’s vice president for research, concluded the session with a note of thanks to Auchincloss and Bhattacharya for their “visit to the greater Boston ecosystem which has done so much for so many and contributed obviously to the NIH mission that you articulated.” He added: “We have such a marvelous history in this region in making such great gains for health and longevity, and we’re here to do more to partner with you.”


When companies “go green,” air quality impacts can vary dramatically

Cutting air travel and purchasing renewable energy can lead to different effects on overall air quality, even while achieving the same CO2 reduction, new research shows.


Many organizations are taking actions to shrink their carbon footprint, such as purchasing electricity from renewable sources or reducing air travel.

Both actions would cut greenhouse gas emissions, but which offers greater societal benefits?

In a first step toward answering that question, MIT researchers found that even if each activity reduces the same amount of carbon dioxide emissions, the broader air quality impacts can be quite different.

They used a multifaceted modeling approach to quantify the air quality impacts of each activity, using data from three organizations. Their results indicate that air travel causes about three times more damage to air quality than comparable electricity purchases.

Exposure to major air pollutants, including ground-level ozone and fine particulate matter, can lead to cardiovascular and respiratory disease, and even premature death.

In addition, air quality impacts can vary dramatically across regions, because each decarbonization action influences pollution at a different scale. For organizations in the northeast U.S., for example, the air quality impacts of energy use are felt mostly within the region, while the impacts of air travel are felt globally, because aviation pollutants are emitted at higher altitudes.

Ultimately, the researchers hope this work highlights how organizations can prioritize climate actions to provide the greatest near-term benefits to people’s health.

“If we are trying to get to net zero emissions, that trajectory could have very different implications for a lot of other things we care about, like air quality and health impacts. Here we’ve shown that, for the same net zero goal, you can have even more societal benefits if you figure out a smart way to structure your reductions,” says Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS); director of the Center for Sustainability Science and Strategy; and senior author of the study.

Selin is joined on the paper by lead author Yuang (Albert) Chen, an MIT graduate student; Florian Allroggen, a research scientist in the MIT Department of Aeronautics and Astronautics; Sebastian D. Eastham, an associate professor in the Department of Aeronautics at Imperial College London; Evan Gibney, an MIT graduate student; and William Clark, the Harvey Brooks Research Professor of International Science at Harvard University. The research was published Friday in Environmental Research Letters.

A quantification quandary

Climate scientists often focus on the air quality benefits of national or regional policies because the aggregate impacts are more straightforward to model.

Organizations’ efforts to “go green” are much harder to quantify because they exist within larger societal systems and are impacted by these national policies.

To tackle this challenging problem, the MIT researchers used data from two universities and one company in the greater Boston area. They studied whether organizational actions that remove the same amount of CO2 from the atmosphere would have an equivalent benefit on improving air quality.

“From a climate standpoint, CO2 has a global impact because it mixes through the atmosphere, no matter where it is emitted. But air quality impacts are driven by co-pollutants that act locally, so where those emissions occur really matters,” Chen says.

For instance, burning fossil fuels leads to emissions of nitrogen oxides and sulfur dioxide along with CO2. These co-pollutants react with chemicals in the atmosphere to form fine particulate matter and ground-level ozone, which is a primary component of smog.

Different fossil fuels cause varying amounts of co-pollutant emissions. In addition, local factors like weather and existing emissions affect the formation of smog and fine particulate matter. The impacts of these pollutants also depend on the local population distribution and overall health.

“You can’t just assume that all CO2-reduction strategies will have equivalent near-term impacts on sustainability. You have to consider all the other emissions that go along with that CO2,” Selin says.

The researchers used a systems-level approach that involved connecting multiple models. They fed the organizational energy consumption and flight data into this systems-level model to examine local and regional air quality impacts.

Their approach incorporated many interconnected elements, such as power plant emissions data, statistical linkages between air quality and mortality outcomes, and aviation emissions associated with specific flight routes. They fed those data into an atmospheric chemistry transport model to calculate air quality and climate impacts for each activity.
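The chained structure of that pipeline can be sketched in a few lines. The stages below mirror the description above, but every function body and number is an invented stand-in; the actual study relies on detailed emissions inventories and an atmospheric chemistry transport model, not simple linear factors like these.

```python
# Toy sketch of the modeling chain described above: activity data is
# converted to emissions, emissions to pollutant exposure, and exposure
# to monetized health damages. All conversion factors here are made up
# purely to show how the stages compose.

def activity_to_emissions(kwh, kg_co2_per_kwh=0.4, kg_nox_per_kwh=0.0005):
    """Convert electricity purchases into CO2 and co-pollutant (NOx) emissions."""
    return {"co2_kg": kwh * kg_co2_per_kwh, "nox_kg": kwh * kg_nox_per_kwh}

def emissions_to_exposure(emissions, dispersion=0.01):
    """Stand-in for the atmospheric transport step: the fraction of emitted
    NOx that ends up as population-weighted pollutant exposure."""
    return emissions["nox_kg"] * dispersion

def exposure_to_damages(exposure, dollars_per_kg=100.0):
    """Stand-in for the health-impact step: monetize exposure."""
    return exposure * dollars_per_kg

emissions = activity_to_emissions(1_000_000)  # 1 GWh of electricity purchases
damages = exposure_to_damages(emissions_to_exposure(emissions))
```

In the real system, each stage is itself a complex model, which is why the researchers needed the sensitivity analyses described next to confirm that the overall pipeline behaved sensibly.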

The sheer breadth of the system created many challenges.

“We had to do multiple sensitivity analyses to make sure the overall pipeline was working,” Chen says.

Analyzing air quality

Finally, the researchers monetized air quality impacts so they could be compared with climate impacts in a consistent way. Based on prior literature, the monetized climate impacts of CO2 emissions are about $170 per ton (in 2015 dollars), representing the financial cost of the damages caused by climate change.

Using the same monetization method, the researchers calculated that air quality damages associated with electricity purchases add $88 per ton of CO2, while the damages from air travel add $265 per ton.

This highlights how the air quality impacts of a ton of emitted CO2 depend strongly on where and how the emissions are produced.
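Plugging in the per-ton figures quoted above gives a quick sense of scale. The sketch below uses only the article's numbers ($170, $88, and $265 per ton); the totals and the ratio are simple arithmetic derived here, not results reported by the study itself.

```python
# Combine the study's monetized climate damage with the added air
# quality damages it reports for each activity (2015 dollars per ton
# of CO2). Totals and the ratio are computed here, not quoted.

CLIMATE_DAMAGE = 170                                         # $/ton CO2
AIR_QUALITY_DAMAGE = {"electricity": 88, "air travel": 265}  # added $/ton CO2

for activity, aq in AIR_QUALITY_DAMAGE.items():
    total = CLIMATE_DAMAGE + aq
    print(f"{activity}: ${total} per ton in total damages")

# The ratio behind the "about three times more damage" comparison:
ratio = AIR_QUALITY_DAMAGE["air travel"] / AIR_QUALITY_DAMAGE["electricity"]
print(f"air travel vs. electricity: {ratio:.1f}x")
```

The 265/88 ratio works out to roughly 3.0, matching the "about three times more damage" figure cited at the top of the article.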

“A real surprise was how much aviation impacted places that were really far from these organizations. Not only were flights more damaging, but the pattern of damage, in terms of who is harmed by air pollution from that activity, is very different than who is harmed by energy systems,” Selin says.

Most airplane emissions occur at high altitudes, where differences in atmospheric chemistry and transport can amplify their air quality impacts. These emissions are also carried across continents by atmospheric winds, affecting people thousands of miles from their source.

Nations like India and China face outsized air quality impacts from such emissions due to the higher level of existing ground-level emissions, which exacerbates the formation of fine particulate matter and smog.

The researchers also conducted a deeper analysis of short-haul flights. Their results showed that regional flights have a relatively larger impact on local air quality than longer domestic flights.

“If an organization is thinking about how to benefit the neighborhoods in their backyard, then reducing short-haul flights could be a strategy with real benefits,” Selin says.

Even in electricity purchases, the researchers found that location matters.

For instance, the fine particulate matter from power plant emissions attributable to one university falls over a densely populated region, while the emissions attributable to the corporation fall over less populated areas.

Due to these population differences, the university’s emissions resulted in 16 percent more estimated premature deaths than those of the corporation, even though the climate impacts are identical.

“These results show that, if organizations want to achieve net zero emissions while promoting sustainability, which unit of CO2 gets removed first really matters a lot,” Chen says.

In the future, the researchers want to quantify the air quality and climate impacts of train travel, to see whether replacing short-haul flights with train trips could provide benefits.

They also want to explore the air quality impacts of other energy sources in the U.S., such as data centers.

This research was funded, in part, by Biogen, Inc., the Italian Ministry for Environment, Land, and Sea, and the MIT Center for Sustainability Science and Strategy. 


Paula Hammond named dean of the School of Engineering

A chemical engineer who now serves as executive vice provost, Hammond will succeed Anantha Chandrakasan.


Paula Hammond ’84, PhD ’93, an Institute Professor and MIT’s executive vice provost, has been named dean of MIT’s School of Engineering, effective Jan. 16. She will succeed Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science, who was appointed MIT’s provost in July.

Hammond, who was head of the Department of Chemical Engineering from 2015 to 2023, has also served as MIT’s vice provost for faculty. She will be the first woman to hold the role of dean of MIT’s School of Engineering.

“From the rigor and creativity of her scientific work to her outstanding record of service to the Institute, Paula Hammond represents the very best of MIT,” says MIT President Sally Kornbluth. “Wise, thoughtful, down-to-earth, deeply curious, and steeped in MIT’s culture and values, Paula will be a highly effective leader for the School of Engineering. I’m delighted she accepted this new challenge.”

Hammond, who is also a member of MIT’s Koch Institute for Integrative Cancer Research, has earned many accolades for her work developing polymers and nanomaterials that can be used for applications including drug delivery, regenerative medicine, noninvasive imaging, and battery technology.

Chandrakasan announced Hammond’s appointment today in an email to the MIT community, writing, “Ever since enrolling at MIT as an undergraduate, Paula has built a remarkable record of accomplishment in scholarship, teaching, and service. Faculty, staff, and students across the Institute praise her wisdom, selflessness, and kindness, especially when it comes to enabling others’ professional growth and success.”

“Paula is a scholar of extraordinary distinction. It is hard to overstate the value of the broad contributions she has made in her field, which have significantly expanded the frontiers of knowledge,” Chandrakasan told MIT News. “Any one of her many achievements could stand as the cornerstone of an outstanding academic career. In addition, her investment in mentoring the next generation of scholars and building community is unparalleled.”

Chandrakasan also thanked Professor Maria Yang, who has served as the school’s interim dean in recent months. “In a testament to her own longstanding contributions to the School of Engineering, Maria took on the deanship even while maintaining leadership roles with the Ideation Lab, D-Lab, and Morningside Academy for Design. For her excellent service and leadership, Maria deserves our deep appreciation,” he wrote to the community.

Building a sense of community

Throughout her career at MIT, Hammond has helped to create a supportive environment in which faculty and students can do their best work. As vice provost for faculty, a role Hammond assumed in 2023, she developed and oversaw new efforts to improve faculty recruitment and retention, mentoring, and professional development. Earlier this year, she took on additional responsibilities as executive vice provost, providing guidance and oversight for a number of Institute-wide initiatives.

As head of the Department of Chemical Engineering, Hammond worked to strengthen the department’s sense of community and initiated a strategic planning process that led to more collaborative research between faculty members. Under her leadership, the department also launched a major review of its undergraduate curriculum and introduced more flexibility into the requirements for a chemical engineering degree.

Another major priority was ensuring that faculty had the support they needed to pursue new research goals. To help achieve that, she established and raised funds for a series of Faculty Research Innovation Fund grants for mid-career faculty who wanted to explore fresh directions.

“I really enjoyed enabling faculty to explore new areas, finding ways to resource them, making sure that they had the right mentoring early in their career and the ‘wind beneath their wings’ that they needed to get where they wanted to go,” she says. “That, to me, was extremely fulfilling.”

Before taking on her official administrative roles, Hammond served the Institute through her work chairing committees that contributed landmark reports on gender and race at MIT: the Initiative for Faculty Race and Diversity and the Academic and Organizational Relationships Working Group.

In her new role as dean, Hammond plans to begin by consulting with faculty across the School of Engineering to learn more about their needs.

“I like to start with conversations,” she says. “I’m very excited about the idea of visiting each of the departments, finding out what’s on the minds of the faculty, and figuring out how we can meaningfully address their needs and continue to build and grow an excellent engineering program.”

One of her goals is to promote greater cross-disciplinarity in MIT’s curriculum, in part by encouraging and providing resources for faculty to develop more courses that bridge multiple departments.

“There are some barriers that exist between departments, because we all need to teach our core requirements,” she says. “I am very interested in collaborating with departments to think about how we can lower barriers to allow faculty to co-teach, or to perhaps look at different course structures that allow us to teach a core component and then have it branch to a more specialized component.”

She also hopes to guide MIT’s engineering departments in finding ways to incorporate artificial intelligence into their curriculum, and to give students greater opportunity for relevant hands-on experiences in engineering.

“I am particularly excited to build from the strong cross-disciplinary efforts and the key strategic initiatives that Anantha launched during his time as dean,” Hammond says. “I believe we have incredible opportunities to build off these critical areas at the interfaces of science, engineering, the humanities, arts, design, and policy, and to create new emergent fields. MIT should be the leader in providing educational foundations that prepare our students for a highly interdisciplinary and AI-enabled world, and a setting that enables our researchers and scholars to solve the most difficult and urgent problems of the world.”

A pioneer in nanotechnology

Hammond grew up in Detroit, where her father was a PhD biochemist who ran the health laboratories for the city of Detroit. Her mother founded a nursing school at Wayne County Community College, and both parents encouraged her interest in science. As an undergraduate at MIT, she majored in chemical engineering with a focus on polymer chemistry.

After graduating in 1984, Hammond spent two years working as a process engineer at Motorola, then earned a master’s degree in chemical engineering from Georgia Tech. She realized that she wanted to pursue a career in academia, and returned to MIT to earn a PhD in polymer science and technology. After finishing her degree in 1993, she spent a year and a half as a postdoc at Harvard University before joining the MIT faculty in 1995.

She became a full professor in 2006, and in 2021, she was named an Institute Professor, the highest honor bestowed by MIT. In 2010, Hammond joined MIT’s Koch Institute for Integrative Cancer Research, where she leads a lab that is developing novel nanomaterials for a variety of applications, with a primary focus on treatments and diagnostics for ovarian cancer.

Early in her career, Hammond developed a technique for generating functional thin-film materials by stacking layers of charged polymeric materials. This approach can be used to build polymers with highly controlled architectures by alternately exposing a surface to positively and negatively charged particles.

She has used this layer-by-layer assembly technique to build ultrathin batteries, fuel cell electrodes, and drug delivery nanoparticles that can be specifically targeted to cancer cells. These particles can be tailored to carry chemotherapy drugs such as cisplatin, immunotherapy agents, or nucleic acids such as messenger RNA.

In recognition of her pioneering research, Hammond was awarded the 2024 National Medal of Technology and Innovation. She was also the 2023-24 recipient of MIT’s Killian Award, which honors extraordinary professional achievements by an MIT faculty member. Her many other awards include the Benjamin Franklin Medal in Chemistry in 2024, the ACS Award in Polymer Science in 2018, the American Institute of Chemical Engineers Charles M. A. Stine Award in Materials Engineering and Science in 2013, and the Ovarian Cancer Research Program Teal Innovator Award in 2013.

Hammond has also been honored for her dedication to teaching and mentoring. As a reflection of her excellence in those areas, she was awarded the Irwin Sizer Award for Significant Improvements to MIT Education, the Henry Hill Lecturer Award in 2002, and the Junior Bose Faculty Award in 2000. She also co-chaired the recent Ad Hoc Committee on Faculty Advising and Mentoring, and has been selected as a “Committed to Caring” honoree for her work mentoring students and postdocs in her research group.

Hammond has served on the President’s Council of Advisors on Science and Technology, as well as the U.S. Secretary of Energy Scientific Advisory Board, the NIH Center for Scientific Review Advisory Council, and the Board of Directors of the American Institute of Chemical Engineers. Additionally, she is one of a small group of scientists who have been elected to the National Academies of Engineering, Sciences, and Medicine.


MADMEC winners develop spray-on coating to protect power lines from ice

Placing first in the MADMEC innovation contest, the MITten team aims to curb costly power outages during winter storms.


A spray-on coating to keep power lines standing through an ice storm may not be the obvious fix for winter outages — but it’s exactly the kind of innovation that happens when MIT students tackle a sustainability challenge.

“The big threat to the power line network is winter icing that causes huge amounts of downed lines every year,” says Trevor Bormann, a graduate student in MIT’s Department of Materials Science and Engineering (DMSE) and member of MITten, the winning team in the 2025 MADMEC innovation contest. Fixing those outages is hugely carbon-intensive, requiring diesel-powered equipment, replacement materials, and added energy use. And as households switch to electric heat pumps, the stakes of a prolonged outage rise.

To address the challenge, the team developed a specialized polymer coating that repels water and can be sprayed onto aluminum power lines. The coating contains nanofillers — particles hundreds of times smaller than a human hair — that give the surface a texture that makes water bead and drip off.

The effect is known as “superhydrophobicity,” says Shaan Jagani, a graduate student in the Department of Aeronautics and Astronautics. “And what that really means is water does not stay on the surface, and therefore water will not have the opportunity to nucleate down into ice.”

MITten — pronounced “mitten” — won the $10,000 first prize in the contest, hosted by DMSE on Nov. 10 at MIT, where audience presentations and poster sessions capped months of design and experimentation. Since 2007, MADMEC (the Making and Designing Materials Engineering Contest), funded by Dow and Saint-Gobain, has given students a chance to tackle real-world sustainability challenges, with each team receiving $1,000 to build and test their projects. Judges evaluated the teams’ work from conception to prototype.

MADMEC winners have gone on to succeed in major innovation competitions such as MassChallenge, and at least six startups — including personal cooling wristband maker Embr and vehicle-motion-control company ClearMotion — trace their roots to the contest.

Cold inspiration

The idea for the MITten project came in part from Bormann’s experience growing up in South Dakota, where winter outages were common. His home was heated by natural gas, but if grid-reliant heat pumps had warmed it through below-zero winter months, a days-long outage would have been “really rough.”

“I love the part of sustainability that is focused on developing all these new technologies for electricity generation and usage, but also the distribution side of it shouldn’t be neglected, either,” Bormann says. “It’s important for all those to be growing synergistically, and to be paying attention to all aspects of it.”

And there’s an opportunity to make distribution infrastructure more durable: An estimated 50,000 miles of new power lines are planned over the next decade in the northern United States, where icing is a serious risk.

To test their coating, the team built an icing chamber to simulate rain and freezing conditions, comparing coated versus uncoated aluminum samples at –10 degrees Celsius (14 degrees Fahrenheit). They also dipped samples in liquid nitrogen to evaluate performance in extreme cold and simulated real-world stresses such as lines swaying in windstorms.

“We basically coated aluminum substrates and then bent them to demonstrate that the coating itself could accommodate very long strains,” Jagani says.

The team ran simulations to estimate that a typical outage affecting 20 percent of a region could cost about $7 million to repair. “But if you fully coat, say, 1,000 kilometers of line, you actually can save $1 million in just material costs,” says DMSE grad student Matthew Michalek. The team hopes to further refine the coating with more advanced materials and test them in a professional icing chamber.

Amber Velez, a graduate student in the Department of Mechanical Engineering, stressed the parameters of the contest — working within a $1,000 budget.

“I feel we did quite good work with quite a lot of legitimacy, but I think moving on, there is a lot of space that we could have more play in,” she says. “We’ve definitely not hit the ceiling yet, and I think there’s a lot of room to keep growing.”

Compostable electrodes, microwavable ceramics

The second-place, $6,000 prize went to Electrodiligent, which is designing a biodegradable, compostable alternative to electrodes used for heart monitoring. Their prototype uses a cellulose paper backing and a conductive gel made from gelatin, glycerin, and sodium chloride to carry the electric signal.

Comparing electrocardiogram (ECG) results, the team found their electrodes performed similarly to the 3M Red Dot standard. “We’re very optimistic about this result,” says Ethan Frey, a DMSE graduate student.

The invention aims to cut into the 3.6 tons of medical waste produced each day, but judges noted that adhesive electrodes are almost always incinerated for health and safety reasons, making the intended application a tough fit.

“But there’s a whole host of other directions the team could go in,” says Mike Tarkanian, senior lecturer in DMSE and coordinator of MADMEC.

The $4,000 third prize went to Cerawave, a team made up mostly of undergraduates and a member the team jokingly called a “token grad student,” working to make ceramics in an ordinary kitchen microwave. Traditional ceramic manufacturing requires high-temperature kilns, a major source of energy use and carbon emissions. Cerawave added silicon carbide to their ceramic mix to help it absorb microwave energy and fuse into a durable final product.

“We threw it on the ground a few times, and it didn’t break,” says Merrill Chiang, a junior in DMSE, drawing laughs from the audience. The team now plans to refine their recipe and overall ceramic-making process so that hobbyists — and even users in environments like the International Space Station — could create ceramic parts “without buying really expensive furnaces.”

The power of student innovation

Although it didn’t earn a prize, the contest’s most futuristic project was ReForm Designs, which aims to make reusable children’s furniture — expensive and quickly outgrown — from modular blocks made of mycelium, the root-like, growth-driving part of a mushroom. The team showed they could successfully produce mycelium blocks, but slow growth and sensitivity to moisture and temperature meant they didn’t yet have full furniture pieces to show judges.

The project still impressed DMSE senior David Miller, who calls the blocks “really intriguing,” with potential applications beyond furniture in manufacturing, construction, and consumer products.

“They adapt to the way we consume products, where a lot of us use products for one, two, three years before we throw them out,” Miller says. “Their capacity to be fully biodegradable and molded into any shape fills the need for certain kinds of additive manufacturing that requires certain shapes, while also being extremely sustainable.”

While the contest has produced successful startups, Tarkanian says MADMEC’s original goal — giving students a chance to get their hands dirty and pursue their own ideas — is thriving 18 years on, especially at a time when research budgets are being cut and science is under scrutiny.

“It gives students an opportunity to make things that are real and impactful to society,” he says. “So when you can build a prototype and say, ‘This is going to save X millions of dollars or X million pounds of waste,’ that value is obvious to everyone.”

Attendee Jinsung Kim, a postdoc in mechanical engineering, echoed Tarkanian’s comments, emphasizing the space set aside for innovative thinking.

“MADMEC creates the rare environment where students can experiment boldly, validate ideas quickly, and translate core scientific principles into solutions with real societal impact. To move society forward, we have to keep pushing the boundaries of technology and fundamental science,” he says.


MIT researchers “speak objects into existence” using AI and robotics

The speech-to-reality system combines 3D generative AI and robotic assembly to create objects on demand.


Generative AI and robotics are moving us ever closer to the day when we can ask for an object and have it created within a few minutes. In fact, MIT researchers have developed a speech-to-reality system, an AI-driven workflow that allows them to provide input to a robotic arm and “speak objects into existence,” creating things like furniture in as little as five minutes.  

With the speech-to-reality system, a robotic arm mounted on a table is able to receive spoken input from a human, such as “I want a simple stool,” and then construct the object out of modular components. To date, the researchers have used the system to create stools, shelves, chairs, a small table, and even decorative items such as a dog statue.

“We’re connecting natural language processing, 3D generative AI, and robotic assembly,” says Alexander Htet Kyaw, an MIT graduate student and Morningside Academy for Design (MAD) fellow. “These are rapidly advancing areas of research that haven’t been brought together before in a way that you can actually make physical objects just from a simple speech prompt.”  

The idea started when Kyaw — a graduate student in the departments of Architecture and Electrical Engineering and Computer Science — took Professor Neil Gershenfeld’s course, “How to Make Almost Anything.” In that class, he built the speech-to-reality system. He continued working on the project at the MIT Center for Bits and Atoms (CBA), directed by Gershenfeld, collaborating with graduate students Se Hwan Jeon of the Department of Mechanical Engineering and Miana Smith of CBA.

The speech-to-reality system begins with speech recognition that processes the user’s request using a large language model, followed by 3D generative AI that creates a digital mesh representation of the object, and a voxelization algorithm that breaks down the 3D mesh into assembly components.

After that, geometric processing modifies the AI-generated assembly to account for fabrication and physical constraints associated with the real world, such as the number of components, overhangs, and connectivity of the geometry. This is followed by creation of a feasible assembly sequence and automated path planning for the robotic arm to assemble physical objects from user prompts.
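The voxelization step, where a generated mesh becomes a set of modular building blocks, can be illustrated with a toy example. This is not the team's code: the grid resolution, the vertex-binning approach, and the bottom-up assembly ordering are all assumptions made purely to show the idea.

```python
# Hypothetical sketch of the voxelization stage: bin a 3D mesh's
# vertices into a coarse grid of unit cells, each occupied cell standing
# in for one modular block the robot would place. The real system's
# algorithm and resolution are not public; this is illustrative only.

def voxelize(vertices, cell_size=1.0):
    """Return the set of occupied (i, j, k) grid cells for (x, y, z) vertices."""
    occupied = set()
    for x, y, z in vertices:
        occupied.add((int(x // cell_size), int(y // cell_size), int(z // cell_size)))
    return occupied

def bottom_up_sequence(cells):
    """A naive assembly order: place lower layers first so every block
    rests on something beneath it."""
    return sorted(cells, key=lambda c: (c[2], c[1], c[0]))

# A tiny "stool": four base blocks plus a partial seat layer.
stool_vertices = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (2, 2, 0),
                  (0, 0, 1), (1, 0, 1), (2, 0, 1)]
cells = voxelize(stool_vertices)
plan = bottom_up_sequence(cells)
```

A real planner must also handle the constraints the article mentions, such as overhangs and connectivity, which a simple layer-by-layer ordering like this one ignores.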

By leveraging natural language, the system makes design and manufacturing more accessible to people without expertise in 3D modeling or robotic programming. And, unlike 3D printing, which can take hours or days, this system builds within minutes.

“This project is an interface between humans, AI, and robots to co-create the world around us,” Kyaw says. “Imagine a scenario where you say ‘I want a chair,’ and within five minutes a physical chair materializes in front of you.”

The team has immediate plans to improve the weight-bearing capability of the furniture by changing the means of connecting the cubes from magnets to more robust connections. 

“We’ve also developed pipelines for converting voxel structures into feasible assembly sequences for small, distributed mobile robots, which could help translate this work to structures at any size scale,” Smith says.

The purpose of using modular components is to cut the waste that goes into making physical objects: an object can be disassembled and its parts reassembled into something different, for instance turning a sofa into a bed when you no longer need the sofa.

Because Kyaw also has experience using gesture recognition and augmented reality to interact with robots in the fabrication process, he is currently working on incorporating both speech and gestural control into the speech-to-reality system.

Leaning into his memories of the replicator in the “Star Trek” franchise and the robots in the animated film “Big Hero 6,” Kyaw explains his vision.

“I want to increase access for people to make physical objects in a fast, accessible, and sustainable manner,” he says. “I’m working toward a future where the very essence of matter is truly in your control. One where reality can be generated on demand.”

The team presented their paper “Speech to Reality: On-Demand Production using Natural Language, 3D Generative AI, and Discrete Robotic Assembly” at the Association for Computing Machinery (ACM) Symposium on Computational Fabrication (SCF ’25) held at MIT on Nov. 21. 


Cultivating confidence and craft across disciplines

Professors Rohit Karnik and Nathan Wilmers are honored as “Committed to Caring.”


Both Rohit Karnik and Nathan Wilmers personify the type of mentorship that any student would be fortunate to receive — one rooted in intellectual rigor and grounded in humility, empathy, and personal support. They show that transformative academic guidance is not only about solving research problems, but about lifting up the people working on them.

Whether it’s Karnik’s quiet integrity and commitment to scientific ethics, or Wilmers’ steadfast encouragement of his students in the face of challenges, both professors cultivate spaces where students are not only empowered to grow as researchers, but affirmed as individuals. Their mentees describe feeling genuinely seen and supported; mentored not just in theory or technique, but in resilience. It’s this attention to the human element that leaves a lasting impact.

Professors Karnik and Wilmers are two members of the 2023–25 Committed to Caring cohort who are cultivating confidence and craft across disciplines. The Committed to Caring program recognizes faculty who, in the eyes of MIT graduate students, go above and beyond as mentors.

Rohit Karnik: Rooted in rigor, guided by care

Rohit Karnik is Abdul Latif Jameel Professor in the Department of Mechanical Engineering at MIT, where he leads the Microfluidics and Nanofluidics Research Group and serves as director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). His research explores the physics of micro- and nanofluidic flows and systems. Applications of his work include the development of water filters, portable diagnostic tools, and sensors for environmental monitoring. 

Karnik is genuinely excited about his students’ ideas, and open to their various academic backgrounds. He validates students by respecting their research, encouraging them to pursue their interests, and showing enthusiasm for their exploration within mechanical engineering and beyond.

One student reflected on the manner in which Karnik helped them feel more confident in their academic journey. When a student from a non-engineering field joined the mechanical engineering graduate program, Karnik never viewed their background as a barrier to success. The student wrote, “from the start, he was enthusiastic about my interdisciplinarity and the perspective I could bring to the lab.”

He allowed the student to take remedial undergraduate classes to learn engineering basics, provided guidance on leveraging their previous academic background, and encouraged them to write grants and apply for fellowships that would support their interdisciplinary work. In addition to these concrete supports, Karnik also provided the student with the freedom to develop their own ideas, offering constructive, realistic feedback on what was attainable. 

“This transition took time, and Karnik honored that, prioritizing my growth in a completely new field over getting quick results,” the nominator reflected. Ultimately, Karnik’s mentorship, patience, and thoughtful encouragement led the student to excel in the engineering field.

Karnik encourages his advisees to explore their interests in mechanical engineering and beyond. This holistic approach extends beyond academics and into Karnik’s view of his students as whole individuals. One student wrote that he treats them as complete humans, with ambitions, aspirations, and passions worthy of his respect and consideration — and remains truly selfless in his commitment to their growth and success.

Karnik emphasizes that “it’s important to have dreams,” regularly encouraging his mentees to take advantage of opportunities that align with their goals and values. This sentiment is felt deeply by his students, with one nominator sharing that Karnik “encourag[ed] me to think broadly and holistically about my life, which has helped me structure and prioritize my time at MIT.”

Nathan Wilmers: Cultivating confidence, craft, and care

Nathan Wilmers is the Sarofim Family Career Development Associate Professor of Work and Organizations at the MIT Sloan School of Management. His research spans wage and earnings inequality, economic sociology, and the sociology of labor, bringing insights from economic sociology to the study of labor markets and the wage structure. He is also affiliated with the Institute for Work and Employment Research and the Economic Sociology program at Sloan.

A remarkable mentor, Wilmers is known for guiding his students through different projects while also teaching them more broadly about the system of academia. As one nominator illustrates, “he … helped me learn the ‘tacit’ knowledge to understand how to write a paper,” while also emphasizing the learning process of the PhD as a whole and never reprimanding students for mistakes along the way.

Students say that Wilmers “reassures us that making mistakes is a natural part of the learning process and encourages us to continuously check, identify, and rectify them.” He welcomes all questions without judgment, and generously invests his time and patience in teaching students.

Wilmers is a strong advocate for his students, both academically and personally. He emphasizes the importance of learning, growth, and practical experience, rather than solely focusing on scholarly achievements and goals. Students feel this care, describing “an environment that maximizes learning opportunities and fosters the development of skills,” allowing them to truly collaborate rather than simply aim for the “right” answers.

In addition to his role in the classroom and lab, Wilmers also provides informal guidance to advisees, imparting valuable knowledge about the academic system, emphasizing the significance of networking, and sharing insider information. 

“Nate’s down-to-earth nature is evident in his accessibility to students,” expressed one nominator, who wrote that “sometimes we can freely approach his office without an appointment and receive valuable advice on both work-related and personal matters.” Moreover, Wilmers prioritizes his advisees’ career advancement, dedicating a substantial amount of time to providing feedback on thesis projects, and even encouraging students to take a lead in publishing research.

True mentorship often lies in the patient, careful transmission of craft — the behind-the-scenes work that forms the backbone of rigorous research. “I care about the details,” says Wilmers, reflecting a philosophy shaped by his own graduate advisors. Wilmers’ mentors instilled in him a deep respect for the less-glamorous but essential elements of scholarly work: data cleaning, thoughtful analysis, and careful interpretation. These technical and analytical skills are where real learning happens, he believes. 

By modeling this approach with his own students, Wilmers creates a culture where precision and discipline are valued just as much as innovation. His mentorship is grounded in the belief that becoming a good researcher requires not just vision, but also an intimate understanding of process — of how ideas are sharpened through methodical practice, and how impact comes from doing the small things well. His thoughtful, detail-oriented mentorship leaves a lasting impression on his students.

One nominator wrote, “Nate’s strong enthusiasm for my research, coupled with his expressed confidence and affirmation of its value, served as a significant source of motivation for me to persistently pursue my ideas.”


Robots that spare warehouse workers the heavy lifting

Founded by MIT alumni, the Pickle Robot Company has developed machines that can autonomously load and unload trucks inside warehouses and logistic centers.


There are some jobs human bodies just weren’t meant to do. Unloading trucks and shipping containers is a repetitive, grueling task — and a big reason warehouse injury rates are more than twice the national average.

The Pickle Robot Company wants its machines to do the heavy lifting. The company’s one-armed robots autonomously unload trailers, picking up boxes weighing up to 50 pounds and placing them onto onboard conveyor belts for warehouses of all types.

The company name, an homage to the Apple Computer Company, hints at the ambitions of founders AJ Meyer ’09, Ariana Eisenstein ’15, SM ’16, and Dan Paluska ’97, SM ’00. The founders want to make the company the technology leader for supply chain automation.

The company’s unloading robots combine generative AI and machine-learning algorithms with sensors, cameras, and machine-vision software to navigate new environments on day one and improve performance over time. Much of the company’s hardware is adapted from industrial partners. You may recognize the arm, for instance, from car manufacturing lines — though you may not have seen it in bright pickle-green.

The company is already working with customers like UPS, Ryobi Tools, and Yusen Logistics to take a load off warehouse workers, freeing them to solve other supply chain bottlenecks in the process.

“Humans are really good edge-case problem solvers, and robots are not,” Paluska says. “How can the robot, which is really good at the brute force, repetitive tasks, interact with humans to solve more problems? Human bodies and minds are so adaptable, the way we sense and respond to the environment is so adaptable, and robots aren’t going to replace that anytime soon. But there’s so much drudgery we can get rid of.”

Finding problems for robots

Meyer and Eisenstein majored in computer science and electrical engineering at MIT, but they didn’t work together until after graduation, when Meyer started the technology consultancy Leaf Labs, which specializes in building embedded computer systems for things like robots, cars, and satellites.

“A bunch of friends from MIT ran that shop,” Meyer recalls, noting it’s still running today. “Ari worked there, Dan consulted there, and we worked on some big projects. We were the primary software and digital design team behind Project Ara, a smartphone for Google, and we worked on a bunch of interesting government projects. It was really a lifestyle company for MIT kids. But 10 years go by, and we thought, ‘We didn’t get into this to do consulting. We got into this to do robots.’”

When Meyer graduated in 2009, problems like robot dexterity seemed insurmountable. By 2018, the rise of algorithmic approaches like neural networks had brought huge advances to robotic manipulation and navigation.

To figure out what problem to solve with robots, the founders talked to people in industries as diverse as agriculture, food prep, and hospitality. At some point, they started visiting logistics warehouses, bringing a stopwatch to see how long it took workers to complete different tasks.

“In 2018, we went to a UPS warehouse and watched 15 guys unloading trucks during a winter night shift,” Meyer recalls. “We spoke to everyone, and not a single person had worked there for more than 90 days. We asked, ‘Why not?’ They laughed at us. They said, ‘Have you tried to do this job before?’”

It turns out warehouse turnover is one of the industry’s biggest problems, limiting productivity as managers constantly grapple with hiring, onboarding, and training.

The founders raised a seed funding round and built robots that could sort boxes because it was an easier problem that allowed them to work with technology like grippers and barcode scanners. Their robots eventually worked, but the company wasn’t growing fast enough to be profitable. Worse yet, the founders were having trouble raising money.

“We were desperately low on funds,” Meyer recalls. “So we thought, ‘Why spend our last dollar on a warm-up task?’”

With money dwindling, the founders built a proof-of-concept robot that could unload trucks reliably for about 20 seconds at a time and posted a video of it on YouTube. Hundreds of potential customers reached out. The interest was enough to get investors back on board to keep the company alive.

The company piloted its first unloading system for a year with a customer in the desert of California, sparing human workers from unloading shipping containers that can reach temperatures up to 130 degrees in the summer. It has since scaled deployments with multiple customers and gained traction among third-party logistics centers across the U.S.

The company’s robotic arm is made by the German industrial robotics giant KUKA. The robots are mounted on a custom mobile base with an onboard computing system so they can navigate to docks and adjust their positions inside trailers autonomously while lifting. The end of each arm features a suction gripper that clings to packages and moves them to the onboard conveyor belt.

The company’s robots can pick up boxes ranging in size from 5-inch cubes to 24-by-30-inch boxes. The robots can unload anywhere from 400 to 1,500 cases per hour, depending on size and weight. The company fine-tunes pre-trained generative AI models and uses a number of smaller models to ensure the robot runs smoothly in every setting.

The company is also developing a software platform it can integrate with third-party hardware, from humanoid robots to autonomous forklifts.

“Our immediate product roadmap is load and unload,” Meyer says. “But we’re also hoping to connect these third-party platforms. Other companies are also trying to connect robots. What does it mean for the robot unloading a truck to talk to the robot palletizing, or for the forklift to talk to the inventory drone? Can they do the job faster? I think there’s a big network coming in which we need to orchestrate the robots and the automation across the entire supply chain, from the mines to the factories to your front door.”

“Why not us?”

The Pickle Robot Company employs about 130 people in its office in Charlestown, Massachusetts, where a standard — if green — office gives way to a warehouse where its robots can be seen loading boxes onto conveyor belts alongside human workers and manufacturing lines.

This summer, Pickle will be ramping up production of a new version of its system, with further plans to begin designing a two-armed robot sometime after that.

“My supervisor at Leaf Labs once told me ‘No one knows what they’re doing, so why not us?’” Eisenstein says. “I carry that with me all the time. I’ve been very lucky to be able to work with so many talented, experienced people in my career. They all bring their own skill sets and understanding. That’s a massive opportunity — and it’s the only way something as hard as what we’re doing is going to work.”

Moving forward, the company sees many other robot-shaped problems for its machines.

“We didn’t start out by saying, ‘Let’s load and unload a truck,’” Meyer says. “We said, ‘What does it take to make a great robot business?’ Unloading trucks is the first chapter. Now we’ve built a platform to make the next robot that helps with more jobs, starting in logistics but then ultimately in manufacturing, retail, and hopefully the entire supply chain.”


Alternate proteins from the same gene contribute differently to health and rare disease

New findings may help researchers identify genetic mutations that contribute to rare diseases, by studying when and how single genes produce multiple versions of proteins.


Around 25 million Americans have rare genetic diseases, and many of them struggle with not only a lack of effective treatments, but also a lack of good information about their disease. Clinicians may not know what causes a patient’s symptoms, know how their disease will progress, or even have a clear diagnosis. Researchers have looked to the human genome for answers, and many disease-causing genetic mutations have been identified, but as many as 70 percent of patients still lack a clear genetic explanation.

In a paper published in Molecular Cell on Nov. 7, Whitehead Institute for Biomedical Research member Iain Cheeseman, graduate student Jimmy Ly, and colleagues propose that researchers and clinicians may be able to get more information from patients’ genomes by looking at them in a different way.

The common wisdom is that each gene codes for one protein. Someone studying whether a patient has a mutation or version of a gene that contributes to their disease will therefore look for mutations that affect the “known” protein product of that gene. However, Cheeseman and others are finding that the majority of genes code for more than one protein. That means that a mutation that might seem insignificant because it does not appear to affect the known protein could nonetheless alter a different protein made by the same gene. Now, Cheeseman and Ly have shown that mutations affecting one or multiple proteins from the same gene can contribute differently to disease.

In their paper, the researchers first share what they have learned about how cells make use of the ability to generate different versions of proteins from the same gene. Then, they examine how mutations that affect these proteins contribute to disease. Through a collaboration with co-author Mark Fleming, the pathologist-in-chief at Boston Children’s Hospital, they provide two case studies of patients with atypical presentations of a rare anemia linked to mutations that selectively affect only one of two proteins produced by the gene implicated in the disease.

“We hope this work demonstrates the importance of considering whether a gene of interest makes multiple versions of a protein, and what the role of each version is in health and disease,” Ly says. “This information could lead to better understanding of the biology of disease, better diagnostics, and perhaps one day to tailored therapies to treat these diseases.”

Cells have several ways to make different versions of a protein, but the variation that Cheeseman and Ly study happens during protein production from genetic code. Cellular machines build each protein according to the instructions within a genetic sequence that begins at a “start codon” and ends at a “stop codon.” However, some genetic sequences contain more than one start codon, many of them hiding in plain sight. If the cellular machinery skips the first start codon and detects a second one, it may build a shorter version of the protein. In other cases, the machinery may detect a section that closely resembles a start codon at a point earlier in the sequence than its typical starting place, and build a longer version of the protein.
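The scanning process described above can be sketched in code. Below is a toy illustration (not from the paper, and a deliberate simplification of real ribosome scanning): given a coding sequence, it finds the canonical ATG start codon plus any downstream in-frame ATGs, each of which yields a shorter protein variant. The demo sequence, the abbreviated codon table, and the function names are all hypothetical.

```python
# Toy model of alternate start-codon usage: a single sequence yields
# multiple protein variants depending on which in-frame ATG is used.
# Codon table is a small subset of the standard genetic code.
CODON_TABLE = {
    "ATG": "M", "GCT": "A", "GAA": "E", "TCT": "S",
    "AAA": "K", "TGA": "*", "TAA": "*", "TAG": "*",
}

def translate(seq, start):
    """Translate codon-by-codon from `start` until the first stop codon."""
    protein = []
    for i in range(start, len(seq) - 2, 3):
        aa = CODON_TABLE.get(seq[i:i + 3], "X")  # "X" = codon not in toy table
        if aa == "*":  # stop codon ends the protein
            break
        protein.append(aa)
    return "".join(protein)

def start_codon_variants(seq):
    """Return the proteins built from each in-frame ATG (first = canonical)."""
    first = seq.find("ATG")  # the canonical start fixes the reading frame
    variants = []
    for i in range(first, len(seq) - 2, 3):
        if seq[i:i + 3] == "ATG":  # a downstream in-frame ATG => shorter form
            variants.append(translate(seq, i))
    return variants

demo = "GGATGGCTGAAATGTCTAAATGA"  # canonical ATG plus one internal in-frame ATG
print(start_codon_variants(demo))  # → ['MAEMSK', 'MSK']
```

Real initiation is far messier (near-cognate upstream starts, Kozak context, leaky scanning), but the sketch captures the core point: one sequence, two start sites, two distinct protein products.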

These events may sound like mistakes: the cell’s machinery accidentally creating the wrong version of the correct protein. To the contrary, protein production from these alternate starting places is an important feature of cell biology that exists across species. When Ly traced when certain genes evolved to produce multiple proteins, he found that this is a common, robust process that has been preserved throughout evolutionary history for millions of years.

Ly shows that one function this serves is to send versions of a protein to different parts of the cell. Many proteins contain ZIP code-like sequences that tell the cell’s machinery where to deliver them so the proteins can do their jobs. Ly found many examples in which longer and shorter versions of the same protein contained different ZIP codes and ended up in different places within the cell.

In particular, Ly found many cases in which one version of a protein ended up in mitochondria, structures that provide energy to cells, while another version ended up elsewhere. Because of the mitochondria’s role in the essential process of energy production, mutations to mitochondrial genes are often implicated in disease.

Ly wondered what would happen when a disease-causing mutation eliminates one version of a protein but leaves the other intact, causing the protein to only reach one of its two intended destinations. He looked through a database containing genetic information from people with rare diseases to see if such cases existed, and found that they did. In fact, there may be tens of thousands of such cases. However, without access to the people, Ly had no way of knowing what the consequences of this were in terms of symptoms and severity of disease.

Meanwhile, Cheeseman, who is also a professor of biology at MIT, had begun working with Boston Children’s Hospital to foster collaborations between Whitehead Institute and the hospital’s researchers and clinicians to accelerate the pathway from research discovery to clinical application. Through these efforts, Cheeseman and Ly met Fleming.

One group of Fleming’s patients have a type of anemia called SIFD — sideroblastic anemia with B-cell immunodeficiency, periodic fevers, and developmental delay — that is caused by mutations to the TRNT1 gene. TRNT1 is one of the genes Ly had identified as producing a mitochondrial version of its protein and another version that ends up elsewhere: in the nucleus.

Fleming shared anonymized patient data with Ly, and Ly found two cases of interest in the genetic data. Most of the patients had mutations that impaired both versions of the protein, but one patient had a mutation that eliminated only the mitochondrial version of the protein, while another patient had a mutation that eliminated only the nuclear version.

When Ly shared his results, Fleming revealed that both of those patients had very atypical presentations of SIFD, supporting Ly’s hypothesis that mutations affecting different versions of a protein would have different consequences. The patient who only had the mitochondrial version was anemic, but developmentally normal. The patient missing the mitochondrial version of the protein did not have developmental delays or chronic anemia, but did have other immune symptoms, and was not correctly diagnosed until his 50s. There are likely other factors contributing to each patient’s exact presentation of the disease, but Ly’s work begins to unravel the mystery of their atypical symptoms.

Cheeseman and Ly want to make more clinicians aware of the prevalence of genes coding for more than one protein, so they know to check for mutations affecting any of the protein versions that could contribute to disease. For example, several TRNT1 mutations that only eliminate the shorter version of the protein are not flagged as disease-causing by current assessment tools. Cheeseman lab researchers, including Ly and graduate student Matteo Di Bernardo, are now developing a new assessment tool for clinicians, called SwissIsoform, that will identify relevant mutations that affect specific protein versions, including mutations that would otherwise be missed.

“Jimmy and Iain’s work will globally support genetic disease variant interpretation and help with connecting genetic differences to variation in disease symptoms,” Fleming says. “In fact, we have recently identified two other patients with mutations affecting only the mitochondrial versions of two other proteins, who similarly have milder symptoms than patients with mutations that affect both versions.”

Long term, the researchers hope that their discoveries could aid in understanding the molecular basis of disease and in developing new gene therapies: Once researchers understand what has gone wrong within a cell to cause disease, they are better equipped to devise a solution. More immediately, the researchers hope that their work will make a difference by providing better information to clinicians and people with rare diseases.

“As a basic researcher who doesn’t typically interact with patients, there’s something very satisfying about knowing that the work you are doing is helping specific people,” Cheeseman says. “As my lab transitions to this new focus, I’ve heard many stories from people trying to navigate a rare disease and just get answers, and that has been really motivating to us, as we work to provide new insights into the disease biology.”


MIT School of Engineering faculty and staff receive awards in summer 2025

Faculty members and researchers were honored in recognition of their scholarship, service, and overall excellence.


Each year, faculty and researchers across the MIT School of Engineering are recognized with prestigious awards for their contributions to research, technology, society, and education. To celebrate these achievements, the school periodically highlights select honors received by members of its departments, institutes, labs, and centers. The following individuals were recognized in summer 2025:

Iwnetim Abate, the Chipman Career Development Professor and assistant professor in the Department of Materials Science and Engineering, was honored as one of MIT Technology Review’s 2025 Innovators Under 35. He was recognized for his research on sodium-ion batteries and ammonia production.

Daniel G. Anderson, the Joseph R. Mares (1924) Professor in the Department of Chemical Engineering and the Institute of Medical Engineering and Science (IMES), received the 2025 AIChE James E. Bailey Award. The award honors outstanding contributions in biological engineering and commemorates the pioneering work of James Bailey.

Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in the Department of Electrical Engineering and Computer Science (EECS), was named to Time’s AI100 2025 list, recognizing her groundbreaking work in AI and health.

Richard D. Braatz, the Edwin R. Gilliland Professor in the Department of Chemical Engineering, received the 2025 AIChE CAST Distinguished Service Award. The award recognizes exceptional service and leadership within the Computing and Systems Technology Division of AIChE.

Rodney Brooks, the Panasonic Professor of Robotics, Emeritus in the Department of Electrical Engineering and Computer Science, was elected to the National Academy of Sciences, one of the highest honors in scientific research.

Arup K. Chakraborty, the John M. Deutch (1961) Institute Professor in the Department of Chemical Engineering and IMES, received the 2025 AIChE Alpha Chi Sigma Award. This award honors outstanding accomplishments in chemical engineering research over the past decade.

Connor W. Coley, the Class of 1957 Career Development Professor and associate professor in the departments of Chemical Engineering and EECS, received the 2025 AIChE CoMSEF Young Investigator Award for Modeling and Simulation. The award recognizes outstanding research in computational molecular science and engineering. Coley was also one of 74 highly accomplished, early-career engineers selected to participate in the Grainger Foundation Frontiers of Engineering Symposium, a signature activity of the National Academy of Engineering.

Henry Corrigan-Gibbs, the Douglas Ross (1954) Career Development Professor of Software Technology and associate professor in the Department of EECS, received the Google ML and Systems Junior Faculty Award, presented to assistant professors who are leading the analysis, design, and implementation of efficient, scalable, secure, and trustworthy computing systems.

Christina Delimitrou, the KDD Career Development Professor in Communications and Technology and associate professor in the Department of EECS, received the Google ML and Systems Junior Faculty Award, presented to assistant professors who are leading the analysis, design, and implementation of efficient, scalable, secure, and trustworthy computing systems.

Priya Donti, the Silverman (1968) Family Career Development Professor and assistant professor in the Department of EECS, was named to Time’s AI100 2025 list, which honors innovators reshaping the world through artificial intelligence.

Joel Emer, a professor of the practice in the Department of EECS, received the Alan D. Berenbaum Distinguished Service Award from ACM SIGARCH. He was honored for decades of mentoring and leadership in the computer architecture community.

Roger Greenwood Mark, the Distinguished Professor of Health Sciences and Technology, Emeritus in IMES, received the IEEE Biomedical Engineering Award for leadership in ECG signal processing and global dissemination of curated biomedical and clinical databases, thereby accelerating biomedical research worldwide.

Ali Jadbabaie, the JR East Professor and head of the Department of Civil and Environmental Engineering, received the 2025 Multidisciplinary University Research Initiative (MURI) award for research projects in areas of critical importance to national defense.

Yoon Kim, associate professor in the Department of EECS, received the Google ML and Systems Junior Faculty Award, presented to assistant professors who are leading the analysis, design, and implementation of efficient, scalable, secure, and trustworthy computing systems.

Mathias Kolle, an associate professor in the Department of Mechanical Engineering, received the 2025 Multidisciplinary University Research Initiative (MURI) award for research projects in areas of critical importance to national defense.

Muriel Médard, the NEC Professor of Software Science and Engineering in the Department of EECS, was elected an International Fellow of the United Kingdom's Royal Academy of Engineering. The honor recognizes exceptional contributions to engineering and technology across sectors.

Pablo Parrilo, the Joseph F. and Nancy P. Keithley Professor in Electrical Engineering in the Department of EECS, received the 2025 INFORMS Computing Society Prize. The award honors outstanding contributions at the interface of computing and operations research. Parrilo was recognized for pioneering work on accelerating gradient descent through stepsize hedging, introducing concepts such as Silver Stepsizes and recursive gluing.

Nidhi Seethapathi, the Frederick A. (1971) and Carole J. Middleton Career Development Professor of Neuroscience and assistant professor in the Department of EECS, was named to MIT Technology Review’s “2025 Innovators Under 35” list. The honor celebrates early-career scientists and entrepreneurs driving real-world impact.

Justin Solomon, an associate professor in the Department of EECS, was named a 2025 Schmidt Science Polymath. The award supports novel, early-stage research across disciplines, including acoustics and climate simulation.

Martin Staadecker, a research assistant in the Sustainable Supply Chain Lab, received the MIT-GE Vernova Energy and Climate Alliance Technology and Policy Program Project Award. The award recognizes his work on Scope 3 emissions and sustainable supply chain practices.

Antonio Torralba, the Delta Electronics Professor and faculty head of AI+D in the Department of EECS, received the 2025 Multidisciplinary University Research Initiative (MURI) award for research projects in areas of critical importance to national defense.

Ryan Williams, a professor in the Department of EECS, received the Best Paper Award at STOC 2025 for his paper “Simulating Time With Square-Root Space,” recognized for its technical merit and originality. Williams was also selected as a Member of the Institute for Advanced Study for the 2025–26 academic year. The prestigious fellowship recognizes the significance of its members’ scholarship and offers an opportunity to advance their research and exchange ideas with scholars from around the world.

Gioele Zardini, the Rudge (1948) and Nancy Allen Career Development Professor in the Department of Civil and Environmental Engineering, received the 2025 DARPA Young Faculty Award. The award supports rising stars among early-career faculty, helping them develop research ideas aligned with national security needs.


Revisiting a revolution through poetry

In “American Independence in Verse,” MIT philosopher Brad Skow uses poems to explore the American Revolution from multiple perspectives.


There are several narratives surrounding the American Revolution, a well-traveled and -documented series of events leading to the drafting and signing of the Declaration of Independence and the war that followed. 

MIT philosopher Brad Skow is taking a new approach to telling this story: a collection of 47 poems about the former American colonies’ journey from England’s imposition of the Stamp Act in 1765 to the war for America’s independence that began in 1775.

When asked why he chose poetry to retell the story, Skow, the Laurence S. Rockefeller Professor in the Department of Linguistics and Philosophy, said he “wanted to take just the great bits of these speeches and writings, while maintaining their intent and integrity.” Poetry, Skow argues, allows for that kind of nuance and specificity.

“American Independence in Verse,” published by Pentameter Press, traces a story of America’s origins through a collection of vignettes featuring some well-known characters, like politician and orator Patrick Henry, alongside some lesser-known but no less important ones, like royalist and former chief justice of North Carolina Martin Howard. Each is rendered in blank verse, a nursery-style rhyme, or free verse.

The book is divided into three segments: “Taxation Without Representation,” “Occupation and Massacre,” and “War and Independence.” Themes like freedom, government, and authority, rendered in a style of writing and oratory seldom seen today, lent themselves to being reimagined as poems. “The options available with poetic license offer opportunities for readers that might prove more difficult with prose,” Skow reports.

Skow based each of the poems on actual speeches, letters, pamphlets, and other printed materials produced by people on both sides of the debate about independence. “While reviewing a variety of primary sources for the book, I began to see the poetry in them,” he says. 

In the poem “Everywhere, the spirit of equality prevails,” during an “Interlude” between the “Occupation and Massacre” and “War and Independence” sections of the book, British commissioner of customs Henry Hulton, writing to Robert Nicholson in Liverpool, England, describes the America he experienced during a trip with his wife:

The spirit of equality prevails.
Regarding social differences, they’ve no
Notion of rank, and will show more respect
To one another than to those above them.
They’ll ask a thousand strange impertinent
Questions, sit down when they should wait at a table,
React with puzzlement when you do not
Invite your valet to come share your meal.

Here, Skow, using Hulton’s words, illustrates the tension between agreed-upon social conventions — remnants of the Old World — and the society being built in the New World, part of the disconnect that led both sides toward war. “These writings are really powerful, and poetry offers a way to convey that power,” Skow says.

The journey to the printed page 

Skow’s interest in exploring the American Revolution came, in part, from watching the Tony Award-winning play “Hamilton.” The book ends where the play begins. “It led me to want to learn more,” he says of the play and his experience watching it. “Its focus on the Revolution made the era more exciting for me.”

While conducting research for another poetry project, Skow read a 1766 interview of American diplomat, inventor, and publisher Benjamin Franklin conducted in the House of Commons. “There were lots of amazing poetic moments in the interview,” he says. Skow began reading additional pamphlets, letters, and other writings, disconnecting his work as a philosopher from the research that would yield the book.

“I wanted to remove my philosopher hat with this project,” he says. “Poetry can encourage ambiguity and, unlike philosophy, can focus on emotional and non-rational connections between ideas.” 

Although eager to approach the work as a poet and author, rather than a philosopher, Skow discovered that more primary sources than he expected were themselves often philosophical treatises. “Early in the resistance movement there were sophisticated arguments, often printed in newspapers, that it was unjust to tax the colonies without granting them representation in Parliament,” he notes. 

A series of new perspectives and lessons

Skow made some discoveries that further enhanced his passion for the project. “Samuel Adams is an important figure who isn’t as well-known as he should be,” he says. “I wanted to raise his profile.”

Skow also notes that American separatists used strong-arm tactics to “encourage” support for independence, and that prevailing narratives regarding America and its eventual separation from England are more complex and layered than we might believe. “There were arguments underway about legitimate forms of government and which kind of government was right,” he says, “and many Americans wanted to retain the existing relationship with England.”

Skow says the American Revolution is a useful benchmark when considering subsequent political movements, a notion he hopes readers will take away from the book. “The book is meant to be fun and not just a collection of dry, abstract ideas,” he says. 

“There’s a simple version of the independence story we tell when we’re in a hurry; and there is the more complex truth, printed in long history books,” he continues. “I wanted to write something that was both short and included a variety of perspectives.”

Skow believes the book and its subjects are a testament to ideas he’d like to see return to political and practical discourse. “The ideals around which this country rallied for its independence are still good ideals, and the courage the participants exhibited is still worth admiring,” he says.


What’s the best way to expand the US electricity grid?

A study by MIT researchers illuminates choices about reliability, cost, and emissions.


Growing energy demand means the U.S. will almost certainly have to expand its electricity grid in coming years. What’s the best way to do this? A new study by MIT researchers examines legislation introduced in Congress and identifies relative tradeoffs involving reliability, cost, and emissions, depending on the proposed approach.

The researchers evaluated two policy approaches to expanding the U.S. electricity grid: One would concentrate on regions with more renewable energy sources, and the other would create more interconnections across the country. For instance, some of the best untapped wind-power resources in the U.S. lie in the center of the country, so one type of grid expansion would situate relatively more grid infrastructure in those regions. Alternatively, the other scenario involves building more infrastructure everywhere in roughly equal measure, which the researchers call the “prescriptive” approach. How does each pencil out?

After extensive modeling, the researchers found that a grid expansion could make improvements on all fronts, with each approach offering different advantages. A more geographically unbalanced grid buildout would be 1.13 percent less expensive, and would reduce carbon emissions by 3.65 percent compared to the prescriptive approach. And yet, the prescriptive approach, with more national interconnection, would significantly reduce power outages due to extreme weather, among other things.

“There’s a tradeoff between the two things that are most on policymakers’ minds: cost and reliability,” says Christopher Knittel, an economist at the MIT Sloan School of Management, who helped direct the research. “This study makes it more clear that the more prescriptive approach ends up being better in the face of extreme weather and outages.”

The paper, “Implications of Policy-Driven Transmission Expansion on Costs, Emissions and Reliability in the United States,” is published today in Nature Energy.

The authors are Juan Ramon L. Senga, a postdoc in the MIT Center for Energy and Environmental Policy Research; Audun Botterud, a principal research scientist in the MIT Laboratory for Information and Decision Systems; John E. Parsons, the deputy director for research at MIT’s Center for Energy and Environmental Policy Research; Drew Story, the managing director at MIT’s Policy Lab; and Knittel, who is the George P. Shultz Professor at MIT Sloan, and associate dean for climate and sustainability at MIT.

The new study is a product of the MIT Climate Policy Center, housed within MIT Sloan and committed to bipartisan research on energy issues. The center is also part of the Climate Project at MIT, founded in 2024 as a high-level Institute effort to develop practical climate solutions.

In this case, the project was developed from work the researchers did with federal lawmakers who have introduced legislation aimed at bolstering and expanding the U.S. electric grid. One of these bills, the BIG WIRES Act, co-sponsored by Sen. John Hickenlooper of Colorado and Rep. Scott Peters of California, would require each transmission region in the U.S. to be able to send at least 30 percent of its peak load to other regions by 2035.

That would represent a substantial change for a national transmission scenario where grids have largely been developed regionally, without an enormous amount of national oversight.

“The U.S. grid is aging and it needs an upgrade,” Senga says. “Implementing these kinds of policies is an important step for us to get to that future where we improve the grid, lower costs, lower emissions, and improve reliability. Some progress is better than none, and in this case, it would be important.”

To conduct the study, the researchers looked at how policies like the BIG WIRES Act would affect energy distribution. The scholars used a model of energy generation developed at the MIT Energy Initiative — the model is called “GenX” — and examined the changes proposed by the legislation.

With a 30 percent level of interregional connectivity, the study estimates, the number of outages due to extreme cold would drop by 39 percent, a substantial gain in reliability. That would help avoid scenarios such as the one Texas experienced in 2021, when winter storms damaged distribution capacity.

“Reliability is what we find to be most salient to policymakers,” Senga says.

On the other hand, as the paper details, a future grid that is “optimized” with more transmission capacity near geographic spots of new energy generation would be less expensive.

“On the cost side, this kind of optimized system looks better,” Senga says.

A more geographically imbalanced grid would also have a greater impact on reducing emissions. Globally, the levelized cost of wind and solar dropped by 89 percent and 69 percent, respectively, from 2010 to 2022, meaning that incorporating less-expensive renewables into the grid would help with both cost and emissions.

“On the emissions side, a priori it’s not clear the optimized system would do better, but it does,” Knittel says. “That’s probably tied to cost, in the sense that it’s building more transmission links to where the good, cheap renewable resources are, because they’re cheap. Emissions fall when you let the optimizing action take place.”

To be sure, these two differing approaches to grid expansion are not the only paths forward. The study also examines a hybrid approach, which layers local buildouts around new power sources on top of national interconnectivity requirements. Still, the model does show that there may be some tradeoffs lawmakers will want to consider when developing and weighing future grid legislation.

“You can find a balance between these factors, where you’re still going to have an increase in reliability while also getting the cost and emission reductions,” Senga observes.

For his part, Knittel emphasizes that working with legislation as the basis for academic studies, while not generally common, can be productive for everyone involved. Scholars get to apply their research tools and models to real-world scenarios, and policymakers get a sophisticated evaluation of how their proposals would work.

“Compared to the typical academic path to publication, this is different, but at the Climate Policy Center, we’re already doing this kind of research,” Knittel says. 


A smarter way for large language models to think about hard problems

This new technique enables LLMs to dynamically adjust the amount of computation they use for reasoning, based on the difficulty of the question.


To make large language models (LLMs) more accurate when answering harder questions, researchers can let the model spend more time thinking about potential solutions.

But common approaches that give LLMs this capability set a fixed computational budget for every problem, regardless of how complex it is. This means the LLM might waste computational resources on simpler questions or be unable to tackle intricate problems that require more reasoning.

To address this, MIT researchers developed a smarter way to allocate computational effort as the LLM solves a problem. Their method enables the model to dynamically adjust its computational budget based on the difficulty of the question and the likelihood that each partial solution will lead to the correct answer.

The researchers found that their new approach enabled LLMs to use as little as half the computation of existing methods, while achieving comparable accuracy on a range of questions with varying difficulty. In addition, their method allows smaller, less resource-intensive LLMs to perform as well as or even better than larger models on complex problems.

By improving the reliability and efficiency of LLMs, especially when they tackle complex reasoning tasks, this technique could reduce the energy consumption of generative AI systems and enable the use of LLMs in more high-stakes and time-sensitive applications.

“The computational cost of inference has quickly become a major bottleneck for frontier model providers, and they are actively trying to find ways to improve computational efficiency per user query. For instance, the recent GPT-5.1 release highlights the efficacy of the ‘adaptive reasoning’ approach our paper proposes. By endowing the models with the ability to know what they don’t know, we can enable them to spend more compute on the hardest problems and most promising solution paths, and use far fewer tokens on easy ones. That makes reasoning both more reliable and far more efficient,” says Navid Azizan, the Alfred H. and Jean M. Hayes Career Development Assistant Professor in the Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), a principal investigator of the Laboratory for Information and Decision Systems (LIDS), and the senior author of a paper on this technique.

Azizan is joined on the paper by lead author Young-Jin Park, a LIDS/MechE graduate student; Kristjan Greenewald, a research scientist in the MIT-IBM Watson AI Lab; Kaveh Alim, an IDSS graduate student; and Hao Wang, a research scientist at the MIT-IBM Watson AI Lab and the Red Hat AI Innovation Team. The research is being presented this week at the Conference on Neural Information Processing Systems.

Computation for contemplation

A recent approach called inference-time scaling lets a large language model take more time to reason about difficult problems.

Using inference-time scaling, the LLM might generate multiple solution attempts at once or explore different reasoning paths, then choose the best ones to pursue from those candidates.

A separate model, known as a process reward model (PRM), scores each potential solution or reasoning path. The LLM uses these scores to identify the most promising ones.     

Typical inference-time scaling approaches assign a fixed amount of computation for the LLM to break the problem down and reason about the steps.

Instead, the researchers’ method, known as instance-adaptive scaling, dynamically adjusts the number of potential solutions or reasoning steps based on how likely they are to succeed, as the model wrestles with the problem.

“This is how humans solve problems. We come up with some partial solutions and then decide, should I go further with any of these, or stop and revise, or even go back to my previous step and continue solving the problem from there?” Wang explains.

To do this, the framework uses the PRM to estimate the difficulty of the question, helping the LLM assess how much computational budget to utilize for generating and reasoning about potential solutions.

At every step in the model’s reasoning process, the PRM looks at the question and partial answers and evaluates how promising each one is for getting to the right solution. If the LLM is more confident, it can reduce the number of potential solutions or reasoning trajectories to pursue, saving computational resources.
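The adaptive pruning described above can be sketched in a few lines of Python. This is a simplified illustration with made-up thresholds and a toy interpolation rule, not the researchers’ implementation; the function names and parameters are assumptions for the sake of the example.

```python
def prune_candidates(candidates, prm_scores, budget_max=8, budget_min=1,
                     confidence_threshold=0.9):
    """Keep fewer reasoning paths when the PRM is confident one will succeed.

    `candidates` are partial solutions; `prm_scores` are the (calibrated)
    success probabilities a PRM assigns to each. When the best score is
    high, the search width shrinks and compute is saved; when all scores
    are low, the full budget is kept so more paths are explored.
    """
    ranked = sorted(zip(candidates, prm_scores), key=lambda p: p[1], reverse=True)
    best_score = ranked[0][1]
    if best_score >= confidence_threshold:
        width = budget_min          # confident: pursue only the top path
    else:
        # lower confidence -> wider search, up to the full budget
        width = max(budget_min, round(budget_max * (1.0 - best_score)))
    return [c for c, _ in ranked[:width]]

# Example: the PRM strongly favors one partial solution, so the beam collapses.
paths = ["step A", "step B", "step C", "step D"]
scores = [0.95, 0.40, 0.30, 0.10]
print(prune_candidates(paths, scores))  # -> ['step A']
```

The same call with uniformly low scores would keep all four paths, spending the full budget on a problem the model finds hard.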

But the researchers found that existing PRMs often overestimate the model’s probability of success.

Overcoming overconfidence

“If we were to just trust current PRMs, which often overestimate the chance of success, our system would reduce the computational budget too aggressively. So we first had to find a way to better calibrate PRMs to make inference-time scaling more efficient and reliable,” Park says.

The researchers introduced a calibration method that enables PRMs to generate a range of probability scores rather than a single value. In this way, the PRM creates more reliable uncertainty estimates that better reflect the true probability of success.

With a well-calibrated PRM, their instance-adaptive scaling framework can use the probability scores to effectively reduce computation while maintaining the accuracy of the model’s outputs.
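One simple way to turn raw PRM scores into a probability range is histogram binning on held-out data, with a standard-error band that widens when a bin has few samples. The binning scheme and interval below are illustrative assumptions, standing in for the calibration method the paper specifies:

```python
from collections import defaultdict
import math

def calibrate_prm(raw_scores, outcomes, n_bins=10):
    """Map raw PRM scores to a (low, mid, high) probability range.

    Held-out scores are grouped into bins; each bin's empirical success
    rate becomes the calibrated probability, and a crude two-standard-error
    band widens the range for sparsely populated bins.
    """
    bins = defaultdict(list)
    for s, y in zip(raw_scores, outcomes):
        bins[min(int(s * n_bins), n_bins - 1)].append(y)

    table = {}
    for b, ys in bins.items():
        p = sum(ys) / len(ys)                          # empirical success rate
        se = math.sqrt(max(p * (1 - p), 1e-6) / len(ys))
        table[b] = (max(0.0, p - 2 * se), p, min(1.0, p + 2 * se))
    return table

# Overconfident PRM: raw scores near 0.9, but only half actually succeed.
raw = [0.91, 0.93, 0.95, 0.92]
won = [1, 0, 1, 0]
table = calibrate_prm(raw, won)
print(table[9])  # -> (0.0, 0.5, 1.0)
```

The wide range around 0.5 signals that a raw score of roughly 0.9 should not be trusted, which is exactly the overconfidence the calibration is meant to expose.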

When they compared their method to standard inference-time scaling approaches on a series of mathematical reasoning tasks, it utilized less computation to solve each problem while achieving similar accuracy.

“The beauty of our approach is that this adaptation happens on the fly, as the problem is being solved, rather than happening all at once at the beginning of the process,” says Greenewald.

In the future, the researchers are interested in applying this technique to other applications, such as code generation and AI agents. They are also planning to explore additional uses for their PRM calibration method, like for reinforcement learning and fine-tuning.

“Human employees learn on the job — some CEOs even started as interns — but today’s agents remain largely static pieces of probabilistic software. Work like this paper is an important step toward changing that: helping agents understand what they don’t know and building mechanisms for continual self-improvement. These capabilities are essential if we want agents that can operate safely, adapt to new situations, and deliver consistent results at scale,” says Akash Srivastava, director and chief architect of Core AI at IBM Software, who was not involved with this work.

This work was funded, in part, by the MIT-IBM Watson AI Lab, the MIT-Amazon Science Hub, the MIT-Google Program for Computing Innovation, and MathWorks. 


MIT engineers design an aerial microrobot that can fly as fast as a bumblebee

With insect-like speed and agility, the tiny robot could someday aid in search-and-rescue missions.


In the future, tiny flying robots could be deployed to aid in the search for survivors trapped beneath the rubble after a devastating earthquake. Like real insects, these robots could flit through tight spaces larger robots can’t reach, while simultaneously dodging stationary obstacles and pieces of falling rubble.

So far, aerial microrobots have only been able to fly slowly along smooth trajectories, far from the swift, agile flight of real insects — until now.

MIT researchers have demonstrated aerial microrobots that can fly with speed and agility comparable to their biological counterparts. A collaborative team designed a new AI-based controller for the robotic bug that enabled it to follow gymnastic flight paths, such as executing continuous body flips.

With a two-part control scheme that combines high performance with computational efficiency, the robot’s speed and acceleration increased by about 450 percent and 250 percent, respectively, compared to the researchers’ best previous demonstrations.

The speedy robot was agile enough to complete 10 consecutive somersaults in 11 seconds, even when wind disturbances threatened to push it off course.

Animation of a flying, flipping microrobot

“We want to be able to use these robots in scenarios that more traditional quadcopter robots would have trouble flying into, but that insects could navigate. Now, with our bioinspired control framework, the flight performance of our robot is comparable to insects in terms of speed, acceleration, and the pitching angle. This is quite an exciting step toward that future goal,” says Kevin Chen, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), head of the Soft and Micro Robotics Laboratory within the Research Laboratory of Electronics (RLE), and co-senior author of a paper on the robot.

Chen is joined on the paper by co-lead authors Yi-Hsuan Hsiao, an MIT graduate student in EECS; Andrea Tagliabue PhD ’24; and Owen Matteson, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro); as well as EECS graduate student Suhan Kim; Tong Zhao MEng ’23; and co-senior author Jonathan P. How, the Ford Professor of Engineering in the Department of Aeronautics and Astronautics and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research appears today in Science Advances.

An AI controller

Chen’s group has been building robotic insects for more than five years.

They recently developed a more durable version of their tiny robot, a microcassette-sized device that weighs less than a paperclip. The new version has larger flapping wings that enable more agile movements; the wings are powered by a set of squishy artificial muscles that flap them at an extremely fast rate.

But the controller — the “brain” of the robot that determines its position and tells it where to fly — was hand-tuned by a human, limiting the robot’s performance.

For the robot to fly quickly and aggressively like a real insect, it needed a more robust controller that could account for uncertainty and perform complex optimizations quickly.

Such a controller would be too computationally intensive to be deployed in real time, especially with the complicated aerodynamics of the lightweight robot.

To overcome this challenge, Chen’s group joined forces with How’s team and, together, they crafted a two-step, AI-driven control scheme that provides the robustness necessary for complex, rapid maneuvers, and the computational efficiency needed for real-time deployment.

“The hardware advances pushed the controller so there was more we could do on the software side, but at the same time, as the controller developed, there was more they could do with the hardware. As Kevin’s team demonstrates new capabilities, we demonstrate that we can utilize them,” How says.

For the first step, the team built what is known as a model-predictive controller. This type of powerful controller uses a dynamic, mathematical model to predict the behavior of the robot and plan the optimal series of actions to safely follow a trajectory.

While computationally intensive, it can plan challenging maneuvers like aerial somersaults, rapid turns, and aggressive body tilting. This high-performance planner is also designed to consider constraints on the force and torque the robot could apply, which is essential for avoiding collisions.

For instance, to perform multiple flips in a row, the robot would need to decelerate in such a way that its initial conditions are exactly right for doing the flip again.

“If small errors creep in, and you try to repeat that flip 10 times with those small errors, the robot will just crash. We need to have robust flight control,” How says.

They use this expert planner to train a “policy,” a deep-learning model that controls the robot in real time, through a process called imitation learning. The policy is the robot’s decision-making engine, which tells the robot where and how to fly.

Essentially, the imitation-learning process compresses the powerful controller into a computationally efficient AI model that can run very fast.
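The distillation step above can be sketched with a toy example. Here a fixed linear law stands in for the expensive model-predictive controller, and an ordinary least-squares fit stands in for the deep network; both are illustrative assumptions, since the paper's actual MPC solves an optimization at every step and its policy is a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

K_true = np.array([[2.0, -1.0], [0.5, 3.0]])

def expert_mpc(state):
    # Stand-in for the expensive model-predictive controller: the real MPC
    # solves a constrained optimization; a fixed linear law suffices to
    # illustrate the distillation.
    return K_true @ state

# 1) Roll out the expert offline to collect (state, action) training pairs.
states = rng.normal(size=(500, 2))
actions = np.array([expert_mpc(s) for s in states])

# 2) Distill the expert into a cheap policy by least squares. A deep
#    network plays this role in the paper; a linear fit keeps the sketch short.
K_learned, *_ = np.linalg.lstsq(states, actions, rcond=None)
K_learned = K_learned.T

# 3) The distilled policy maps a new state directly to thrust/torque-like
#    commands with one matrix multiply, fast enough for real-time control.
new_state = np.array([1.0, -0.5])
print(np.allclose(K_learned @ new_state, expert_mpc(new_state)))  # -> True
```

The payoff is the asymmetry in cost: the expensive expert runs only offline to generate data, while the distilled policy is what flies the robot.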

The key was having a smart way to create just enough training data, which would teach the policy everything it needs to know for aggressive maneuvers.

“The robust training method is the secret sauce of this technique,” How explains.

The AI-driven policy takes robot positions as inputs and outputs control commands in real time, such as thrust force and torques.

Insect-like performance

In their experiments, this two-step approach enabled the insect-scale robot to fly 447 percent faster while exhibiting a 255 percent increase in acceleration. The robot was able to complete 10 somersaults in 11 seconds, and the tiny robot never strayed more than 4 or 5 centimeters off its planned trajectory.

“This work demonstrates that soft and microrobots, traditionally limited in speed, can now leverage advanced control algorithms to achieve agility approaching that of natural insects and larger robots, opening up new opportunities for multimodal locomotion,” says Hsiao.

The researchers were also able to demonstrate saccade movement, which occurs when insects pitch very aggressively, fly rapidly to a certain position, and then pitch the other way to stop. This rapid acceleration and deceleration help insects localize themselves and see clearly.

“This bio-mimicking flight behavior could help us in the future when we start putting cameras and sensors on board the robot,” Chen says.

Adding sensors and cameras so the microrobots can fly outdoors, without being attached to a complex motion capture system, will be a major area of future work.

The researchers also want to study how onboard sensors could help the robots avoid colliding with one another or coordinate navigation.

“For the micro-robotics community, I hope this paper signals a paradigm shift by showing that we can develop a new control architecture that is high-performing and efficient at the same time,” says Chen.

“This work is especially impressive because these robots still perform precise flips and fast turns despite the large uncertainties that come from relatively large fabrication tolerances in small-scale manufacturing, wind gusts of more than 1 meter per second, and even its power tether wrapping around the robot as it performs repeated flips,” says Sarah Bergbreiter, a professor of mechanical engineering at Carnegie Mellon University, who was not involved with this work.

“Although the controller currently runs on an external computer rather than onboard the robot, the authors demonstrate that similar, but less precise, control policies may be feasible even with the more limited computation available on an insect-scale robot. This is exciting because it points toward future insect-scale robots with agility approaching that of their biological counterparts,” she adds.

This research is funded, in part, by the National Science Foundation (NSF), the Office of Naval Research, Air Force Office of Scientific Research, MathWorks, and the Zakhartchenko Fellowship.


Staying stable

Whether they walk on two, four, or six legs, animals maintain stability by monitoring their body position and correcting errors with every step.


With every step we take, our brains are already thinking about the next one. If a bump in the terrain or a minor misstep has thrown us off balance, our stride may need to be altered to prevent a fall. Our two-legged posture makes maintaining stability particularly complex, which our brains solve in part by continually monitoring our bodies and adjusting where we place our feet.

Now, scientists at MIT have determined that animals with very different bodies likely use a shared strategy to balance themselves when they walk.

Nidhi Seethapathi, the Frederick A. and Carole J. Middleton Career Development Assistant Professor in Brain and Cognitive Sciences and Electrical Engineering and Computer Science at MIT, and K. Lisa Yang ICoN Center Fellow Antoine De Comite found that humans, mice, and fruit flies all use an error-correction process to guide foot placement and maintain stability while walking. Their findings, published Oct. 21 in the journal PNAS, could inform future studies exploring how the brain achieves stability during locomotion — bridging the gap between animal models and human balance.

Corrective action

The brain must integrate a steady stream of information to keep us upright when we walk or run. Our steps must be continually adjusted according to the terrain, our desired speed, and our body’s current velocity and position in space.

“We rely on a combination of vestibular, proprioceptive, and visual information to build an estimate of our body’s state, determining if we are about to fall. Once we know the body’s state, we can decide which corrective actions to take,” explains Seethapathi, who is also an associate investigator at the McGovern Institute for Brain Research.

While humans are known to adjust where they place their feet to correct for errors, it is not known whether animals whose bodies are more stable do this, too.

To find out, Seethapathi and De Comite, who is a postdoc in Seethapathi’s and Guoping Feng's lab at the McGovern Institute, turned to locomotion data from mice, fruit flies, and humans shared by other labs, enabling an analysis across species that is otherwise challenging. Importantly, Seethapathi notes, all the animals they studied were walking in everyday natural environments, such as around a room — not on a treadmill or over unusual terrain.

Even in these ordinary circumstances, missteps and minor imbalances are common, and the team’s analysis showed that these errors predicted where all of the animals placed their feet in subsequent steps, regardless of whether they had two, four, or six legs.

One foot in front of another

By tracking the animals’ bodies and the step-by-step placement of their feet, Seethapathi and De Comite were able to find a measure of error that informs each animal’s next step. “By taking this comparative approach, we’ve forced ourselves to come up with a definition of error that generalizes across species,” Seethapathi says. “An animal moves with an expected body state for a particular speed. If it deviates from that ideal state, that deviation — at any given moment — is the error.”

“It was surprising to find similarities across these three species, which, at first sight, look very different,” says De Comite. “The methods themselves are surprising because we now have a pipeline to analyze foot placement and locomotion stability in any legged species, which could lead to similar analyses in even more species in the future.”

The team’s data suggest that in all of the species in the study, placement of the feet is guided both by an error-correction process and the speed at which an animal is traveling. Steps tend to lengthen and feet spend less time on the ground as animals pick up their pace, while the width of each step seems to change largely to compensate for body-state errors.
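The kind of relationship described above can be illustrated with a small regression: if step width shifts in proportion to the body-state error, fitting width against error recovers a corrective gain. The simulated data, the gain value, and the single-error-variable setup are assumptions for illustration, not the authors’ analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate steps in which lateral body-state error drives the next step's
# width: width = baseline + gain * error + measurement noise.
n_steps = 200
body_error = rng.normal(scale=1.0, size=n_steps)   # deviation from expected state
gain = 0.8                                         # corrective gain (assumed)
step_width = 2.0 + gain * body_error + rng.normal(scale=0.05, size=n_steps)

# Least-squares fit of step width against body-state error recovers the gain,
# which is the sense in which errors "predict" subsequent foot placement.
A = np.column_stack([body_error, np.ones(n_steps)])
(est_gain, est_base), *_ = np.linalg.lstsq(A, step_width, rcond=None)
print(round(float(est_gain), 1))  # -> 0.8
```

In the study, an analogous fit is what lets the same definition of error be compared across two-, four-, and six-legged walkers.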

Now, Seethapathi says, we can look forward to future studies to explore how the dual control systems might be generated and integrated in the brain to keep moving bodies stable.

Studying how brains help animals move stably may also guide the development of more-targeted strategies to help people improve their balance and, ultimately, prevent falls.

“In elderly individuals and individuals with sensorimotor disorders, minimizing fall risk is one of the major functional targets of rehabilitation,” says Seethapathi. “A fundamental understanding of the error-correction process that helps us remain stable will provide insight into why this process falls short in populations with neural deficits.” 


New bioadhesive strategy can prevent fibrous encapsulation around device implants on peripheral nerves

Inspired by traditional acupuncture, the approach has potential to impact all implantable bioelectronic devices, enabling applications such as hypertension mitigation.


Peripheral nerves — the network connecting the brain, spinal cord, and central nervous system to the rest of the body — transmit sensory information, control muscle movements, and regulate automatic bodily functions. Bioelectronic devices implanted on these nerves offer remarkable potential for the treatment and rehabilitation of neurological and systemic diseases. However, because the body perceives these implants as foreign objects, they often trigger the formation of dense fibrotic tissue at bioelectronic device–tissue interfaces, which can significantly compromise device performance and longevity.

New research published in the journal Science Advances presents a robust bioadhesive strategy that establishes non-fibrotic bioelectronic interfaces on diverse peripheral nerves — including the occipital, vagus, deep peroneal, sciatic, tibial, and common peroneal nerves — for up to 12 weeks.

“We discovered that adhering the bioelectrodes to peripheral nerves can fully prevent the formation of fibrosis on the interfaces,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor, and professor of mechanical engineering and of civil and environmental engineering at MIT. “We further demonstrated long-term, drug-free hypertension mitigation using non-fibrotic bioelectronics over four weeks, and ongoing.”

The approach inhibits immune cell infiltration at the device-tissue interface, thereby preventing the formation of fibrous capsules within the inflammatory microenvironment. In preclinical rodent models, the team demonstrated that the non-fibrotic, adhesive bioelectronic device maintained stable, long-term regulation of blood pressure.

“Our long-term blood pressure regulation approach was inspired by traditional acupuncture,” says Hyunmin Moon, lead author of the study and a postdoc in the Department of Mechanical Engineering. “The lower leg has long been used in hypertension treatment, and the deep peroneal nerve lies precisely at an acupuncture point. We were thrilled to see that stimulating this nerve achieved blood pressure regulation for the first time. The convergence of our non-fibrotic, adhesive bioelectronic device with this long-term regulation capability holds exciting promise for translational medicine.”

Importantly, after 12 weeks of implantation with continuous nerve stimulation, only minimal macrophage activity and limited deposition of smooth muscle actin and collagen were detected, underscoring the device’s potential to deliver long-term neuromodulation without triggering fibrosis. “The contrast between the immune response of the adhered device and that of the non-adhered control is striking,” says Bastien Aymon, a study co-author and a PhD candidate in mechanical engineering. “The fact that we can observe immunologically pristine interfaces after three months of adhesive implantation is extremely encouraging for future clinical translation.”

This work offers a broadly applicable strategy for all implantable bioelectronic systems by preventing fibrosis at the device interface, paving the way for more effective and long-lasting therapies such as hypertension mitigation.

Hypertension is a major contributor to cardiovascular diseases, the leading cause of death worldwide. Although medications are effective in many cases, more than 50 percent of patients remain hypertensive despite treatment — a condition known as resistant hypertension. Traditional carotid sinus or vagus nerve stimulation methods are often accompanied by side effects including apnea, bradycardia, cough, and paresthesia.

“In contrast, our non-fibrotic, adhesive bioelectronic device targeting the deep peroneal nerve enables long-term blood pressure regulation in resistant hypertensive patients without metabolic side effects,” says Moon.


Noninvasive imaging could replace finger pricks for people with diabetes

MIT engineers show they can accurately measure blood glucose by shining near-infrared light on the skin.


A noninvasive method for measuring blood glucose levels, developed at MIT, could save diabetes patients from having to prick their fingers several times a day.

The MIT team used Raman spectroscopy — a technique that reveals the chemical composition of tissues by shining near-infrared or visible light on them — to develop a shoebox-sized device that can measure blood glucose levels without any needles.

In tests in a healthy volunteer, the researchers found that the measurements from their device were similar to those obtained by commercial continuous glucose monitoring sensors that require a wire to be implanted under the skin. While the device presented in this study is too large to be used as a wearable sensor, the researchers have since developed a wearable version that they are now testing in a small clinical study.

“For a long time, the finger stick has been the standard method for measuring blood sugar, but nobody wants to prick their finger every day, multiple times a day. Naturally, many diabetic patients are under-testing their blood glucose levels, which can cause serious complications,” says Jeon Woong Kang, an MIT research scientist and the senior author of the study. “If we can make a noninvasive glucose monitor with high accuracy, then almost everyone with diabetes will benefit from this new technology.”

MIT postdoc Arianna Bresci is the lead author of the new study, which appears today in the journal Analytical Chemistry. Other authors include Peter So, director of the MIT Laser Biomedical Research Center (LBRC) and an MIT professor of biological engineering and mechanical engineering; and Youngkyu Kim and Miyeon Jue of Apollon Inc., a biotechnology company based in South Korea.

Noninvasive glucose measurement

While most diabetes patients measure their blood glucose levels by drawing blood and testing it with a glucometer, some use wearable monitors, which have a sensor that is inserted just under the skin. These sensors provide continuous glucose measurements from the interstitial fluid, but they can cause skin irritation and they need to be replaced every 10 to 15 days.

In hopes of creating wearable glucose monitors that would be more comfortable for patients, researchers in MIT’s LBRC have been pursuing noninvasive sensors based on Raman spectroscopy. This type of spectroscopy reveals the chemical composition of tissue or cells by analyzing how near-infrared light is scattered, or deflected, as it encounters different kinds of molecules.

In 2010, researchers at the LBRC showed that they could indirectly calculate glucose levels based on a comparison between Raman signals from the interstitial fluid that bathes skin cells and a reference measurement of blood glucose levels. While this approach produced reliable measurements, it wasn’t practical for translating to a glucose monitor.

More recently, the researchers reported a breakthrough that allowed them to directly measure glucose Raman signals from the skin. Normally, this glucose signal is too small to pick out from all of the other signals generated by molecules in tissue. The MIT team found a way to filter out much of the unwanted signal by shining near-infrared light onto the skin at a different angle from the one at which they collected the resulting Raman signal.

The researchers obtained those measurements using equipment that was around the size of a desktop printer, and since then, they have been working on further shrinking the footprint of the device.

In their new study, they were able to create a smaller device by analyzing just three bands — spectral regions that correspond to specific molecular features — in the Raman spectrum.

Typically, a Raman spectrum may contain about 1,000 bands. However, the MIT team found that they could determine blood glucose levels by measuring just three bands — one from the glucose plus two background measurements. This approach allowed the researchers to reduce the amount and cost of equipment needed, allowing them to perform the measurement with a cost-effective device about the size of a shoebox.

“By refraining from acquiring the whole spectrum, which has a lot of redundant information, we go down to three bands selected from about 1,000,” Bresci says. “With this new approach, we can change the components commonly used in Raman-based devices, and save space, time, and cost.”
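The article doesn't spell out the calibration model behind the three-band approach, but the core idea can be sketched as a least-squares fit: regress paired reference glucose readings against the intensities of one glucose band and two background bands. All numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical readings: each row is one measurement of the three selected
# Raman bands (glucose band intensity, plus two background bands held
# roughly constant in this toy data).
bands = np.array([
    [1.20, 0.80, 0.55],
    [1.45, 0.80, 0.55],
    [1.70, 0.80, 0.55],
    [1.95, 0.80, 0.55],
])
# Paired reference blood-glucose values (mg/dL) from an invasive monitor.
reference = np.array([90.0, 115.0, 140.0, 165.0])

# Fit a linear model: glucose ~ w . bands + b (least squares).
X = np.hstack([bands, np.ones((len(bands), 1))])  # append a bias column
coef, *_ = np.linalg.lstsq(X, reference, rcond=None)

def predict(reading):
    """Estimate blood glucose (mg/dL) from one three-band reading."""
    return float(np.append(reading, 1.0) @ coef)

print(round(predict([1.58, 0.80, 0.55])))  # prints 128
```

In a real device the fit would be calibrated per subject against many reference readings, but the payoff is the same as in the study: three numbers per measurement instead of a full ~1,000-band spectrum.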

Toward a wearable sensor

In a clinical study performed at the MIT Center for Clinical Translation Research (CCTR), the researchers used the new device to take readings from a healthy volunteer over a four-hour period. As the subject rested their arm on top of the device, a near-infrared beam shone through a small glass window onto the skin to perform the measurement.

Each measurement takes a little more than 30 seconds, and the researchers took a new reading every five minutes.

During the study, the subject consumed two 75-gram glucose drinks, allowing the researchers to monitor significant changes in blood glucose concentration. They found that the Raman-based device showed accuracy levels similar to those of two commercially available, invasive glucose monitors worn by the subject.
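The article doesn't name the accuracy metric used in the comparison, but a standard one for glucose monitors is the mean absolute relative difference (MARD): the average relative error between device and reference readings. The readings below are made up to show the calculation.

```python
# Mean absolute relative difference (MARD), a common accuracy metric for
# glucose monitors. All readings below are invented illustrations, not
# data from the study.
def mard(device, reference):
    """Average of |device - reference| / reference, as a percentage."""
    errors = [abs(d - r) / r for d, r in zip(device, reference)]
    return 100.0 * sum(errors) / len(errors)

device_readings    = [102, 131, 168, 149, 118]  # noninvasive device (mg/dL)
reference_readings = [100, 135, 160, 155, 120]  # implanted CGM (mg/dL)
print(f"MARD = {mard(device_readings, reference_readings):.1f}%")
```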

Since finishing that study, the researchers have developed a smaller prototype, about the size of a cellphone, that they’re currently testing at the MIT CCTR as a wearable monitor in healthy and prediabetic volunteers. Next year, they plan to run a larger study working with a local hospital, which will include people with diabetes.

The researchers are also working on making the device even smaller, about the size of a watch. Additionally, they are exploring ways to ensure that the device can obtain accurate readings from people with different skin tones.

The research was funded by the National Institutes of Health, the Korean Technology and Information Promotion Agency for SMEs, and Apollon Inc.


MIT chemists synthesize a fungal compound that holds promise for treating brain cancer

Preliminary studies find derivatives of the compound, known as verticillin A, can kill some types of glioma cells.


For the first time, MIT chemists have synthesized a fungal compound known as verticillin A, which was discovered more than 50 years ago and has shown potential as an anticancer agent.

The compound has a complex structure that made it more difficult to synthesize than related compounds, even though it differed by only a couple of atoms.

“We have a much better appreciation for how those subtle structural changes can significantly increase the synthetic challenge,” says Mohammad Movassaghi, an MIT professor of chemistry. “Now we have the technology where we can not only access them for the first time, more than 50 years after they were isolated, but also we can make many designed variants, which can enable further detailed studies.”

In tests in human cancer cells, a derivative of verticillin A showed particular promise against a type of pediatric brain cancer called diffuse midline glioma. More tests will be needed to evaluate its potential for clinical use, the researchers say.

Movassaghi and Jun Qi, an associate professor of medicine at Dana-Farber Cancer Institute/Boston Children’s Cancer and Blood Disorders Center and Harvard Medical School, are the senior authors of the study, which appears today in the Journal of the American Chemical Society. Walker Knauss PhD ’24 is the lead author of the paper. Xiuqi Wang, a medicinal chemist and chemical biologist at Dana-Farber, and Mariella Filbin, research director in the Pediatric Neurology-Oncology Program at Dana-Farber/Boston Children’s Cancer and Blood Disorders Center, are also authors of the study.

A complex synthesis

Researchers first reported the isolation of verticillin A from fungi, which use it for protection against pathogens, in 1970. Verticillin A and related fungal compounds have drawn interest for their potential anticancer and antimicrobial activity, but their complexity has made them difficult to synthesize.

In 2009, Movassaghi’s lab reported the synthesis of (+)-11,11'-dideoxyverticillin A, a fungal compound similar to verticillin A. That molecule has 10 rings and eight stereogenic centers, or carbon atoms that have four different chemical groups attached to them. These groups have to be attached in a way that ensures they have the correct orientation, or stereochemistry, with respect to the rest of the molecule.

Even with that synthesis in hand, however, verticillin A remained out of reach, despite the fact that the only difference between verticillin A and (+)-11,11'-dideoxyverticillin A is the presence of two oxygen atoms.

“Those two oxygens greatly limit the window of opportunity that you have in terms of doing chemical transformations,” Movassaghi says. “It makes the compound so much more fragile, so much more sensitive, so that even though we had had years of methodological advances, the compound continued to pose a challenge for us.”

Both of the verticillin A compounds consist of two identical fragments that must be joined together to form a molecule called a dimer. To create (+)-11,11'-dideoxyverticillin A, the researchers had performed the dimerization reaction near the end of the synthesis, then added four critical carbon-sulfur bonds.

Yet when trying to synthesize verticillin A, the researchers found that waiting to add those carbon-sulfur bonds at the end did not result in the correct stereochemistry. As a result, the researchers had to rethink their approach and ended up creating a very different synthetic sequence.

“What we learned was the timing of the events is absolutely critical. We had to significantly change the order of the bond-forming events,” Movassaghi says.

The verticillin A synthesis begins with an amino acid derivative known as beta-hydroxytryptophan, and then step-by-step, the researchers add a variety of chemical functional groups, including alcohols, ketones, and amides, in a way that ensures the correct stereochemistry.

A functional group containing two carbon-sulfur bonds and a disulfide bond was introduced early on to help control the stereochemistry of the molecule, but the sensitive disulfides had to be “masked” as pairs of sulfides to protect them from breaking down during subsequent chemical reactions. The disulfide-containing groups were then regenerated after the dimerization reaction.

“This particular dimerization really stands out in terms of the complexity of the substrates that we’re bringing together, which have such a dense array of functional groups and stereochemistry,” Movassaghi says.

The overall synthesis requires 16 steps from the beta-hydroxytryptophan starting material to verticillin A.

Killing cancer cells

Once the researchers had successfully completed the synthesis, they were also able to tweak it to generate derivatives of verticillin A. Researchers at Dana-Farber then tested these compounds against several types of diffuse midline glioma (DMG), a rare brain tumor that has few treatment options.

The researchers found that the DMG cell lines most susceptible to these compounds were those that have high levels of a protein called EZHIP. This protein, which plays a role in the methylation of DNA, has been previously identified as a potential drug target for DMG.

“Identifying the potential targets of these compounds will play a critical role in further understanding their mechanism of action, and more importantly, will help optimize the compounds from the Movassaghi lab to be more target specific for novel therapy development,” Qi says.

The verticillin derivatives appear to interact with EZHIP in a way that increases DNA methylation, which induces the cancer cells to undergo programmed cell death. The compounds that were most successful at killing these cells were N-sulfonylated (+)-11,11'-dideoxyverticillin A and N-sulfonylated verticillin A. N-sulfonylation — the addition of a functional group containing sulfur and oxygen — makes the molecules more stable.

“The natural product itself is not the most potent, but it’s the natural product synthesis that brought us to a point where we can make these derivatives and study them,” Movassaghi says.

The Dana-Farber team is now working on further validating the mechanism of action of the verticillin derivatives, and they also hope to begin testing the compounds in animal models of pediatric brain cancers.

“Natural compounds have been valuable resources for drug discovery, and we will fully evaluate the therapeutic potential of these molecules by integrating our expertise in chemistry, chemical biology, cancer biology, and patient care. We have also profiled our lead molecules in more than 800 cancer cell lines, and will be able to understand their functions more broadly in other cancers,” Qi says.

The research was funded by the National Institute of General Medical Sciences, the Ependymoma Research Foundation, and the Curing Kids Cancer Foundation.


Inaugural UROP mixer draws hundreds of students eager to gain research experience

The Institute will commit up to $1 million in new funding to increase supply of UROPs.



More than 600 undergraduate students crowded into the Stratton Student Center on Oct. 28 for MIT’s first-ever Institute-wide Undergraduate Research Opportunities Program (UROP) mixer.

“At MIT, we believe in the transformative power of learning by doing, and there’s no better example than UROP,” says MIT President Sally Kornbluth, who attended the mixer with Provost Anantha Chandrakasan and Chancellor Melissa Nobles. “The energy at the inaugural UROP mixer was exhilarating, and I’m delighted that students now have this easy way to explore different paths to the frontiers of research.”

The event gave students the chance to explore internships and undergraduate research opportunities — in fields ranging from artificial intelligence to the life sciences to the arts, and beyond — all in one place, with approximately 150 researchers from labs available to discuss the projects and answer questions in real time. The offices of the Chancellor and Provost co-hosted the event, which the UROP office helped coordinate. 

First-year student Isabell Luo recently began a UROP project in the Living Matter lab led by Professor Rafael Gómez-Bombarelli, where she is benchmarking machine-learned interatomic potentials that simulate chemical reactions at the molecular level and exploring fine-tuning strategies to improve their accuracy. She’s passionate about AI and machine learning, eco-friendly design, and entrepreneurship, and was attending the UROP mixer to find more “real-world” projects to work on.

“I’m trying to dip my toes into different areas, which is why I’m at the mixer,” said Luo. “On the internet it would be so hard to find the right opportunities. It’s nice to have a physical space and speak to people from so many disciplines.”

More than nine out of every 10 members of MIT’s class of 2025 took part in a UROP before graduating. In recent years, approximately 3,200 undergraduates have participated in a UROP project each year. To meet the strong demand for UROPs, the Institute will commit up to $1 million in funding this year to create more of them. The funding will come from MIT’s schools and Office of the Provost. 

“UROPs have become an indispensable part of the MIT undergraduate education, providing hands-on experience that really helps students learn new ways to problem-solve and innovate,” says Chandrakasan. “I was thrilled to see so many students at the mixer — it was a testament to their willingness to roll up their sleeves and get to work on really tough challenges.”

Arielle Berman, a postdoc in the Raman Lab, was looking to recruit an undergraduate researcher for a project on sensor integration for muscle actuators for biohybrid robots — robots that include living parts. She spoke about how her own research experience as an undergraduate had shaped her career.

“It’s a really important event because we’re able to expose undergraduates to research,” says Berman. “I’m the first PhD in my family, so I wasn’t aware that research existed, or could be a career. Working in a research lab as an undergraduate student changed my life trajectory, and I’m happy to pass it forward and help students have experiences they wouldn’t have otherwise.”

The event drew students with interests as varied as the projects available. For first-year Nate Black, who plans to major in mechanical engineering, “I just wanted something to develop my interest in 3D printing and additive manufacturing.” First-year Akpandu Ekezie, who expects to major in Course 6-5 (Electrical Engineering with Computing), was interested in photonic circuits. “I’m looking mainly for EE-related things that are more hands-on,” he explained. “I want to get more physical experience.”

Nobles has a message for students considering a UROP project: Just go for it. “There’s a UROP for every student, regardless of experience,” she says. “Find something that excites you and give it a try.” She encourages students who weren’t able to attend the mixer, as well as those who did attend but still have questions, to get in touch with the UROP office.

First-year students Ruby Mykkanen and Aditi Deshpande attended the mixer together. Both were searching for UROP projects they could work on during Independent Activities Period in January. Deshpande also noted that the mixer was helpful for understanding “what research is being done at MIT.”

Said Mykkanen, “It’s fun to have it all in one place!”


Exploring how AI will shape the future of work

For PhD student Benjamin Manning, the future of work means understanding how AI acts on our behalf, and using it to transform and accelerate social scientific discovery.


“MIT hasn’t just prepared me for the future of work — it’s pushed me to study it. As AI systems become more capable, more of our online activity will be carried out by artificial agents. That raises big questions: How should we design these systems to understand our preferences? What happens when AI begins making many of our decisions?”

These are some of the questions MIT Sloan School of Management PhD candidate Benjamin Manning is researching. Part of his work investigates how to design and evaluate artificial intelligence agents that act on behalf of people, and how their behavior shapes markets and institutions. 

Previously, he received a master’s degree in public policy from the Harvard Kennedy School and a bachelor’s in mathematics from Washington University in St. Louis. After working as a research assistant, Manning knew he wanted to pursue an academic career.

“There’s no better place in the world to study economics and computer science than MIT,” he says. “Nobel and Turing award winners are everywhere, and the IT group lets me explore both fields freely. It was my top choice — when I was accepted, the decision was clear.” 

After receiving his PhD, Manning hopes to secure a faculty position at a business school and do the same type of work that MIT Sloan professors — his mentors — do every day.

“Even in my fourth year, it still feels surreal to be an MIT student. I don’t think that feeling will ever fade. My mom definitely won’t ever get over telling people about it.”

Of his MIT Sloan experience, Manning says he didn’t know it was possible to learn so much so quickly. “It’s no exaggeration to say I learned more in my first year as a PhD candidate than in all four years of undergrad. While the pace can be intense, wrestling with so many new ideas has been incredibly rewarding. It’s given me the tools to do novel research in economics and AI — something I never imagined I’d be capable of.”

For Manning, an economist studying AI simulations of humans, the future of work means not only understanding how AI acts on our behalf, but also radically improving and accelerating social scientific discovery.

“Another part of my research agenda explores how well AI systems can simulate human responses. I envision a future where researchers test millions of behavioral simulations in minutes, rapidly prototyping experimental designs, and identifying promising research directions before investing in costly human studies. This isn’t about replacing human insight, but amplifying it: Scientists can focus on asking better questions, developing theory, and interpreting results while AI handles the computational heavy lifting.”

He’s excited by the prospect: “We are possibly moving toward a world where the pace of understanding may get much closer to the speed of economic change.”


Artificial tendons give muscle-powered robots a boost

The new design from MIT engineers could pump up many biohybrid builds.


Our muscles are nature’s actuators. The sinewy tissue is what generates the forces that make our bodies move. In recent years, engineers have used real muscle tissue to actuate “biohybrid robots” made from both living tissue and synthetic parts. By pairing lab-grown muscles with synthetic skeletons, researchers are engineering a menagerie of muscle-powered crawlers, walkers, swimmers, and grippers.

But for the most part, these designs are limited in the amount of motion and power they can produce. Now, MIT engineers are aiming to give bio-bots a power lift with artificial tendons.

In a study appearing today in the journal Advanced Science, the researchers developed artificial tendons made from tough and flexible hydrogel. They attached the rubber band-like tendons to either end of a small piece of lab-grown muscle, forming a “muscle-tendon unit.” Then they connected the ends of each artificial tendon to the fingers of a robotic gripper.

When they stimulated the central muscle to contract, the tendons pulled the gripper’s fingers together. The robot pinched its fingers together three times faster, and with 30 times greater force, compared with the same design without the connecting tendons.

The researchers envision that the new muscle-tendon unit could be fitted to a wide range of biohybrid robot designs, much like a universal engineering element.

“We are introducing artificial tendons as interchangeable connectors between muscle actuators and robotic skeletons,” says lead author Ritu Raman, an assistant professor of mechanical engineering (MechE) at MIT. “Such modularity could make it easier to design a wide range of robotic applications, from microscale surgical tools to adaptive, autonomous exploratory machines.”

The study’s MIT co-authors include graduate students Nicolas Castro, Maheera Bawa, Bastien Aymon, Sonika Kohli, and Angel Bu; undergraduate Annika Marschner; postdoc Ronald Heisser; alumni Sarah J. Wu ’19, SM ’21, PhD ’24 and Laura Rosado ’22, SM ’25; and MechE professors Martin Culpepper and Xuanhe Zhao.

Muscle’s gains

Raman and her colleagues at MIT are at the forefront of biohybrid robotics, a relatively new field that has emerged in the last decade. They focus on combining synthetic, structural robotic parts with living muscle tissue as natural actuators.

“Most actuators that engineers typically work with are really hard to make small,” Raman says. “Past a certain size, the basic physics doesn’t work. The nice thing about muscle is, each cell is an independent actuator that generates force and produces motion. So you could, in principle, make robots that are really small.”

Muscle actuators also come with other advantages, which Raman’s team has already demonstrated: The tissue can grow stronger as it works out, and can naturally heal when injured. For these reasons, Raman and others envision that muscly droids could one day be sent out to explore environments that are too remote or dangerous for humans. Such muscle-bound bots could build up their strength for unforeseen traverses or heal themselves when help is unavailable. Biohybrid bots could also serve as small, surgical assistants that perform delicate, microscale procedures inside the body.

All these future scenarios are motivating Raman and others to find ways to pair living muscles with synthetic skeletons. Designs to date have involved growing a band of muscle and attaching either end to a synthetic skeleton, similar to looping a rubber band around two posts. When the muscle is stimulated to contract, it can pull the parts of a skeleton together to generate a desired motion.

But Raman says this method produces a lot of wasted muscle that is used to attach the tissue to the skeleton rather than to make it move. And that connection isn’t always secure. Muscle is quite soft compared with skeletal structures, and the difference can cause muscle to tear or detach. What’s more, it is often only the contractions in the central part of the muscle that end up doing any work — an amount that’s relatively small and generates little force.

“We thought, how do we stop wasting muscle material, make it more modular so it can attach to anything, and make it work more efficiently?” Raman says. “The solution the body has come up with is to have tendons that are halfway in stiffness between muscle and bone, that allow you to bridge this mechanical mismatch between soft muscle and rigid skeleton. They’re like thin cables that wrap around joints efficiently.”

“Smartly connected”

In their new work, Raman and her colleagues designed artificial tendons to connect natural muscle tissue with a synthetic gripper skeleton. Their material of choice was hydrogel — a squishy yet sturdy polymer-based gel. Raman obtained hydrogel samples from her colleague and co-author Xuanhe Zhao, who has pioneered the development of hydrogels at MIT. Zhao’s group has derived recipes for hydrogels of varying toughness and stretch that can stick to many surfaces, including synthetic and biological materials.

To figure out how tough and stretchy artificial tendons should be in order to work in their gripper design, Raman’s team first modeled the design as a simple system of three types of springs, each representing the central muscle, the two connecting tendons, and the gripper skeleton. They assigned a certain stiffness to the muscle and skeleton, which were previously known, and used this to calculate the stiffness of the connecting tendons that would be required in order to move the gripper by a desired amount.
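The three-spring calculation described above can be sketched in code. The series-spring force balance and every stiffness value here are assumptions for illustration; the team's actual model and measured parameters are not reproduced in this article.

```python
# Hypothetical spring-in-series model of the muscle-tendon-gripper system:
# the muscle is an active contraction delta0 acting through its own
# stiffness k_m, in series with two tendons (stiffness k_t each) and the
# gripper skeleton (stiffness k_s). All numbers are illustrative.

def required_tendon_stiffness(k_m, k_s, delta0, x_target):
    """Tendon stiffness k_t needed so that a free muscle contraction
    delta0 deflects the skeleton by x_target.

    Force balance for springs in series:
        F = k_eff * delta0, with 1/k_eff = 1/k_m + 2/k_t + 1/k_s,
    and the skeleton deflection is x = F / k_s.
    """
    k_eff = k_s * x_target / delta0  # effective stiffness the chain needs
    inv_tendons = 1.0 / k_eff - 1.0 / k_m - 1.0 / k_s
    if inv_tendons <= 0:
        raise ValueError("target deflection unreachable: tendons would "
                         "need infinite (or negative) stiffness")
    return 2.0 / inv_tendons

# Example: soft muscle (0.5 N/mm), stiffer skeleton (5 N/mm), 1 mm free
# contraction, 0.05 mm desired gripper deflection.
k_t = required_tendon_stiffness(k_m=0.5, k_s=5.0, delta0=1.0, x_target=0.05)
print(f"required tendon stiffness = {k_t:.2f} N/mm")  # 1.11 N/mm
```

The model captures the mismatch the article describes: if the tendons (or the muscle) are too compliant relative to the skeleton, almost all of the contraction is absorbed internally and the target deflection becomes unreachable.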

From this modeling, the team derived a recipe for hydrogel of a certain stiffness. Once the gel was made, the researchers carefully etched the gel into thin cables to form artificial tendons. They attached two tendons to either end of a small sample of muscle tissue, which they grew using lab-standard techniques. They then wrapped each tendon around a small post at the end of each finger of the robotic gripper — a skeleton design that was developed by MechE professor Martin Culpepper, an expert in designing and building precision machines.

When the team stimulated the muscle to contract, the tendons in turn pulled on the gripper to pinch its fingers together. Over multiple experiments, the researchers found that the muscle-tendon gripper worked three times faster and produced 30 times more force than when the gripper was actuated with a band of muscle tissue alone, without any artificial tendons. The new tendon-based design also maintained this performance over 7,000 cycles, or muscle contractions.

Overall, Raman saw that the addition of artificial tendons increased the robot’s power-to-weight ratio by 11 times, meaning that the system required far less muscle to do just as much work.

“You just need a small piece of actuator that’s smartly connected to the skeleton,” Raman says. “Normally, if a muscle is really soft and attached to something with high resistance, it will just tear itself before moving anything. But if you attach it to something like a tendon that can resist tearing, it can really transmit its force through the tendon, and it can move a skeleton that it wouldn’t have been able to move otherwise.”

The team’s new muscle-tendon design successfully merges biology with robotics, says biomedical engineer Simone Schürle-Finke, associate professor of health sciences and technology at ETH Zürich.

“The tough-hydrogel tendons create a more physiological muscle–tendon–bone architecture, which greatly improves force transmission, durability, and modularity,” says Schürle-Finke, who was not involved with the study. “This moves the field toward biohybrid systems that can operate repeatably and eventually function outside the lab.”

With the new artificial tendons in place, Raman’s group is moving forward to develop other elements, such as skin-like protective casings, to enable muscle-powered robots in practical, real-world settings.

This research was supported, in part, by the U.S. Department of Defense Army Research Office, the MIT Research Support Committee, and the National Science Foundation.


Researchers discover a shortcoming that makes LLMs less reliable

Large language models can learn to mistakenly link certain sentence patterns with specific topics — and may then repeat these patterns instead of reasoning.


Large language models (LLMs) sometimes learn the wrong lessons, according to an MIT study.

Rather than answering a query based on domain knowledge, an LLM could respond by leveraging grammatical patterns it learned during training. This can cause a model to fail unexpectedly when deployed on new tasks.

The researchers found that models can mistakenly link certain sentence patterns to specific topics, so an LLM might give a convincing answer by recognizing familiar phrasing instead of understanding the question.

Their experiments showed that even the most powerful LLMs can make this mistake.

This shortcoming could reduce the reliability of LLMs that perform tasks like handling customer inquiries, summarizing clinical notes, and generating financial reports.

It could also pose safety risks. A nefarious actor could exploit this weakness to trick LLMs into producing harmful content, even when the models have safeguards to prevent such responses.

After identifying this phenomenon and exploring its implications, the researchers developed a benchmarking procedure to evaluate a model’s reliance on these incorrect correlations. The procedure could help developers mitigate the problem before deploying LLMs.

“This is a byproduct of how we train models, but models are now used in practice in safety-critical domains far beyond the tasks that created these syntactic failure modes. If you’re not familiar with model training as an end-user, this is likely to be unexpected,” says Marzyeh Ghassemi, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of MIT’s Institute for Medical Engineering and Science and the Laboratory for Information and Decision Systems, and the senior author of the study.

Ghassemi is joined by co-lead authors Chantal Shaib, a graduate student at Northeastern University and visiting student at MIT; and Vinith Suriyakumar, an MIT graduate student; as well as Levent Sagun, a research scientist at Meta; and Byron Wallace, the Sy and Laurie Sternberg Interdisciplinary Associate Professor and associate dean of research at Northeastern University’s Khoury College of Computer Sciences. A paper describing the work will be presented at the Conference on Neural Information Processing Systems.

Stuck on syntax

LLMs are trained on a massive amount of text from the internet. During this training process, the model learns to understand the relationships between words and phrases — knowledge it uses later when responding to queries.

In prior work, the researchers found that LLMs pick up patterns in the parts of speech that frequently appear together in training data. They call these part-of-speech patterns “syntactic templates.”

LLMs need this understanding of syntax, along with semantic knowledge, to answer questions in a particular domain.

“In the news domain, for instance, there is a particular style of writing. So, not only is the model learning the semantics, it is also learning the underlying structure of how sentences should be put together to follow a specific style for that domain,” Shaib explains.   

But in this research, they determined that LLMs learn to associate these syntactic templates with specific domains. The model may incorrectly rely solely on this learned association when answering questions, rather than on an understanding of the query and subject matter.

For instance, an LLM might learn that a question like “Where is Paris located?” is structured as adverb/verb/proper noun/verb. If the model’s training data contain many examples of that sentence construction, the LLM may come to associate that syntactic template with questions about countries.

So, if the model is given a new question with the same grammatical structure but nonsense words, like “Quickly sit Paris clouded?” it might answer “France” even though that answer makes no sense.

“This is an overlooked type of association that the model learns in order to answer questions correctly. We should be paying closer attention to not only the semantics but the syntax of the data we use to train our models,” Shaib says.

Missing the meaning

The researchers tested this phenomenon by designing synthetic experiments in which only one syntactic template appeared in the model’s training data for each domain. They tested the models by substituting words with synonyms, antonyms, or random words, but kept the underlying syntax the same.
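The substitution probe they describe can be sketched in a few lines. Everything here is an illustrative stand-in, not the authors' actual benchmark: a real probe would use a part-of-speech tagger rather than this toy hand-labeled lexicon.

```python
import random

# Toy POS-tagged lexicon (illustrative only; a real probe would POS-tag
# training-domain text and sample substitutes from a full vocabulary).
LEXICON = {
    "ADV":   ["quickly", "softly", "where"],
    "VERB":  ["sit", "is", "clouded", "located"],
    "PROPN": ["Paris", "Tokyo", "Cairo"],
}

def perturb(template, rng):
    """Fill a part-of-speech template with random words,
    preserving the syntax while destroying the meaning."""
    return " ".join(rng.choice(LEXICON[pos]) for pos in template) + "?"

rng = random.Random(0)
template = ["ADV", "VERB", "PROPN", "VERB"]  # e.g. "Where is Paris located?"
nonsense = perturb(template, rng)
# A model relying on the syntax-domain shortcut may still give a
# country-style answer even though `nonsense` has no coherent meaning.
print(nonsense)
```

Comparing the model's answers on the original and perturbed questions reveals how much of its behavior rests on the template rather than the meaning.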

In each instance, they found that LLMs often still responded with the correct answer, even when the question was complete nonsense.

When they restructured the same question using a new part-of-speech pattern, the LLMs often failed to give the correct response, even though the underlying meaning of the question remained the same.

They used this approach to test pre-trained LLMs like GPT-4 and Llama, and found that this same learned behavior significantly lowered their performance.

Curious about the broader implications of these findings, the researchers studied whether someone could exploit this phenomenon to elicit harmful responses from an LLM that has been deliberately trained to refuse such requests.

They found that, by phrasing the question using a syntactic template the model associates with a “safe” dataset (one that doesn’t contain harmful information), they could trick the model into overriding its refusal policy and generating harmful content.

“From this work, it is clear to me that we need more robust defenses to address security vulnerabilities in LLMs. In this paper, we identified a new vulnerability that arises due to the way LLMs learn. So, we need to figure out new defenses based on how LLMs learn language, rather than just ad hoc solutions to different vulnerabilities,” Suriyakumar says.

While the researchers didn’t explore mitigation strategies in this work, they developed an automatic benchmarking technique one could use to evaluate an LLM’s reliance on this incorrect syntax-domain correlation. This new test could help developers proactively address this shortcoming in their models, reducing safety risks and improving performance.

In the future, the researchers want to study potential mitigation strategies, which could involve augmenting training data to provide a wider variety of syntactic templates. They are also interested in exploring this phenomenon in reasoning models, special types of LLMs designed to tackle multi-step tasks.

“I think this is a really creative angle to study failure modes of LLMs. This work highlights the importance of linguistic knowledge and analysis in LLM safety research, an aspect that hasn’t been at the center stage but clearly should be,” says Jessy Li, an associate professor at the University of Texas at Austin, who was not involved with this work.

This work is funded, in part, by a Bridgewater AIA Labs Fellowship, the National Science Foundation, the Gordon and Betty Moore Foundation, a Google Research Award, and Schmidt Sciences.


MIT scientists debut a generative AI model that could create molecules addressing hard-to-treat diseases

BoltzGen generates protein binders for any biological target from scratch, expanding AI’s reach from understanding biology toward engineering it.


More than 300 people across academia and industry spilled into an auditorium to attend a BoltzGen seminar on Thursday, Oct. 30, hosted by the Abdul Latif Jameel Clinic for Machine Learning in Health (MIT Jameel Clinic). Headlining the event was MIT PhD student and BoltzGen’s first author Hannes Stärk, who had announced BoltzGen just a few days prior.

Building upon Boltz-2, an open-source biomolecular structure prediction model that also predicts protein binding affinity and made waves over the summer, BoltzGen (officially released on Sunday, Oct. 26) is the first model of its kind to go a step further by generating novel protein binders that are ready to enter the drug discovery pipeline.

Three key innovations make this possible: first, BoltzGen’s ability to carry out a variety of tasks, unifying protein design and structure prediction while maintaining state-of-the-art performance. Next, BoltzGen’s built-in constraints are designed with feedback from wetlab collaborators to ensure the model creates functional proteins that don’t defy the laws of physics or chemistry. Lastly, a rigorous evaluation process tests the model on “undruggable” disease targets, pushing the limits of BoltzGen’s binder generation capabilities.

Most models used in industry or academia are capable of either structure prediction or protein design. Moreover, they’re limited to generating certain types of proteins that bind successfully to easy “targets.” Much like students responding to a test question that looks like their homework, as long as the training data looks similar to the target during binder design, the models often work. But existing methods are nearly always evaluated on targets for which structures with binders already exist, and end up faltering in performance when used on more challenging targets.

“There have been models trying to tackle binder design, but the problem is that these models are modality-specific,” Stärk points out. “A general model does not only mean that we can address more tasks. Additionally, we obtain a better model for the individual task since emulating physics is learned by example, and with a more general training scheme, we provide more such examples containing generalizable physical patterns.”

The BoltzGen researchers went out of their way to test BoltzGen on 26 targets, ranging from therapeutically relevant cases to ones explicitly chosen for their dissimilarity to the training data. 

This comprehensive validation process, which took place in eight wetlabs across academia and industry, demonstrates the model’s breadth and potential for breakthrough drug development.

Parabilis Medicines, one of the industry collaborators that tested BoltzGen in a wetlab setting, praised BoltzGen’s potential: “We feel that adopting BoltzGen into our existing Helicon peptide computational platform capabilities promises to accelerate our progress to deliver transformational drugs against major human diseases.”

While the open-source releases of Boltz-1, Boltz-2, and now BoltzGen (which was previewed at the 7th Molecular Machine Learning Conference on Oct. 22) bring new opportunities and transparency in drug development, they also signal that biotech and pharmaceutical industries may need to reevaluate their offerings. 

Amid the buzz for BoltzGen on the social media platform X, Justin Grace, a principal machine learning scientist at LabGenius, raised a question. “The private-to-open performance time lag for chat AI systems is [seven] months and falling,” Grace wrote in a post. “It looks to be even shorter in the protein space. How will binder-as-a-service co’s be able to [recoup] investment when we can just wait a few months for the free version?” 

For those in academia, BoltzGen represents an expansion and acceleration of scientific possibility. “A question that my students often ask me is, ‘where can AI change the therapeutics game?’” says senior co-author and MIT Professor Regina Barzilay, AI faculty lead for the Jameel Clinic and an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL). “Unless we identify undruggable targets and propose a solution, we won’t be changing the game,” she adds. “The emphasis here is on unsolved problems, which distinguishes Hannes’ work from others in the field.” 

Senior co-author Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science who is affiliated with the Jameel Clinic and CSAIL, notes that "models such as BoltzGen that are released fully open-source enable broader community-wide efforts to accelerate drug design capabilities.”

Looking ahead, Stärk believes that the future of biomolecular design will be upended by AI models. “I want to build tools that help us manipulate biology to solve disease, or perform tasks with molecular machines that we have not even imagined yet,” he says. “I want to provide these tools and enable biologists to imagine things that they have not even thought of before.”


Unlocking ammonia as a fuel source for heavy industry

Four MIT alumni say their startup, Amogy, has the technology to help decarbonize maritime shipping, power generation, manufacturing, and more.


At a high level, ammonia seems like a dream fuel: It’s carbon-free, energy-dense, and easier to move and store than hydrogen. Ammonia is also already manufactured and transported at scale, meaning it could transform energy systems using existing infrastructure. But burning ammonia creates dangerous nitrogen oxides, and splitting ammonia molecules to create hydrogen fuel typically requires lots of energy and specialized engines.

The startup Amogy, founded by four MIT alumni, believes it has the technology to finally unlock ammonia as a major fuel source. The company has developed a catalyst it says can split — or “crack” — ammonia into hydrogen and nitrogen up to 70 percent more efficiently than state-of-the-art systems today. The company is planning to sell its catalysts as well as modular systems including fuel cells and engines to convert ammonia directly to power. Those systems don’t burn or combust ammonia, and thus bypass the health concerns related to nitrogen oxides.

Since Amogy’s founding in 2020, the company has used its ammonia-cracking technology to create the world’s first ammonia-powered drone, tractor, truck, and tugboat. It has also attracted partnerships with industry leaders including Samsung, Saudi Aramco, KBR, and Hyundai, raising more than $300 million along the way.

“No one has showcased that ammonia can be used to power things at the scale of ships and trucks like us,” says CEO Seonghoon Woo PhD ’15, who founded the company with Hyunho Kim PhD ’18, Jongwon Choi PhD ’17, and Young Suk Jo SM ’13, PhD ’16. “We’ve demonstrated this approach works and is scalable.”

Earlier this year, Amogy completed a research and manufacturing facility in Houston and announced a pilot deployment of its catalyst with the global engineering firm JGC Holdings Corporation. Now, with a manufacturing contract secured with Samsung Heavy Industries, Amogy is set to start delivering more of its systems to customers next year. The company will deploy a 1-megawatt ammonia-to-power pilot project with the South Korean city of Pohang in 2026, with plans to scale up to 40 megawatts at that site by 2028 or 2029. Woo says dozens of other projects with multinational corporations are in the works.

Because of the power density advantages of ammonia over renewables and batteries, the company is targeting power-hungry industries like maritime shipping, power generation, construction, and mining for its early systems.

“This is only the beginning,” Woo says. “We’ve worked hard to build the technology and the foundation of our company, but the real value will be generated as we scale. We’ve proved the potential for ammonia to decarbonize heavy industry, and now we really want to accelerate adoption of our technology. We’re thinking long term about the energy transition.”

Unlocking a new fuel source

Woo and Choi completed their PhDs in MIT’s Department of Materials Science and Engineering before their eventual co-founders, Kim and Jo, completed their PhDs in MIT’s Department of Mechanical Engineering. Jo worked on energy science and ran experiments to make engines run more efficiently as part of his PhD.

“The PhD programs at MIT teach you how to think deeply about solving technical problems using systems-based approaches,” Woo says. “You also realize the value in learning from failures, and that mindset of iteration is similar to what you need to do in startups.”

In 2020, Woo was working in the semiconductor industry when he reached out to his eventual co-founders asking if they were working on anything interesting. At that time, Jo was still working on energy systems based on hydrogen and ammonia while Kim was developing new catalysts to create ammonia fuel.

“I wanted to start a company and build a business to do good things for society,” Woo recalls. “People had been talking about hydrogen as a more sustainable fuel source, but it had never come to fruition. We thought there might be a way to improve ammonia catalyst technology and accelerate the hydrogen economy.”

The founders started experimenting with Jo’s technology for ammonia cracking, the process in which ammonia (NH3) molecules split into their nitrogen (N2) and hydrogen (H2) constituent parts. Ammonia cracking to date has been done at huge plants in high-temperature reactors that require large amounts of energy. Those high temperatures limited the catalyst materials that could be used to drive the reaction.
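For reference, the overall cracking reaction is endothermic, which is why conventional crackers demand so much heat; the enthalpy figure below is a standard thermochemical value, not an Amogy-specific number:

```latex
2\,\mathrm{NH_3} \longrightarrow \mathrm{N_2} + 3\,\mathrm{H_2},
\qquad \Delta H^{\circ} \approx +92\ \mathrm{kJ}
\quad (\text{about } 46\ \mathrm{kJ\ per\ mole\ of\ NH_3})
```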

Starting from scratch, the founders were able to identify new material recipes that could be used to miniaturize the catalyst and work at lower temperatures. The proprietary catalyst materials allow the company to create a system that can be deployed in new places at lower costs.

“We really had to redevelop the whole technology, including the catalyst and reformer, and even the integration with the larger system,” Woo says. “One of the most important things is we don’t combust ammonia — we don’t need pilot fuel, and we don’t generate any NOx or CO2.”

Today Amogy has a portfolio of proprietary catalyst technologies that use base metals along with precious metals. The company has proven the efficiency of its catalysts in demonstrations beginning with the first ammonia-powered drone in 2021. The catalyst can be used to produce hydrogen more efficiently, and by integrating the catalyst with hydrogen fuel cells or engines, Amogy also offers modular ammonia-to-power systems that can scale to meet customer energy demands.

“We’re enabling the decarbonization of heavy industry,” Woo says. “We are targeting transportation, chemical production, manufacturing, and industries that are carbon-heavy and need to decarbonize soon, for example to achieve domestic goals. Our vision in the longer term is to enable ammonia as a fuel in a variety of applications, including power generation, first at microgrids and then eventually full grid-scale.”

Scaling with industry

When Amogy completed its facility in Houston, one of their early visitors was MIT Professor Evelyn Wang, who is also MIT’s vice president for energy and climate. Woo says other people involved in the Climate Project at MIT have been supportive.

Another key partner for Amogy is Samsung Heavy Industries, which announced a multiyear deal to manufacture Amogy’s ammonia-to-power systems on Nov. 12.

“Our strategy is to partner with the existing big players in heavy industry to accelerate the commercialization of our technology,” Woo says. “We have worked with big oil and gas companies like BHP and Saudi Aramco, companies interested in hydrogen fuel like KBR and Mitsubishi, and many more industrial companies.”

When paired with other clean energy technologies to provide the power for its systems, Woo says Amogy offers a way to completely decarbonize sectors of the economy that can’t electrify on their own.

“In heavy transport, you have to use high-energy density liquid fuel because of the long distances and power requirements,” Woo says. “Batteries can’t meet those requirements. It’s why hydrogen is such an exciting molecule for heavy industry and shipping. But hydrogen needs to be kept super cold, whereas ammonia can be liquid at room temperature. Our job now is to provide that power at scale.”


Josh Randolph: Taking care of others as an EMT and ROTC leader

“I always wanted to be in public service, serve my community, and serve my country,” says the MIT mechanical engineering major.


In April, MIT senior Josh Randolph will race 26.2 miles across Concord, Massachusetts, and neighboring towns, carrying a 50-lb backpack. The race, called the Tough Ruck, honors America’s fallen military and first responders. For Randolph, it has been one of the most rewarding experiences of his time at MIT, and he’s never missed a race.

“I want to do things that are challenging and push me to learn more about myself,” says Randolph, a Nebraska native. “As soon as I found out about the Tough Ruck, I knew I was going to be a part of it.”

Carrying on tradition and honoring those before him is a priority for Randolph. Both of his grandfathers served in the United States Air Force, and now he’s following in their footsteps through leadership in the U.S. Air Force Reserve Officers’ Training Corps (AFROTC) at MIT. His work with MIT Emergency Medical Services (EMS) has inspired him to aim for medical school so he could join the Air Force as a doctor.

“I always wanted to be in public service, serve my community, and serve my country,” Randolph says.

Getting attached to medicine

Randolph was particularly close with his grandfather, who worked with electronics in the Air Force and later became an engineer.

“I’ve always seen him as a big role model of mine. He’s very proud of his service,” Randolph says. A mechanical engineering major, he shares his grandfather’s interest in the scientific and technical side of the military.

But Randolph hasn’t let his commitment to the Air Force narrow his experiences at MIT.

He signed up for MIT EMS in his sophomore year as a way to push out of his comfort zone. Although he didn’t have a strong interest in medicine at the time, he was excited about being responsible for providing essential services to his community.

“If somebody’s in need on campus, they call 911, and we’re entrusted with the responsibility to help them out and keep them safe. I didn’t even know that was something you could do in college,” Randolph says.

Getting late-night calls and handling high-pressure situations took some getting used to, but he loved that he was helping.

“It feels a little uncomfortable at first, but then the more calls you run, the more experience you get and the more comfortable you feel with it, and then the more you want to do,” Randolph says.

Since joining in his second year, Randolph has responded to more than 100 911 calls and now holds the rank of provisional crew chief, meaning he provides basic life support patient care and coordinates on-scene operations.

His experiences interacting with patients and racing around Cambridge, Massachusetts, to help his community made him realize he would regret not pursuing medicine. In his final year at MIT, he set his sights on medical school. “Even though it was pretty late, I decided to make that switch and put my all into medicine,” Randolph says.

After serving as class officer during his junior year, helping to oversee the EMT certification process, Randolph became the director of professional development in his senior year. In this role, he oversees the training and development of service members as well as the quality of patient care. “It’s great to see how new students integrate and gain bigger roles and become more involved with the services,” Randolph says. “It’s really rewarding to contribute a little bit toward their development within EMS and then also just as people.”

Leadership in the ROTC

Randolph knew he would be a part of Air Force ROTC since early in high school. He later earned the Air Force ROTC Type 1 scholarship that gave him a tuition-free spot at MIT. It was through AFROTC that he became further committed to helping and honoring those around him, including through the Tough Ruck.

“Pretty often there are family members of fallen servicemembers who make tags with their loved one’s name on them and hand them out for people to carry on their rucks, which is pretty cool,” Randolph says of the race. “Overall, it is a really supportive environment, and I try to give as many people high fives and as much encouragement as I can, but at some point I get too tired and need to focus on running.”

His parents come out to watch every year.

In previous semesters, Randolph has served as flight commander and group commander within AFROTC’s Detachment 365, which is based at MIT and also hosts cadets from Harvard University, Tufts University, and Wellesley College. Currently, as squadron commander, he leads one of the 20-cadet units that make up the detachment. He has co-organized three Leadership Laboratories dedicated to training over 70 cadets.

Randolph has earned the AFROTC Field Training Superior Performance Award, the AFROTC Commendation Award, the AFROTC Achievement Award, and the Military Order of the World Wars Bronze Award. He has also received the AFROTC Academic Honors Award five times, the Physical Fitness Award four times, and the Maximum AFROTC Physical Fitness Assessment Award two times. 

He keeps his activities and schoolwork straight through to-do lists and calendar items, but he admits the workload can still be tough.

“One thing that has helped me is trying to prioritize and figure out what things need my attention immediately or what things will be very important. If it is something that is important and will affect or benefit a lot of people, I try and devote my energy toward that to make the most of my time and implement meaningful things,” Randolph says.

A human-centered direction

For the last two years, Randolph worked in the Pappalardo Laboratory as an apprentice and undergraduate assistant, helping students design, fabricate, and test robots they were building for a class design challenge. He has also conducted linguistics research with Professor Suzanne Flynn and worked in the labs of professor of nuclear science and engineering Michael Short and professor of biological and mechanical engineering Domitilla Del Vecchio.

Randolph has also volunteered through English for Speakers of Other Languages, helping MIT employees improve their English speaking and writing skills.

For now, he is excited to enter a more human-centered field through his studies in medicine. After watching his father survive two bouts of cancer, thanks in part to robotically assisted surgery, he hopes to develop robotic health care applications.

“I want to have a deeper and more tangible connection to people. Compassion and empathy are things that I really want to try and live by,” Randolph says. “I think being the most empathetic and compassionate with the people you take care of is always a good thing.”


Scientists get a first look at the innermost region of a white dwarf system

X-ray observations reveal surprising features of the dying star’s most energetic environment.


Some 200 light years from Earth, the core of a dead star is circling a larger star in a macabre cosmic dance. The dead star is a type of white dwarf whose powerful magnetic field pulls material from the larger star into a swirling accretion disk. The spiraling pair is what’s known as an “intermediate polar” — a type of star system that gives off a complex pattern of intense radiation, including X-rays, as gas from the larger star falls onto the other one.

Now, MIT astronomers have used an X-ray telescope in space to identify key features in the system’s innermost region — an extremely energetic environment that has been inaccessible to most telescopes until now. In an open-access study published in the Astrophysical Journal, the team reports using NASA’s Imaging X-ray Polarimetry Explorer (IXPE) to observe the intermediate polar, known as EX Hydrae.

The team found a surprisingly high degree of X-ray polarization, which describes the direction of an X-ray wave’s electric field, as well as an unexpected direction of polarization in the X-rays coming from EX Hydrae. From these measurements, the researchers traced the X-rays back to their source in the system’s innermost region, close to the surface of the white dwarf.

What’s more, they determined that the system’s X-rays were emitted from a column of white-hot material that the white dwarf was pulling in from its companion star. They estimate that this column is about 2,000 miles high — about half the radius of the white dwarf itself and much taller than what physicists had predicted for such a system. They also determined that the X-rays are reflected off the white dwarf’s surface before scattering into space — an effect that physicists suspected but hadn’t confirmed until now.

The team’s results demonstrate that X-ray polarimetry can be an effective way to study extreme stellar environments such as the most energetic regions of an accreting white dwarf.

“We showed that X-ray polarimetry can be used to make detailed measurements of the white dwarf's accretion geometry,” says Sean Gunderson, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research, who is the study’s lead author. “It opens the window into the possibility of making similar measurements of other types of accreting white dwarfs that also have never had predicted X-ray polarization signals.”

 

Gunderson’s MIT Kavli co-authors include graduate student Swati Ravi and research scientists Herman Marshall and David Huenemoerder, along with Dustin Swarm of the University of Iowa, Richard Ignace of East Tennessee State University, Yael Nazé of the University of Liège, and Pragati Pradhan of Embry-Riddle Aeronautical University.

A high-energy fountain

All forms of light, including X-rays, are influenced by electric and magnetic fields. Light travels in waves that wiggle, or oscillate, at right angles to the direction in which the light is traveling. External electric and magnetic fields can pull these oscillations in random directions. But when light interacts and bounces off a surface, it can become polarized, meaning that its vibrations tighten up in one direction. Polarized light, then, can be a way for scientists to trace the source of the light and discern some details about the source’s geometry.

The IXPE space observatory is NASA’s first mission designed to study polarized X-rays that are emitted by extreme astrophysical objects. The spacecraft, which launched in 2021, orbits the Earth and records these polarized X-rays. Since launch, it has primarily focused on supernovae, black holes, and neutron stars.

The new MIT study is the first to use IXPE to measure polarized X-rays from an intermediate polar — a smaller system compared to black holes and supernovae that is nevertheless known to be a strong emitter of X-rays.

“We started talking about how much polarization would be useful to get an idea of what’s happening in these types of systems, which most telescopes see as just a dot in their field of view,” Marshall says.

An intermediate polar gets its name from the strength of the central white dwarf’s magnetic field. When this field is strong, the material from the companion star is directly pulled toward the white dwarf’s magnetic poles. When the field is very weak, the stellar material instead swirls around the dwarf in an accretion disk that eventually deposits matter directly onto the dwarf’s surface.

In the case of an intermediate polar, physicists predict that material should fall in a complex sort of in-between pattern, forming an accretion disk that also gets pulled toward the white dwarf’s poles. The magnetic field should lift the disk of incoming material far upward, like a high-energy fountain, before the stellar debris falls toward the white dwarf’s magnetic poles, at speeds of millions of miles per hour, in what astronomers refer to as an “accretion curtain.” Physicists suspect that this falling material should run up against previously lifted material that is still falling toward the poles, creating a sort of traffic jam of gas. This pile-up of matter forms a column of colliding gas that is tens of millions of degrees Fahrenheit and should emit high-energy X-rays.

An innermost picture

By measuring any polarized X-rays emitted by EX Hydrae, the team aimed to test the picture of intermediate polars that physicists had hypothesized. In January 2025, IXPE took a total of about 600,000 seconds, or about seven days’ worth, of X-ray measurements from the system.

“With every X-ray that comes in from the source, you can measure the polarization direction,” Marshall explains. “You collect a lot of these, and they’re all at different angles and directions which you can average to get a preferred degree and direction of the polarization.”
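The averaging Marshall describes is conventionally done with Stokes parameters, where each photon contributes at twice its measured position angle. The sketch below illustrates that idea on synthetic photons, not EX Hydrae data; the modulation factor is an assumed instrument-response value.

```python
import math
import random

def polarization_from_angles(photon_angles_deg, modulation_factor=0.3):
    """Estimate polarization degree and angle from per-photon position
    angles (degrees) via normalized Stokes parameters. The modulation
    factor models the instrument response (assumed value here)."""
    n = len(photon_angles_deg)
    # Each photon contributes to Q and U at twice its position angle.
    q = sum(math.cos(2 * math.radians(a)) for a in photon_angles_deg) * 2 / n
    u = sum(math.sin(2 * math.radians(a)) for a in photon_angles_deg) * 2 / n
    degree = math.hypot(q, u) / modulation_factor
    angle = 0.5 * math.degrees(math.atan2(u, q))
    return degree, angle

# Synthetic photons: mostly random angles plus a weakly preferred direction.
rng = random.Random(1)
angles = [rng.uniform(0.0, 180.0) for _ in range(5000)] + [60.0] * 400
degree, angle = polarization_from_angles(angles)
print(f"polarization degree ~ {degree:.1%} at {angle:.0f} degrees")
```

With enough photons, the random contributions average away and the preferred direction and degree emerge, which is why the seven-day exposure mattered.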

Their measurements revealed a polarization degree of 8 percent, much higher than some theoretical models had predicted. From there, the researchers were able to confirm that the X-rays were indeed coming from the system’s column, and that this column is about 2,000 miles high.

“If you were able to stand somewhat close to the white dwarf’s pole, you would see a column of gas stretching 2,000 miles into the sky, and then fanning outward,” Gunderson says.

The team also measured the direction of EX Hydrae’s X-ray polarization, which they determined to be perpendicular to the white dwarf’s column of incoming gas. This was a sign that the X-rays emitted by the column were then bouncing off the white dwarf’s surface before traveling into space, and eventually into IXPE’s telescopes.

“The thing that’s helpful about X-ray polarization is that it’s giving you a picture of the innermost, most energetic portion of this entire system,” Ravi says. “When we look through other telescopes, we don’t see any of this detail.”

The team plans to apply X-ray polarization to study other accreting white dwarf systems, which could help scientists get a grasp on much larger cosmic phenomena.

“There comes a point where so much material is falling onto the white dwarf from a companion star that the white dwarf can’t hold it anymore, the whole thing collapses and produces a type of supernova that’s observable throughout the universe, which can be used to figure out the size of the universe,” Marshall offers. “So understanding these white dwarf systems helps scientists understand the sources of those supernovae, and tells you about the ecology of the galaxy.”

This research was supported, in part, by NASA.


New AI agent learns to use CAD to create 3D objects from sketches

The virtual VideoCAD tool could boost designers’ productivity and help train engineers learning computer-aided design.


Computer-Aided Design (CAD) is the go-to method for designing most of today’s physical products. Engineers use CAD to turn 2D sketches into 3D models that they can then test and refine before sending a final version to a production line. But the software is notoriously complicated to learn, with thousands of commands to choose from. To be truly proficient in the software takes a huge amount of time and practice.

MIT engineers are looking to ease CAD’s learning curve with an AI model that uses CAD software much like a human would. Given a 2D sketch of an object, the model quickly creates a 3D version by clicking buttons and file options, similar to how an engineer would use the software.

The MIT team has created a new dataset called VideoCAD, which contains more than 41,000 examples of how 3D models are built in CAD software. By learning from these videos, which illustrate how different shapes and objects are constructed step-by-step, the new AI system can now operate CAD software much like a human user.

With VideoCAD, the team is building toward an AI-enabled “CAD co-pilot.” They envision that such a tool could not only create 3D versions of a design, but also work with a human user to suggest next steps, or automatically carry out build sequences that would otherwise be tedious and time-consuming to manually click through.

“There’s an opportunity for AI to increase engineers’ productivity as well as make CAD more accessible to more people,” says Ghadi Nehme, a graduate student in MIT’s Department of Mechanical Engineering.

“This is significant because it lowers the barrier to entry for design, helping people without years of CAD training to create 3D models more easily and tap into their creativity,” adds Faez Ahmed, associate professor of mechanical engineering at MIT.

Ahmed and Nehme, along with graduate student Brandon Man and postdoc Ferdous Alam, will present their work at the Conference on Neural Information Processing Systems (NeurIPS) in December.

Click by click

The team’s new work expands on recent developments in AI-driven user interface (UI) agents — tools that are trained to use software programs to carry out tasks, such as automatically gathering information online and organizing it in an Excel spreadsheet. Ahmed’s group wondered whether such UI agents could be designed to use CAD, which encompasses many more features and functions, and involves far more complicated tasks than the average UI agent can handle.

In their new work, the team aimed to design an AI-driven UI agent that takes the reins of the CAD program to create a 3D version of a 2D sketch, click by click. To do so, the team first looked to an existing dataset of objects that were designed in CAD by humans. Each object in the dataset includes the sequence of high-level design commands, such as “sketch line,” “circle,” and “extrude,” that were used to build the final object.

However, the team realized that these high-level commands alone were not enough to train an AI agent to actually use CAD software. A real agent must also understand the details behind each action. For instance: Which sketch region should it select? When should it zoom in? And what part of a sketch should it extrude? To bridge this gap, the researchers developed a system to translate high-level commands into user-interface interactions.

“For example, let’s say we drew a sketch by drawing a line from point 1 to point 2,” Nehme says. “We translated those high-level actions to user-interface actions, meaning we say, go from this pixel location, click, and then move to a second pixel location, and click, while having the ‘line’ operation selected.”
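The kind of translation Nehme describes can be sketched roughly as follows. This is an illustrative toy, not the team’s actual code: the command format, event fields, and pixel coordinates are all assumptions made for the example.

```python
# Hypothetical sketch of expanding one high-level CAD command into
# click-level UI events; the command format, event fields, and pixel
# coordinates are illustrative assumptions, not the team's actual code.

def to_ui_actions(command):
    """Translate a high-level design command into a sequence of UI events."""
    if command["op"] == "sketch_line":
        x1, y1 = command["start"]  # pixel location of the line's first point
        x2, y2 = command["end"]    # pixel location of the line's second point
        return [
            {"type": "select_tool", "tool": "line"},  # pick the 'line' operation
            {"type": "move", "x": x1, "y": y1},
            {"type": "click"},                        # anchor the first point
            {"type": "move", "x": x2, "y": y2},
            {"type": "click"},                        # anchor the second point
        ]
    raise ValueError(f"unsupported command: {command['op']}")

actions = to_ui_actions({"op": "sketch_line", "start": (120, 80), "end": (340, 80)})
```

Each high-level command thus fans out into several low-level events, which is what gives the agent enough detail to actually drive the software.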

In the end, the team generated over 41,000 videos of human-designed CAD objects, each of which is described in real time in terms of the specific clicks, mouse drags, and keyboard actions that the human originally carried out. They then fed all this data into a model they developed to learn connections between UI actions and CAD object generation.

Once trained on this dataset, which they dub VideoCAD, the new AI model could take a 2D sketch as input and directly control the CAD software, clicking, dragging, and selecting tools to construct the full 3D shape. The objects ranged in complexity from simple brackets to more complicated house designs. The team is training the model on more complex shapes and envisions that both the model and the dataset could one day enable CAD co-pilots for designers in a wide range of fields.

“VideoCAD is a valuable first step toward AI assistants that help onboard new users and automate the repetitive modeling work that follows familiar patterns,” says Mehdi Ataei, who was not involved in the study, and is a senior research scientist at Autodesk Research, which develops new design software tools. “This is an early foundation, and I would be excited to see successors that span multiple CAD systems, richer operations like assemblies and constraints, and more realistic, messy human workflows.”


A new take on carbon capture

Mantel, founded by MIT alumni, developed a system that captures CO2 from factories and power plants while delivering steam to customers.


If there was one thing Cameron Halliday SM ’19, MBA ’22, PhD ’22 was exceptional at during the early days of his PhD at MIT, it was producing the same graph over and over again. Unfortunately for Halliday, the graph measured various materials’ ability to absorb CO2 at high temperatures over time — and it always pointed down and to the right. That meant the materials lost their ability to capture the molecules responsible for warming our climate.

At least Halliday wasn’t alone: For many years, researchers have tried and mostly failed to find materials that could reliably absorb CO2 at the super-high temperatures of industrial furnaces, kilns, and boilers. Halliday’s goal was to find something that lasted a little longer.

Then in 2019, he put a type of molten salt called lithium-sodium ortho-borate through his tests. The salts absorbed more than 95 percent of the CO2. And for the first time, the graph showed almost no degradation over 50 cycles. The same was true after 100 cycles. Then 1,000.

“I honestly don’t know if we ever expected to completely solve the problem,” Halliday says. “We just expected to improve the system. It took another two months to figure out why it worked.”

The researchers discovered the salts behave like a liquid at high temperatures, which avoids the brittle cracking responsible for the degradation of many solid materials.

“I remember walking home over the Mass Ave bridge at 5 a.m. with all the morning runners going by me,” Halliday recalls. “That was the moment when I realized what this meant. Since then, it’s been about proving it works at larger scales. We’ve just been building the next scaled-up version, proving it still works, building a bigger version, proving that out, until we reach the ultimate goal of deploying this everywhere.”

Today, Halliday is the co-founder and CEO of Mantel, a company building systems to capture carbon dioxide at large industrial sites of all types. Although a lot of people think the carbon capture industry is a dead end, Halliday doesn’t give up so easily, and he’s got a growing body of performance data to keep him encouraged.

Mantel’s system can be added on to the machines of power stations and factories making cement, steel, paper and pulp, oil and gas, and more, reducing their carbon emissions by around 95 percent. Instead of being released into the atmosphere, the emitted CO2 is channeled into Mantel’s system, where the company’s salts are sprayed out from something that looks like a shower head. The CO2 diffuses through the molten salts in a reaction that can be reversed through further temperature increases, so the salts boil off pure CO2 that can be transported for use or stored underground.

A key difference from other carbon capture methods that have struggled to be profitable is that Mantel uses the heat from its process to generate steam for customers by combining it with water in another part of its system. Mantel says delivering steam, which is used to drive many common industrial processes, lets its system work with just 3 percent of the net energy that state-of-the-art carbon capture systems require.

“We’re still consuming energy, but we get most of it back as steam, whereas the incumbent technology only consumes steam,” says Halliday, who co-founded Mantel with Sean Robertson PhD ’22 and Danielle Rapson. “That steam is a useful revenue stream, so we can turn carbon capture from a waste management process into a value creation process for our customer’s core business — whether that’s a power station using steam to make electricity, or oil and gas refineries. It completely changes the economics of carbon capture.”

From science to startup

Halliday’s first exposure to MIT came in 2016 when he cold emailed Alan Hatton, MIT’s Ralph Landau Professor of Chemical Engineering Practice, asking if he could come to his lab for the summer and work on research into carbon capture.

“He invited me, but he didn’t put me on that project,” Halliday recalls. “At the end of the summer he said, ‘You should consider coming back and doing a PhD.’”

Halliday enrolled in a joint PhD-MBA program the following year.

“I really wanted to work on something that had an impact,” Halliday says. “The dual PhD-MBA program has some deep technical academic elements to it, but you also work with a company for two months, so you use a lot of what you learn in the real world.”

Halliday worked on three different research projects in Hatton’s lab early on, all of which eventually turned into companies. The one that he stuck with explored ways to make carbon capture more energy efficient by working at the high temperatures common at emissions-heavy industrial sites.

Halliday ran into the same problems as past researchers with materials degrading at such extreme conditions.

“It was the big limiter for the technology,” Halliday recalls.

Then Halliday ran his successful experiment with molten borate salts in 2019. The MBA portion of his program began soon after, and Halliday decided to use that time to commercialize the technology. Part of that occurred in Course 15.366 (Climate and Energy Ventures), where Halliday met his co-founders. As it happens, alumni of the class have started more than 150 companies over the years. Halliday also received support from the MIT Energy Initiative.

“MIT tries to pull these great ideas out of academia and get them into the world so they can be valued and used,” Halliday says. “For the Climate and Energy Ventures class, outside speakers showed us every stage of company-building. The technology roadmap for our system is shoebox-sized, shipping container, one-bedroom house, and then the size of a building. It was really valuable to see other companies and say, ‘That’s what we could look like in three years, or six years.’”

From startup to scale up

When Mantel was officially founded in 2022, the founders had their shoebox-sized system. After raising early funding, the team built its shipping container-sized system at The Engine, an MIT-affiliated startup incubator. That system has been operational for almost two years.

Last year, Mantel announced a partnership with Kruger Inc. to build the next version of its system at a factory in Quebec, which will be operational next year. The plant will run in a two-year test phase before scaling across Kruger’s other plants if successful.

“The Quebec project is proving the capture efficiency and proving the step-change improvement in energy use of our system,” Halliday says. “It’s a derisking of the technology that will unlock a lot more opportunities.”

Halliday says Mantel is in conversations with close to 100 industrial partners around the world, including the owners of refineries, data centers, cement and steel plants, and oil and gas companies. Because it’s a standalone addition, Halliday says Mantel’s system doesn’t have to change much to be used in different industries.

Mantel doesn’t handle CO2 conversion or sequestration, but Halliday says capture makes up the bulk of the costs in the CO2 value chain. It also generates high-quality CO2 that can be transported in pipelines and used in industries including the food and beverage industry — like the CO2 that makes your soda bubbly.

“This is the solution our customers are dreaming of,” Halliday says. “It means they don’t have to shut down their billion-dollar asset and reimagine their business to address an issue that they all appreciate is existential. There are questions about the timeline, but most industries recognize this is a problem they’ll have to grapple with eventually. This is a pragmatic solution that’s not trying to reshape the world as we dream of it. It’s looking at the problem at hand today and fixing it.”


MIT researchers use CT scans to unravel mysteries of early metal production

The team adapted the medical technique to study slag waste that was a byproduct of ancient copper smelting.


Around 5,000 years ago, people living in what is now Iran began extracting copper from rock by processing ore, an activity known as smelting. This monumental shift gave them a powerful new technology and may have marked the birth of metallurgy. Soon after, people in different parts of the world were using copper and bronzes (alloys of copper and tin, or copper and arsenic) to produce decorative objects, weapons, tools, and more.

Studying how humans produced such objects is challenging because little evidence still exists, and artifacts that have survived are carefully guarded and preserved.

In a paper published in PLOS One, MIT researchers demonstrated a new approach to uncovering details of some of the earliest metallurgical processes. They studied 5,000-year-old slag waste, a byproduct of smelting ore, using techniques including X-ray computed tomography, also known as CT scanning. In their paper, they show how this noninvasive imaging technique, which has primarily been used in the medical field, can reveal fine details about structures within the pieces of ancient slag.

“Even though slag might not give us the complete picture, it tells stories of how past civilizations were able to refine raw materials from ore and then to metal,” says postdoc Benjamin Sabatini. “It speaks to their technological ability at that time, and it gives us a lot of information. The goal is to understand, from start to finish, how they accomplished making these shiny metal products.”

In the paper, Sabatini and senior author Antoine Allanore, a professor of metallurgy and the Heather N. Lechtman Professor of Materials Science and Engineering, combined CT scanning with more traditional methods of studying ancient artifacts, including cutting the samples for further analysis. They demonstrated that CT scanning could complement those techniques, revealing pores and droplets of different materials within samples. This information could shed light on the materials used by some of the first metallurgists on Earth, and on their technological sophistication.

“The Early Bronze Age is one of the earliest reported interactions between mankind and metals,” says Allanore, who is also director of MIT’s Center for Materials Research in Archaeology and Ethnology. “Artifacts in that region at that period are extremely important in archaeology, yet the materials themselves are not very well-characterized in terms of our understanding of the underlying materials and chemical processes. The CT scan approach is a transformation of traditional archaeological methods of determining how to make cuts and analyze samples.”

A new tool in archaeology

Slag is produced as a molten liquid when ores are heated to produce metal. The slag contains other constituent minerals from the ore, as well as unreacted metals, which are commonly mixed with additives like limestone. In the mixture, the slag is less dense than the metal, so it can rise and be removed, solidifying like lava as it cools.

“Slag waste is chemically complex to interpret because in our modern metallurgical practices it contains everything not desired in the final product — in particular, arsenic, which is a key element in the original minerals for copper,” says Allanore. “There’s always been a question in archaeometallurgy if we can use arsenic and similar elements in these remains to learn something about the metal production process. The challenge here is that these minerals, especially arsenic, are very prone to dissolution and leaching, and therefore their environmental stability creates additional problems in terms of interpreting what this object was when it was being made 6,000 years ago.”

For the study, the researchers used slag from an ancient site known as Tepe Hissar in Iran. The slag has previously been dated to the period between 3100 and 2900 BCE and was loaned by the Penn Museum to Allanore for study in 2022.

“This region is often brought up as one of the earliest places where evidence of copper processing and object production might have happened,” Allanore explains. “It is very well-preserved, and it’s an early example of a site with long-distance trade and highly organized society. That’s why it’s so important in metallurgy.”

The researchers believe this is the first attempt to study ancient slag using CT scanning, partly because medical-grade scanners are expensive and primarily located in hospitals. The researchers overcame these challenges by working with a local startup in Cambridge that makes industrial CT scanners. They also used the CT scanner on MIT’s campus.

“It was really out of curiosity to see if there was a better way to study these objects,” Sabatini says.

In addition to the CT scans, the researchers used more conventional archaeological analytical methods such as X-ray fluorescence, X-ray diffraction, and optical and scanning electron microscopy. The CT scans provided a detailed overall picture of the internal structure of the slag and the location of interesting features like pores and bits of different materials, augmenting the conventional techniques to impart more complete information about the inside of samples.

They used that information to decide where to section their sample, noting that researchers often guess where to section samples, unsure even which side of the sample was originally facing up or down.

“My strategy was to zero in on the high-density metal droplets that looked like they were still intact, since those might be most representative of the original process,” Sabatini says. “Then I could destructively analyze the samples with a single slice. The CT scanning shows you exactly what is most interesting, as well as the general layout of things you need to study.”

Finding stories in slag

In previous studies, some slag samples from the Tepe Hissar site contained copper and thus seemed to fit the narrative that they resulted from the production of copper, while others showed no evidence of copper at all.

The researchers found that CT scanning allowed them to characterize the intact droplets that contained copper. It also allowed them to identify where gases evolved, forming voids that hold information about how the slags were produced.

Other slags at the site had previously been found to contain small metallic arsenide compounds, leading to disagreements about the role of arsenic in early metal production. The MIT researchers found that arsenic existed in different phases across their samples and could move within the slag or even escape the slag entirely, making it complicated to infer metallurgical processes from the study of arsenic alone.

Moving forward, the researchers say CT scanning could be a powerful tool in archaeology to unravel complex ancient materials and processes.

“This should be an important lever for more systematic studies of the copper aspect of smelting, and also for continuing to understand the role of arsenic,” Allanore says. “It allows us to be cognizant of the role of corrosion and the long-term stability of the artifacts to continue to learn more. It will be a key support for people who want to investigate these questions.”

This work was supported, in part, by the MIT Human Insight Collaborative (MITHIC). The X-ray CT system is supported by MIT's Center for Advanced Production Technologies.


Ultrasonic device dramatically speeds harvesting of water from the air

The system can be paired with any atmospheric water harvesting material to shake out drinking water in minutes instead of hours.


Feeling thirsty? Why not tap into the air? Even in desert conditions, there exists some level of humidity that, with the right material, can be soaked up and squeezed out to produce clean drinking water. In recent years, scientists have developed a host of promising sponge-like materials for this “atmospheric water harvesting.”

But recovering the water from these materials usually requires heat — and time. Existing designs rely on heat from the sun to evaporate water from the materials and condense it into droplets. But this step can take hours or even days. 

Now, MIT engineers have come up with a way to quickly recover water from an atmospheric water harvesting material. Rather than wait for the sun to evaporate water out, the team uses ultrasonic waves to shake the water out.

The researchers have developed an ultrasonic device that vibrates at high frequency. When a water-harvesting material, known as a “sorbent,” is placed on the device, the device emits ultrasound waves that are tuned to shake water molecules out of the sorbent. The team found that the device recovers water in minutes, versus the tens of minutes or hours required by thermal designs.

Unlike heat-based designs, the device does require a power source. The team envisions that the device could be powered by a small solar cell, which could also act as a sensor to detect when the sorbent is full. It could also be programmed to automatically turn on whenever a material has harvested enough moisture to be extracted. In this way, a system could soak up and shake out water from the air over many cycles in a single day.

“People have been looking for ways to harvest water from the atmosphere, which could be a big source of water particularly for desert regions and places where there is not even saltwater to desalinate,” says Svetlana Boriskina, principal research scientist in MIT’s Department of Mechanical Engineering. “Now we have a way to recover water quickly and efficiently.”

Boriskina and her colleagues report on their new device in a study appearing today in the journal Nature Communications. The study’s first author is Ikra Iftekhar Shuvo, an MIT graduate student in media arts and sciences; co-authors include Carlos Díaz-Marín, Marvin Christen, Michael Lherbette, and Christopher Liem.

Precious hours

Boriskina’s group at MIT develops materials that interact with the environment in novel ways. Recently, her group explored atmospheric water harvesting (AWH), and ways that materials can be designed to efficiently absorb water from the air. The hope is that, if they can work reliably, AWH systems would be of most benefit to communities where traditional sources of drinking water — and even saltwater — are scarce.

Like other groups, Boriskina’s lab had generally assumed that an AWH system in the field would absorb moisture during the night, and then use the heat from the sun during the day to naturally evaporate the water and condense it for collection.

“Any material that’s very good at capturing water doesn’t want to part with that water,” Boriskina explains. “So you need to put a lot of energy and precious hours into pulling water out of the material.”

She realized there could be a faster way to recover water after Ikra Shuvo joined her group. Shuvo had been working with ultrasound for wearable medical device applications. When he and Boriskina considered ideas for new projects, they realized that ultrasound could be a way to speed up the recovery step in atmospheric water harvesting.

“It clicked: We have this big problem we’re trying to solve, and now Ikra seemed to have a tool that can be used to solve this problem,” Boriskina recalls.

Water dance

Ultrasonic waves are acoustic pressure waves that travel at frequencies above 20 kilohertz (20,000 cycles per second), too high for humans to hear. And, as the team found, ultrasound vibrates at just the right frequency to shake water out of a material.

“With ultrasound, we can precisely break the weak bonds between water molecules and the sites where they’re sitting,” Shuvo says. “It’s like the water is dancing with the waves, and this targeted disturbance creates momentum that releases the water molecules, and we can see them shake out in droplets.”

Shuvo and Boriskina designed a new ultrasonic actuator to recover water from an atmospheric water harvesting material. The heart of the device is a flat ceramic ring that vibrates when voltage is applied. This ring is surrounded by an outer ring that is studded with tiny nozzles. Water droplets that shake out of a material can fall through the nozzles and into collection vessels attached above and below the vibrating ring.

They tested the device on a previously designed atmospheric water harvesting material. Using quarter-sized samples of the material, the team first placed each sample in a humidity chamber, set to various humidity levels. Over time, the samples absorbed moisture and became saturated. The researchers then placed each sample on the ultrasonic actuator and powered it on to vibrate at ultrasonic frequencies. In all cases, the device was able to shake out enough water to dry out each sample in just a few minutes.

The researchers calculate that, compared to using heat from the sun, the ultrasonic design is 45 times more efficient at extracting water from the same material.

“The beauty of this device is that it’s completely complementary and can be an add-on to almost any sorbent material,” says Boriskina, who envisions that a practical household system might consist of a fast-absorbing material and an ultrasonic actuator, each about the size of a window. Once the material is saturated, the actuator would briefly turn on, powered by a solar cell, to shake out the water. The material would then be ready to harvest more water, in multiple cycles throughout a single day.

“It’s all about how much water you can extract per day,” she says. “With ultrasound, we can recover water quickly, and cycle again and again. That can add up to a lot per day.”

This work was supported, in part, by the MIT Abdul Latif Jameel Water and Food Systems Lab and the MIT-Israel Zuckerman STEM Fund.

This work was carried out in part by using MIT.nano and ISN facilities at MIT.


Bigger datasets aren’t always better

MIT researchers developed a way to identify the smallest dataset that guarantees optimal solutions to complex problems.


Determining the least expensive path for a new subway line underneath a metropolis like New York City is a colossal planning challenge — involving thousands of potential routes through hundreds of city blocks, each with uncertain construction costs. Conventional wisdom suggests extensive field studies across many locations would be needed to determine the costs associated with digging below certain city blocks.

Because these studies are costly to conduct, a city planner would want to perform as few as possible while still gathering the most useful data for making an optimal decision.

With almost countless possibilities, how would they know where to start?

A new algorithmic method developed by MIT researchers could help. Their mathematical framework provably identifies the smallest dataset that guarantees finding the optimal solution to a problem, often requiring fewer measurements than traditional approaches suggest.

In the case of the subway route, this method considers the structure of the problem (the network of city blocks, construction constraints, and budget limits) and the uncertainty surrounding costs. The algorithm then identifies the minimum set of locations where field studies would guarantee finding the least expensive route. The method also identifies how to use this strategically collected data to find the optimal decision.

This framework applies to a broad class of structured decision-making problems under uncertainty, such as supply chain management or electricity network optimization.

“Data are one of the most important aspects of the AI economy. Models are trained on more and more data, consuming enormous computational resources. But most real-world problems have structure that can be exploited. We’ve shown that with careful selection, you can guarantee optimal solutions with a small dataset, and we provide a method to identify exactly which data you need,” says Asu Ozdaglar, Mathworks Professor and head of the MIT Department of Electrical Engineering and Computer Science (EECS), deputy dean of the MIT Schwarzman College of Computing, and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).

Ozdaglar, co-senior author of a paper on this research, is joined by co-lead authors Omar Bennouna, an EECS graduate student, and his brother Amine Bennouna, a former MIT postdoc who is now an assistant professor at Northwestern University; and co-senior author Saurabh Amin, co-director of the Operations Research Center, a professor in the MIT Department of Civil and Environmental Engineering, and a principal investigator in LIDS. The research will be presented at the Conference on Neural Information Processing Systems.

An optimality guarantee

Much of the recent work in operations research focuses on how to best use data to make decisions, but this assumes these data already exist.

The MIT researchers started by asking a different question — what are the minimum data needed to optimally solve a problem? With this knowledge, one could collect far fewer data to find the best solution, spending less time, money, and energy conducting experiments and training AI models.

The researchers first developed a precise geometric and mathematical characterization of what it means for a dataset to be sufficient. Every possible set of costs (travel times, construction expenses, energy prices) makes some particular decision optimal. These “optimality regions” partition the decision space. A dataset is sufficient if it can determine which region contains the true cost.

This characterization forms the foundation of the practical algorithm they developed, which identifies datasets that guarantee finding the optimal solution.

Their theoretical exploration revealed that a small, carefully selected dataset is often all one needs.

“When we say a dataset is sufficient, we mean that it contains exactly the information needed to solve the problem. You don’t need to estimate all the parameters accurately; you just need data that can discriminate between competing optimal solutions,” says Amine Bennouna.

Building on these mathematical foundations, the researchers developed an algorithm that finds the smallest sufficient dataset.

Capturing the right data

To use this tool, one inputs the structure of the task, such as the objective and constraints, along with the information they know about the problem.

For instance, in supply chain management, the task might be to reduce operational costs across a network of dozens of potential routes. The company may already know that some shipment routes are especially costly, but lack complete information on others.

The researchers’ iterative algorithm works by repeatedly asking, “Is there any scenario that would change the optimal decision in a way my current data can’t detect?” If yes, it adds a measurement that captures that difference. If no, the dataset is provably sufficient.
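The loop described above can be illustrated in a toy setting. This is a minimal sketch assuming interval-bounded costs, noiseless point measurements, and distinct true costs; the researchers’ actual algorithm and problem structure are more general.

```python
# Toy sketch of the "measure until sufficient" loop, under the simplifying
# assumptions of interval uncertainty on each decision's cost and exact,
# noiseless measurements; the real algorithm is more general.

def possibly_optimal(intervals, measured):
    """Indices of decisions that could still be cheapest, given interval
    bounds (low, high) per decision; a measurement collapses an interval."""
    lows = [measured.get(i, lo) for i, (lo, hi) in enumerate(intervals)]
    highs = [measured.get(i, hi) for i, (lo, hi) in enumerate(intervals)]
    best_guaranteed = min(highs)  # some decision is certain to cost at most this
    return [i for i in range(len(intervals)) if lows[i] <= best_guaranteed]

def sufficient_measurements(intervals, true_costs):
    """Measure until no remaining scenario can change the optimal decision."""
    measured = {}
    while len(possibly_optimal(intervals, measured)) > 1:
        # Two scenarios consistent with current data still disagree on the
        # best decision, so probe the widest still-ambiguous interval.
        candidates = [i for i in possibly_optimal(intervals, measured)
                      if i not in measured]
        i = max(candidates, key=lambda j: intervals[j][1] - intervals[j][0])
        measured[i] = true_costs[i]  # the "field study" reveals the true cost
    return measured

# Three candidate routes with uncertain costs; one measurement settles it.
intervals = [(1, 5), (2, 3), (6, 9)]
measured = sufficient_measurements(intervals, true_costs=[4, 2.5, 7])
```

Here a single measurement of the first route’s cost (4) certifies that the second route, whose cost can be at most 3, is cheapest, even though the second and third routes are never measured. That mirrors the key point: far fewer measurements than unknowns can still guarantee the optimal decision.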

This algorithm pinpoints the subset of locations that need to be explored to guarantee finding the minimum-cost solution.

Then, after collecting those data, the user can feed them to another algorithm the researchers developed which finds that optimal solution. In this case, that would be the shipment routes to include in a cost-optimal supply chain.

“The algorithm guarantees that, for whatever scenario could occur within your uncertainty, you’ll identify the best decision,” Omar Bennouna says.

The researchers’ evaluations revealed that, using this method, it is possible to guarantee an optimal decision with a much smaller dataset than would typically be collected.

“We challenge this misconception that small data means approximate solutions. These are exact sufficiency results with mathematical proofs. We’ve identified when you’re guaranteed to get the optimal solution with very little data — not probably, but with certainty,” Amin says.

In the future, the researchers want to extend their framework to other types of problems and more complex situations. They also want to study how noisy observations could affect dataset optimality.

“I was impressed by the work’s originality, clarity, and elegant geometric characterization. Their framework offers a fresh optimization perspective on data efficiency in decision-making,” says Yao Xie, the Coca-Cola Foundation Chair and Professor at Georgia Tech, who was not involved with this work.


Four from MIT named 2026 Rhodes Scholars

Vivian Chinoda ’25, Alice Hall, Sofia Lara, and Sophia Wang ’24 will begin postgraduate studies at Oxford University next fall.


Vivian Chinoda ’25, Alice Hall, Sofia Lara, and Sophia Wang ’24 have been selected as 2026 Rhodes Scholars and will begin fully funded postgraduate studies at the University of Oxford in the U.K. next fall. Hall, Lara, and Wang are U.S. Rhodes Scholars; Chinoda was awarded the Rhodes Zimbabwe Scholarship.

The scholars were supported by Associate Dean Kim Benard and the Distinguished Fellowships team in Career Advising and Professional Development. They received additional mentorship and guidance from the Presidential Committee on Distinguished Fellowships.

“MIT students never cease to amaze us with their creativity, vision, and dedication,” says Professor Taylor Perron, who co-chairs the committee along with Professor Nancy Kanwisher. “This is especially true of this year’s Rhodes scholars. It’s remarkable how they are simultaneously so talented in their respective fields and so adept at communicating their goals to the world. I look forward to seeing how these outstanding young leaders shape the future. It’s an honor to work with such talented students.”

Vivian Chinoda ’25

Vivian Chinoda, from Harare, Zimbabwe, was named a Rhodes Zimbabwe Scholar on Oct. 10. Chinoda graduated this spring with a BS in business analytics. At Oxford, she hopes to pursue the MSc in social data science and a master’s degree in public policy. Chinoda aims to foster economic development and equitable resource access for Zimbabwean communities by promoting social innovation and evidence-based policy.

At MIT, Chinoda researched the impacts of the EU’s General Data Protection Regulation on stakeholders and key indicators, such as innovation, with the Institute for Data, Systems, and Society. She supported the Digital Humanities Lab and MIT Ukraine in building a platform to connect and fundraise for exiled Ukrainian scientists. With the MIT Office of Sustainability, Chinoda co-led the plan for a campus transition to a fully electric vehicle fleet, advancing the Institute’s Climate Action Plan.

Chinoda’s professional experience includes roles as a data science and research intern at Adaviv (a controlled-environment agriculture startup) and a product manager at Red Hat, developing AI tools for open-source developers.

Beyond academics, Chinoda served as first-year outreach chair and vice president of the African Students’ Association, where she co-founded the Impact Fund, raising over $30,000 to help members launch social impact initiatives in their countries. She was a scholar in the Social and Ethical Responsibilities of Computing (SERC) program, studying big-data ethics across sectors like criminal justice and health care, and a PKG social impact internship participant. Chinoda also enjoys fashion design, which she channeled into reviving the MIT Black Theatre Guild, earning her the 2025 Laya and Jerome B. Wiesner Student Art Award.

Alice Hall

Alice Hall is a senior from Philadelphia studying chemical engineering with a minor in Spanish. At Oxford, she will earn a DPhil in engineering, focusing on scaling sustainable heating and cooling technologies. She is passionate about bridging technology, leadership, and community to address the climate crisis.

Hall’s research journey began in the Lienhard Group, developing computational and techno-economic models of electrodialysis for nutrient reclamation from brackish groundwater. She then worked in the Langer Lab, investigating alveolar-capillary barrier function to enhance lung viability for transplantation. During a summer in Madrid, she collaborated with the European Space Agency to optimize surface treatments for satellite materials.

Hall’s current research in the Olivetti Group, as part of the MIT Climate Project, examines the manufacturing scalability of early-stage clean energy solutions. Hall has gained industry experience through internships with Johnson & Johnson and Procter & Gamble.

Hall represents the student body as president of MIT’s Undergraduate Association. She also serves on the Presidential Advisory Cabinet, the executive boards of the Chemical Engineering Undergraduate Student Advisory Board and MIT’s chapter of the American Institute of Chemical Engineers, the Corporation Joint Advisory Committee, the Compton Lectures Advisory Committee, and the MIT Alumni Association Board of Directors as an invited guest.

She is an active member of the Gordon-MIT Engineering Leadership Program, the Black Students’ Union, and the National Society of Black Engineers. As a member of the varsity basketball team, she earned both NEWMAC and D3hoops.com Region 2 Rookie of the Year honors in 2023.

Sofia Lara

Hailing from Los Angeles, Sofia Lara is a senior majoring in biological engineering with a minor in Spanish. As a Rhodes Scholar at Oxford, she will pursue a DPhil in clinical medicine, leveraging UK Biobank data to develop sex-stratified dosing protocols and safety guidelines for the NHS.

Lara aspires to transform biological complexity from medicine’s blind spots into a therapeutic superpower where variability reveals hidden possibilities and precision medicine becomes truly precise.

At the Broad Institute of MIT and Harvard, Lara investigates the cGAS-STING immune pathway in cancer. Her thesis, a comprehensive genome-wide association study illuminating the role of STING variation in disease pathology, aims to expand understanding of STING-linked immune disorders.

Lara co-founded the MIT-Harvard Future of Biology Conference, convening multidisciplinary researchers to interrogate vulnerabilities in cancer biology. As president of MIT Baker House, she steered community initiatives and executed the legendary Piano Drop, mobilizing hundreds of students in an enduring ritual of collective resilience. Lara captains the MIT Archery Team, serves as music director for the MIT Catholic Community, and channels empathy through hand-crocheted octopuses for pediatric patients at Massachusetts General Hospital.

Sophia Wang ’24

Sophia Wang, from Woodbridge, Connecticut, graduated with a BS in aerospace engineering and a concentration in the design of highly autonomous systems. At Oxford, she will pursue an MSc in mathematical and theoretical physics, followed by an MSc in global governance and diplomacy.

As an undergraduate, Wang conducted research with the MIT Space Telecommunications Astronomy Radiation (STAR) Lab and the MIT Media Lab’s Tangible Media Group and Center for Bits and Atoms. She also interned at the NASA Jet Propulsion Laboratory, working on engineering projects for exoplanet detection missions, the Mars Sample Return mission, and terrestrial proofs-of-concept for self-assembly in space.

Since graduating from MIT, Wang has been engaged in a number of projects. In Bhutan, she contributes to national technology policy centered on mindful development. In Japan, she is a founding researcher at the Henkaku Center, where she is creating an international network of academic institutions. As a venture capitalist, she recently worked with commercial space station developers on the effort to replace the International Space Station, which is slated to be decommissioned in 2030. Wang’s creative prototyping tools, such as a modular electromechanical construction kit, are used worldwide through the Fab Foundation, a network of 2,500+ community digital fabrication labs.

An avid cook, Wang teamed up with friends to create Mince, a pop-up restaurant that serves fine-dining meals to MIT students. Through MIT Global Teaching Labs, Wang taught STEM courses in Kazakhstan and Germany, and she taught digital fabrication and 3D printing workshops across the U.S. as a teacher and cyclist with MIT Spokes.


MIT Haystack scientists study recent geospace storms and resulting light shows

Solar maximum occurred within the past year — good news for aurora watchers, as the most active period for displays at New England latitudes occurs in the three years following solar maximum.


The northern lights, or aurora borealis, one of nature's most spectacular visual shows, can be elusive. Conventional wisdom says that to see them, we need to travel to northern Canada or Alaska. However, in the past two years, New Englanders have been seeing these colorful atmospheric displays on a few occasions — including this week — from the comfort of their backyards, as auroras have been visible in central and southern New England and beyond. These unusual auroral events have been driven by increased space weather activity, a phenomenon studied by a team of MIT Haystack Observatory scientists.

Auroral events are generated when particles in space are energized by complicated processes in the near-Earth environment, following which they interact with gases high up in the atmosphere. Space weather events such as coronal mass ejections, in which large amounts of material are ejected from our sun, along with geomagnetic storms, greatly increase energy input into those space regions near Earth. These inputs then trigger other processes that cause an increase in energetic particles entering our atmosphere. 

The result is variable colorful lights when the newly energized particles crash into atoms and molecules high above Earth's surface. Recent significant geomagnetic storm events have triggered these auroral displays at latitudes lower than normal — including sightings across New England and other locations across North America.

New England has been enjoying more of these spectacular light shows, such as this week's displays and those during the intense geomagnetic solar storms in May and October 2024, because of increased space weather activity.

Research has determined that auroral displays occur when selected atoms and molecules high in the upper atmosphere are excited by incoming charged particles, which are boosted in energy by intense solar activity. The most common auroral display colors are pink/red and green, with colors varying according to the altitude at which these reactions occur. Red auroras come from lower-energy particles exciting neutral oxygen and cause emissions at altitudes above 150 miles. Green auroras come from higher-energy particles exciting neutral oxygen and cause emissions at altitudes below 150 miles. Rare purple and blue auroras come from excited molecular nitrogen ions and occur during the most intense events.

Scientists measure the magnitude of geomagnetic activity driving auroras in several different ways. One of these uses sensitive magnetic field-measuring equipment at stations around the planet to obtain a geomagnetic storm measurement known as Kp, reported on a scale from 0 (least activity) to 9 (greatest activity) in three-hour intervals. Higher Kp values indicate the possibility — not a guarantee — of greater auroral sightings as the location of auroral displays moves to lower latitudes. Typically, a Kp index of 6 or higher indicates that aurora viewings are more likely outside the usual northern ranges. The geomagnetic storm events of this week reached a Kp value of 9, indicating very strong activity in the sun–Earth system.
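As a simple illustration of those thresholds, a helper function (the name and category wording here are ours, not an official product) might bucket a Kp reading as described above:

```python
def midlatitude_aurora_outlook(kp: int) -> str:
    """Rough viewing outlook for New England latitudes, using the
    thresholds described above. Higher Kp means displays are possible
    farther south -- a possibility, not a guarantee."""
    if not 0 <= kp <= 9:
        raise ValueError("the Kp index runs from 0 to 9")
    if kp >= 6:
        return "storm conditions: aurora possible at New England latitudes"
    return "displays likely confined to far northern latitudes"

# this week's events reached Kp 9, the top of the scale
print(midlatitude_aurora_outlook(9))
```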

At MIT Haystack Observatory in Westford, Massachusetts, geospace and atmospheric physics scientists study the atmosphere and its aurora year-round by combining observations from many different instruments. These include ground-based sensors — including large upper-atmosphere radars that bounce signals off particles in the ionosphere — as well as data from space satellites. These tools provide key information, such as density, temperature, and velocity, on conditions and disturbances in the upper atmosphere: basic information that helps researchers at MIT and elsewhere understand the weather in space. 

Haystack geospace research is primarily funded by U.S. federal agencies such as the National Science Foundation (NSF) and NASA. This work is crucial for our increasingly spacefaring civilization, which requires continual expansion of our understanding of how space weather affects life on Earth, including vital navigation systems such as GPS, worldwide communication infrastructure, and the safety of our power grids. Research in this area is especially important in modern times, as humans increasingly use low Earth orbit for commercial satellite constellations and other systems, and as civilization further progresses into space.

Studies of the variations in our atmosphere and its charged component, known as the ionosphere, have revealed the strong influence of the sun. Beyond the normal white light that we experience each day, the sun also emits many other wavelengths of light, from infrared to extreme ultraviolet. Of particular interest are the extreme ultraviolet portions of solar output, which have enough energy to ionize atoms in the upper atmosphere. Unlike its white light component, the sun's output at these very short wavelengths has many different short- and long-term variations, but the most well known is the approximately 11-year solar cycle, in which the sun goes from minimum to maximum output. 

Scientists have determined that the most recent peak in activity, known as solar maximum, occurred within the past 12 months. This is good news for auroral watchers, as the most active period for severe geomagnetic storms that drive auroral displays at New England latitudes occurs during the three-year period following solar maximum.

Despite intensive research to date, we still have a great deal more to learn about space weather and its effects on the near-Earth environment. MIT Haystack Observatory continues to advance knowledge in this area. 

Larisa Goncharenko, lead geospace scientist and assistant director at Haystack, states, "In general, understanding space weather well enough to forecast it is considerably more challenging than even normal weather forecasting near the ground, due to the vast distances involved in space weather forces. Another important factor comes from the combined variation of Earth's neutral atmosphere, affected by gravity and pressure, and from the charged particle portion of the atmosphere, created by solar radiation and additionally influenced by the geometry of our planet's magnetic field. The complex interplay between these elements provides rich complexity and a sustained, truly exciting scientific opportunity to improve our understanding of basic physics in this vital part of our home in the solar system, for the benefit of civilization."

For up-to-date space weather forecasts and predictions of possible aurora events, visit SpaceWeather.com or NOAA's Aurora Viewline site.


MIT startup aims to expand America’s lithium production

Lithios, founded by Mo Alkhadra PhD ’22 and Professor Martin Bazant, is scaling up an electrochemical lithium extraction technology to secure supply chains of the critical metal.


China dominates the global supply of lithium. The country processes about 65 percent of the battery material and has imposed on-again, off-again export restrictions on lithium-based products critical to the economy.

Fortunately, the U.S. has significant lithium reserves, most notably in the form of massive underground brines across south Arkansas and east Texas. But recovering that lithium through conventional techniques would be an energy-intensive and environmentally damaging proposition — if it were profitable at all.

Now, the startup Lithios, founded by Mo Alkhadra PhD ’22 and Martin Z. Bazant, the Chevron Chair Professor of Chemical Engineering, is commercializing a new process of lithium recovery it calls Advanced Lithium Extraction. The company uses electricity to drive a reaction with electrode materials that capture lithium from salty brine water, leaving behind other impurities.

Lithios says its process is more selective and efficient than other direct lithium-extraction techniques being developed. It also represents a far cleaner and less energy-intensive alternative to mining and the solar evaporative ponds that are used to extract lithium from underground brines in the high deserts of South America.

Lithios has been continuously running a pilot system extracting lithium from brine waters from around the world since June. It also recently shipped an early version of its system to a commercial partner scaling up operations in Arkansas.

With the core technology of its modular systems largely validated, next year Lithios plans to begin operating a larger version capable of producing 10 to 100 tons of lithium carbonate per year. From there, the company plans to build a commercial facility that will be able to produce 25,000 tons of lithium carbonate each year. That would represent a massive increase in the total lithium production of the U.S., which is currently limited to less than 5,000 tons per year.
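A quick back-of-envelope check of that claim, using only the figures stated above (the variable names are ours):

```python
# Figures from the article; "current" uses the stated 5,000-ton upper bound
# on today's total annual U.S. lithium carbonate production.
current_us_tons_per_year = 5_000
planned_facility_tons_per_year = 25_000

# the planned commercial facility alone would be at least 5x current output
print(planned_facility_tons_per_year / current_us_tons_per_year)  # → 5.0
```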

“There’s been a big push recently, and especially in the last year, to secure domestic supplies of lithium and break away from the Chinese chokehold on the critical mineral supply chain,” Alkhadra says. “We have an abundance of lithium deposits at our disposal in the U.S., but we lack the tools to turn those resources into value.”

Adapting a technology

Bazant realized the need for new approaches to mining lithium while working with battery companies through his lab in MIT’s Department of Chemical Engineering. His group has studied battery materials and electrochemical separation for decades.

As part of his PhD in Bazant’s lab, Alkhadra studied electrochemical processes for separation of dissolved metals, with a focus on removing lead from drinking water and treating industrial wastewater. As Alkhadra got closer to graduation, he and Bazant looked at the most promising commercial applications for his work.

It was 2021, and lithium prices were in the midst of a historic spike driven by the metal’s importance in batteries.

Today, lithium comes primarily from mining or through a slow evaporative process that uses miles of surface ponds to refine and recover lithium from wastewater. Both are energy-intensive and damaging to the environment. They are also dominated by Chinese companies and supply chains.

“A lot of hard rock mining is done in Australia, but most of the rock is shipped as a concentrate to China for refining because they’re the ones who have the technology,” Bazant explains.

Other direct lithium-extraction methods use chemicals and filters, but the founders say those methods struggle to be profitable with U.S. lithium reserves, which have low concentrations of lithium and high levels of impurities.

“Those methods work when you have a good grade of lithium brine, but they become increasingly uneconomical as you get lower-quality resources, which is exactly what the industry is going through right now,” Alkhadra says. “The evaporative process has a huge footprint — we’re talking about the size of Manhattan island for a single project. Conveniently, recovering minerals from those low concentrations was the essence of my PhD work at MIT. We simply had to adapt the technology to the new use case.”

While conducting early talks with potential customers, Alkhadra received guidance from MIT’s Venture Mentoring Service, the MIT Sandbox Innovation Fund, and the Massachusetts Clean Energy Center. Lithios officially formed when he completed his PhD in 2022 and received the Activate Fellowship. Lithios grew at The Engine, an MIT startup incubator, before moving to its pilot and manufacturing facility in Medford, Massachusetts, in 2024.

Today, Lithios uses an undisclosed electrode material that attaches to lithium when exposed to precise voltages.

“Think of a big battery with water flowing into the system,” Alkhadra explains. “When the brine comes into contact with our electrodes, it selectively pulls lithium while rejecting all the other contaminants. When the lithium has been loaded onto our capture materials, we can simply change the direction of the electrical current to release the lithium back into a clean water stream. It’s similar to charging and discharging a battery.”
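The capture-and-release cycle Alkhadra describes can be sketched numerically. All concentrations and selectivity values below are illustrative inventions for the sketch, not Lithios data:

```python
# Toy model of one electrochemical capture/release cycle. Numbers are made
# up for illustration; real brines and electrode selectivities differ.
brine = {"Li": 100.0, "Na": 5000.0, "Ca": 800.0}           # feed, mg/L
capture_fraction = {"Li": 0.90, "Na": 0.001, "Ca": 0.001}  # per pass

def capture(feed):
    """Forward current: the electrode preferentially binds lithium."""
    return {ion: conc * capture_fraction[ion] for ion, conc in feed.items()}

def release(loaded):
    """Reversed current: everything on the electrode enters clean water."""
    return dict(loaded)

product = release(capture(brine))
purity = product["Li"] / sum(product.values())
print(f"Li fraction in product stream: {purity:.1%}")
```

Even with these made-up numbers, a single selective pass turns a feed that is roughly 2 percent lithium into a product stream that is overwhelmingly lithium, which is the point of the charge/discharge analogy.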

Bazant says the company’s lithium-absorbing materials are an ideal fit for this application.

“One of the main challenges of using battery electrodes to extract lithium is how to complete the system,” Bazant says. “We have a great lithium-extraction material that is very stable in water and has wonderful performance. We also learned how to formulate both electrodes with controlled ion transport and mixing to make the process much more efficient and low cost.”

Growing in the ‘MIT spirit’

A U.S. Geological Survey study last year showed that the underground Smackover Formation contains between 5 and 19 million tons of lithium in southwest Arkansas alone.

“If you just estimate how much lithium is in that region based on today’s prices, it’s about $2 trillion worth of lithium that can’t be accessed,” Bazant says. “If you could extract these resources efficiently, it would make a huge impact.”

Earlier this year, Lithios shipped its pilot system to a commercial partner in Arkansas to further validate its approach in the region. Lithios also plans to deploy several additional pilot and demonstration projects with other major partners in the oil and gas and mining industries in the coming years.

“After this field deployment, Lithios will quickly scale toward a commercial demonstration plant that will be operational by 2027, with the intent to scale to a kiloton-per-year commercial facility before the end of the decade,” Alkhadra says.

Although Lithios is currently focused on lithium, Bazant says the company’s approach could also be adapted to materials such as rare earth elements and transition metals further down the line.

“We’re developing a unique technology that could make the U.S. the center of the world for critical minerals separation, and we couldn’t have done this anywhere else,” Bazant says. “MIT was the perfect environment, mainly because of the people. There are so many fantastic scientists and businesspeople in the MIT ecosystem who are very technically savvy and ready to jump into a project like this. Our first employees were all MIT people, and they really brought the MIT spirit to our company.”


How drones are altering contemporary warfare

A new book by scholar and military officer Erik Lin-Greenberg examines the evolving dynamics of military and state action centered around drones.


In recent months, Russia has frequently flown drones into NATO territory, where NATO countries typically try to shoot them down. By contrast, when three Russian fighter jets made an incursion into Estonian airspace in September, they were intercepted and no attempt was made to shoot them down — although the incident did make headlines and led to a Russian diplomat being expelled from Estonia.

Those incidents follow a global pattern of recent years. Drone operations, to this point, seem to provoke different responses compared to other kinds of military action, especially the use of piloted warplanes. Drone warfare is expanding but not necessarily provoking major military responses, either by the countries being attacked or by the aggressor countries that have drones shot down.

“There was a conventional wisdom that drones were a slippery slope that would enable leaders to use force in all kinds of situations, with a massively destabilizing effect,” says MIT political scientist Erik Lin-Greenberg. “People thought if drones were used all over the place, this would lead to more escalation. But in many cases where drones are being used, we don’t see that escalation.”

On the other hand, drones have made military action more pervasive. It is at least possible that in the future, drone-oriented combat will be both more common and more self-contained.

“There is a revolutionary effect of these systems, in that countries are essentially increasing the range of situations in which leaders are willing to deploy military force,” Lin-Greenberg says. To this point, though, he adds, “these confrontations are not necessarily escalating.”

Now Lin-Greenberg examines these dynamics in a new book, “The Remote Revolution: Drones and Modern Statecraft,” published by Cornell University Press. Lin-Greenberg is an associate professor in MIT’s Department of Political Science.

Lin-Greenberg brings a distinctive professional background to the subject of drone warfare. Before returning to graduate school, he served as a U.S. Air Force officer; today he commands a U.S. Air Force reserve squadron. His thinking is informed by his experiences as both a scholar and practitioner.

“The Remote Revolution” also has a distinctive methodology that draws on multiple ways of studying the topic. In writing the book, Lin-Greenberg conducted experiments based on war games played by national security professionals; conducted surveys of expert and public thinking about drones; developed in-depth case studies from history; and dug into archives broadly to fully understand the history of drone use, which in fact goes back several decades.

The book’s focus is drone use during the 2000s, as the technology has become more readily available; today about 100 countries have access to military drones. Many have used them during tensions and skirmishes with other countries.

“Where I argue this is actually revolutionary is during periods of crises, which fall below the threshold of war, in that these new technologies take human operators out of harm’s way and enable states to do things they wouldn’t otherwise do,” Lin-Greenberg says.

Indeed, a key point is that drones lower the costs of military action for countries — and not just financial costs, but human and political costs, too. Incidents and problems that might plague leaders if they involved military personnel, forcing major responses, seem to lessen when drones are involved.

“Because these systems don’t have a human on board, they’re inherently cheaper and different in the minds of decision-makers,” Lin-Greenberg says. “That means they’re willing to use these systems during disputes, and if other states are shooting them down, the side sending them is less likely to retaliate, because they’re losing a machine but not a man or woman on board.”

In this sense, the uses of drones “create new rungs on the escalation ladder,” as Lin-Greenberg writes in the book. Drone incidents don’t necessarily lead to wider military action, and may not even lead to the same kinds of international relations issues as incidents involving piloted aircraft.

Consider a counterfactual that Lin-Greenberg raises in the book. One of the most notorious episodes of Cold War tension between the U.S. and U.S.S.R. occurred in 1960, when U.S. pilot Gary Powers was shot down and captured in the Soviet Union, leading to a diplomatic standoff and a canceled summit between U.S. President Dwight Eisenhower and Soviet leader Nikita Khrushchev.

“Had that been a drone, it’s very likely the summit would have continued,” Lin-Greenberg says. “No one would have said anything. The Soviet Union would have been embarrassed to admit their airspace was violated and the U.S. would have just [publicly] ignored what was going on, because there would not have been anyone sitting in a prison. There are a lot of exercises where you can ask how history could have been different.”

None of this is to say that drones present straightforward solutions to international relations problems. They may present the appearance of low-cost military engagement, but as Lin-Greenberg underlines in the book, the effects are more complicated.

“To be clear, the remote revolution does not suggest that drones prevent war,” Lin-Greenberg writes. Indeed, one of the problems they raise, he emphasizes, is the “moral hazard” that arises from leaders viewing drones as less costly, which can lead to even more military confrontations.

Moreover, the trends in drone warfare so far yield predictions for the future that are “probabilistic rather than deterministic,” as Lin-Greenberg writes. Perhaps some political or military leaders will start to use drones to attack new targets that will inevitably generate major responses and quickly escalate into broad wars. Current trends do not guarantee future outcomes.

“There are a lot of unanswered questions in this area,” Lin-Greenberg says. “So much is changing. What does it look like when more drones are more autonomous? I still hope this book lays a foundation for future discussions, even as drones are used in different ways.”

Other scholars have praised “The Remote Revolution.” Joshua Kertzer, a professor of international studies and government at Harvard University, has hailed Lin-Greenberg’s “rich expertise, methodological rigor, and creative insight,” while Michael Horowitz, a political scientist and professor of international relations at the University of Pennsylvania, has called it “an incredible book about the impact of drones on the international security environment.”

For his part, Lin-Greenberg says, “My hope is the book will be read by academics and practitioners and people who choose to focus on parts of it they’re interested in. I tried to write the book in a way that’s approachable.”

Publication of the book was supported by funding from MIT’s Security Studies Program. 


New lightweight polymer film can prevent corrosion

Because it’s nearly impermeable to gases, the polymer coating developed by MIT engineers could be used to protect solar panels, machinery, infrastructure, and more.


MIT researchers have developed a lightweight polymer film that is nearly impenetrable to gas molecules, raising the possibility that it could be used as a protective coating to prevent solar cells and other infrastructure from corrosion, and to slow the aging of packaged food and medicines.

The polymer, which can be applied as a film mere nanometers thick, completely repels nitrogen and other gases, as far as can be detected by laboratory equipment, the researchers found. That degree of impermeability has never been seen before in any polymer, and rivals the impermeability of molecularly thin crystalline materials such as graphene.

“Our polymer is quite unusual. It’s obviously produced from a solution-phase polymerization reaction, but the product behaves like graphene, which is gas-impermeable because it’s a perfect crystal. However, when you examine this material, one would never confuse it with a perfect crystal,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT.

The polymer film, which the researchers describe today in Nature, is made using a process that can be scaled up to large quantities and applied to surfaces much more easily than graphene.

Strano and Scott Bunch, an associate professor of mechanical engineering at Boston University, are the senior authors of the new study. The paper’s lead authors are Cody Ritt, a former MIT postdoc who is now an assistant professor at the University of Colorado at Boulder; Michelle Quien, an MIT graduate student; and Zitang Wei, an MIT research scientist.

Bubbles that don’t collapse

Strano’s lab first reported the novel material — a two-dimensional polymer called a 2D polyaramid that self-assembles into molecular sheets using hydrogen bonds — in 2022. To create such 2D polymer sheets, which had never been done before, the researchers used a building block called melamine, which contains a ring of carbon and nitrogen atoms. Under the right conditions, these monomers can expand in two dimensions, forming nanometer-sized disks. These disks stack on top of each other, held together by hydrogen bonds between the layers, which make the structure very stable and strong.

That polymer, which the researchers call 2DPA-1, is stronger than steel but has only one-sixth the density of steel.

In their 2022 study, the researchers focused on testing the material’s strength, but they also did some preliminary studies of its gas permeability. For those studies, they created “bubbles” out of the films and filled them with gas. With most polymers, such as plastics, gas that is trapped inside will seep out through the material, causing the bubble to deflate quickly.

However, the researchers found that bubbles made of 2DPA-1 did not collapse — in fact, bubbles that they made in 2021 are still inflated. “I was quite surprised initially,” Ritt says. “The behavior of the bubbles didn’t follow what you’d expect for a typical, permeable polymer. This required us to rethink how to properly study and understand molecular transport across this new material.”  

“We set up a series of careful experiments to first prove that the material is molecularly impermeable to nitrogen,” Strano says. “It could be considered tedious work. We had to make micro-bubbles of the polymer and fill them with a pure gas like nitrogen, and then wait. We had to repeatedly check over an exceedingly long period of time that they weren’t collapsed, in order to report the record impermeability value.”

Traditional polymers allow gases through because they consist of a tangle of spaghetti-like molecules that are loosely joined together. This leaves tiny gaps between the strands. Gas molecules can seep through these gaps, which is why polymers always have at least some degree of gas permeability.

However, the new 2D polymer is essentially impermeable because of the way that the layers of disks stick to each other.

“The fact that they can pack flat means there’s no volume between the two-dimensional disks, and that’s unusual. With other polymers, there’s still space between the one-dimensional chains, so most polymer films allow at least a little bit of gas to get through,” Strano says.
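The practical consequence of that mechanism can be sketched with a back-of-the-envelope calculation. The numbers below are illustrative, not the paper's measured data: the sketch uses the standard solution-diffusion relation (flux proportional to permeability times pressure drop over thickness) to show why cutting permeability by a factor of 10,000 multiplies a micro-bubble's lifetime by the same factor.

```python
# Illustrative estimate (not the paper's data): how fast a gas-filled
# micro-bubble deflates through a polymer film, using the solution-diffusion
# relation  flux = permeability * pressure_drop / thickness.

def deflation_time_seconds(permeability_barrer, thickness_nm,
                           radius_um, overpressure_kpa):
    """Rough time for a spherical micro-bubble to lose its excess gas."""
    barrer = 3.35e-16          # mol * m / (m^2 * s * Pa) per barrer
    P = permeability_barrer * barrer
    L = thickness_nm * 1e-9    # film thickness in meters
    r = radius_um * 1e-6       # bubble radius in meters
    dp = overpressure_kpa * 1e3

    # Moles of excess gas in the bubble (ideal gas, room temperature).
    R, T = 8.314, 298.0
    volume = (4.0 / 3.0) * 3.14159 * r**3
    n_excess = dp * volume / (R * T)

    # Molar flow out through the bubble's surface area.
    area = 4.0 * 3.14159 * r**2
    flow = P * dp / L * area   # mol / s
    return n_excess / flow

# A conventional polymer (permeability of order 1 barrer) empties this toy
# 10-micron bubble in hours; dividing permeability by 10,000, the factor
# reported for 2DPA-1 relative to other polymers, multiplies the lifetime
# by the same factor.
t_ordinary = deflation_time_seconds(1.0, 60, 10, 100)
t_2d = deflation_time_seconds(1e-4, 60, 10, 100)
print(t_2d / t_ordinary)   # ratio ~1e4, the permeability ratio
```

Because deflation time scales inversely with permeability, the three-year-old bubbles in the lab are exactly what this scaling predicts.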

George Schatz, a professor of chemistry and chemical and biological engineering at Northwestern University, described the results as “remarkable.”

“Normally polymers are reasonably permeable to gases, but the polyaramids reported in this paper are orders of magnitude less permeable to most gases under conditions with industrial relevance,” says Schatz, who was not involved in the study.

A protective coating

In addition to nitrogen, the researchers also exposed the polymer to helium, argon, oxygen, methane, and sulfur hexafluoride. They found that 2DPA-1’s permeability to those gases was at least 1/10,000 that of any other existing polymer. That makes it nearly as impermeable as graphene, which is completely impermeable to gases because of its defect-free crystalline structure.

Scientists have been working on developing graphene coatings as a barrier to prevent corrosion in solar cells and other devices. However, scaling up the creation of graphene films is difficult, in large part because they can’t be simply painted onto surfaces.

“We can only make crystal graphene in very small patches,” Strano says. “A little patch of graphene is molecularly impermeable, but it doesn’t scale. People have tried to paint it on, but graphene does not stick to itself but slides when sheared. Graphene sheets moving past each other are considered almost frictionless.”

On the other hand, the 2DPA-1 polymer sticks easily because of the strong hydrogen bonds between the layered disks. In this paper, the researchers showed that a layer just 60 nanometers thick could extend the lifetime of a perovskite crystal by weeks. Perovskites are materials that hold promise as cheap and lightweight solar cells, but they tend to break down much faster than the silicon solar panels that are now widely used.

A 60-nanometer coating extended the perovskite’s lifetime to about three weeks, but a thicker coating would offer longer protection, the researchers say. The films could also be applied to a variety of other structures.

“Using an impermeable coating such as this one, you could protect infrastructure such as bridges, buildings, rail lines — basically anything outside exposed to the elements. Automotive vehicles, aircraft and ocean vessels could also benefit. Anything that needs to be sheltered from corrosion. The shelf life of food and medications can also be extended using such materials,” Strano says.

The other application demonstrated in this paper is a nanoscale resonator — essentially a tiny drum that vibrates at a particular frequency. Larger resonators, with sizes around 1 millimeter or less, are found in cell phones, where they allow the phone to pick up the frequency bands it uses to transmit and receive signals.

“In this paper, we made the first polymer 2D resonator, which you can do with our material because it’s impermeable and quite strong, like graphene,” Strano says. “Right now, the resonators in your phone and other communications devices are large, but there’s an effort to shrink them using nanotechnology. To make them less than a micron in size would be revolutionary. Cell phones and other devices could be smaller and reduce the power expenditures needed for signal processing.”
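The size-frequency tradeoff Strano describes follows from basic membrane physics. As a hedged illustration (the tension and density values below are invented, not measured properties of 2DPA-1), the fundamental frequency of a circular drum resonator scales inversely with its radius:

```python
# Fundamental frequency of a circular membrane ("drum") resonator:
#   f = (k01 / (2*pi*r)) * sqrt(tension / areal_density)
# Numbers are illustrative only, not measured values for 2DPA-1.

import math

def drum_fundamental_hz(radius_m, tension_n_per_m, areal_density_kg_m2):
    k01 = 2.405  # first zero of the Bessel function J0
    return k01 / (2 * math.pi * radius_m) * math.sqrt(
        tension_n_per_m / areal_density_kg_m2)

# Shrinking the drum from a millimeter to a micron raises the resonant
# frequency by a factor of 1,000, since frequency scales as 1/radius.
f_mm = drum_fundamental_hz(1e-3, 1.0, 1e-6)
f_um = drum_fundamental_hz(1e-6, 1.0, 1e-6)
```

This inverse scaling is why sub-micron resonators would open up very different frequency ranges than the millimeter-scale parts in today's phones.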

Resonators can also be used as sensors to detect very tiny molecules, including gas molecules. 

The research was funded, in part, by the Center for Enhanced Nanofluidic Transport-Phase 2, an Energy Frontier Research Center funded by the U.S. Department of Energy Office of Science, as well as the National Science Foundation.

This research was carried out, in part, using MIT.nano’s facilities.


Teaching large language models how to absorb new knowledge

With a new method developed at MIT, an LLM behaves more like a student, writing notes that it studies to memorize new information.


In an MIT classroom, a professor lectures while students diligently write down notes they will reread later to study and internalize key information ahead of an exam.

Humans know how to learn new information, but large language models can’t do this in the same way. Once a fully trained LLM has been deployed, its “brain” is static and can’t permanently adapt itself to new knowledge.

This means that if a user tells an LLM something important today, it won’t remember that information the next time this person starts a new conversation with the chatbot.

Now, a new approach developed by MIT researchers enables LLMs to update themselves in a way that permanently internalizes new information. Just like a student, the LLM generates its own study sheets from a user’s input, which it uses to memorize the information by updating its inner workings.

The model generates multiple self-edits to learn from one input, then applies each one to see which improves its performance the most. This trial-and-error process teaches the model the best way to train itself.

The researchers found this approach improved the accuracy of LLMs at question-answering and pattern-recognition tasks, and it enabled a small model to outperform much larger LLMs.

While there are still limitations that must be overcome, the technique could someday help artificial intelligence agents consistently adapt to new tasks and achieve changing goals in evolving environments.   

“Just like humans, complex AI systems can’t remain static for their entire lifetimes. These LLMs are not deployed in static environments. They are constantly facing new inputs from users. We want to make a model that is a bit more human-like — one that can keep improving itself,” says Jyothish Pari, an MIT graduate student and co-lead author of a paper on this technique.

He is joined on the paper by co-lead author Adam Zweiger, an MIT undergraduate; graduate students Han Guo and Ekin Akyürek; and senior authors Yoon Kim, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Pulkit Agrawal, an associate professor in EECS and member of CSAIL. The research will be presented at the Conference on Neural Information Processing Systems.

Teaching the model to learn

LLMs are neural network models that have billions of parameters, called weights, that contain the model’s knowledge and process inputs to make predictions. During training, the model adapts these weights to learn new information contained in its training data.

But once it is deployed, the weights are static and can’t be permanently updated anymore.

However, LLMs are very good at a process called in-context learning, in which a trained model learns a new task by seeing a few examples. These examples guide the model’s responses, but the knowledge disappears before the next conversation.
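In-context learning can be made concrete with a schematic few-shot prompt (the task and examples below are invented for illustration): the "learning" lives entirely in the prompt text, so nothing persists once the conversation ends.

```python
# In-context learning, schematically: the examples in the prompt guide the
# model's next answer, but its weights never change, so a fresh conversation
# starts from scratch. The task and word pairs are invented.

few_shot_prompt = """Translate to French:
sea -> mer
sky -> ciel
star -> """

# A capable model would likely continue with the French for "star",
# guided only by the two in-prompt examples.
```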

The MIT researchers wanted to leverage a model’s powerful in-context learning capabilities to teach it how to permanently update its weights when it encounters new knowledge.

The framework they developed, called SEAL for “self-adapting LLMs,” enables an LLM to generate new synthetic data based on an input, and then determine the best way to adapt itself and learn from that synthetic data. Each piece of synthetic data is a self-edit the model can apply.

In the case of language, the LLM creates synthetic data by rewriting the information, and its implications, in an input passage. This is similar to how students make study sheets by rewriting and summarizing original lecture content.

The LLM does this multiple times, then quizzes itself on each self-edit to see which led to the biggest boost in performance on a downstream task like question answering. It uses a trial-and-error method known as reinforcement learning, where it receives a reward for the greatest performance boost.

Then the model memorizes the best study sheet by updating its weights to internalize the information in that self-edit.
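The loop described above can be sketched in a few lines. This is a toy schematic of the idea, not the paper's implementation: the function names (`generate_self_edits`, `evaluate`, `seal_step`) are illustrative, the "model" is a dictionary, and the real weight update is replaced by storing the winning study sheet.

```python
# Toy sketch of the SEAL loop: propose several self-edits, reward each by
# downstream performance, and internalize the best one. Stand-ins throughout;
# a real system would fine-tune LLM weights instead of storing a string.

import random

def generate_self_edits(passage, n=4, rng=None):
    """Stand-in for the LLM rewriting a passage into candidate study sheets."""
    rng = rng or random.Random(0)
    # Toy edits: keep random subsets of the passage's sentences.
    sentences = passage.split(". ")
    return [". ".join(rng.sample(sentences, k=max(1, len(sentences) - i)))
            for i in range(n)]

def evaluate(study_sheet, question_keyword):
    """Stand-in for quizzing the updated model on a downstream task."""
    return 1.0 if question_keyword in study_sheet else 0.0

def seal_step(model, passage, question_keyword):
    """One SEAL iteration: propose edits, reward the best, internalize it."""
    candidates = generate_self_edits(passage)
    # Reinforcement signal: each candidate is scored by how well the model
    # performs downstream after "training" on it.
    scored = [(evaluate(c, question_keyword), c) for c in candidates]
    best_score, best_edit = max(scored)
    # "Weight update": here, simply internalizing the winning study sheet.
    model["memory"] = best_edit
    return model, best_score

model = {"memory": ""}
passage = ("2DPA-1 is a 2D polyaramid. It self-assembles via hydrogen bonds. "
           "It was reported in 2022")
model, score = seal_step(model, passage, "hydrogen")
```

The essential structure survives the simplification: generation of candidates, a reward computed from downstream performance, and a permanent update based on the winner.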

“Our hope is that the model will learn to make the best kind of study sheet — one that is the right length and has the proper diversity of information — such that updating the model based on it leads to a better model,” Zweiger explains.

Choosing the best method

Their framework also allows the model to choose the way it wants to learn the information. For instance, the model can select the synthetic data it wants to use, the rate at which it learns, and how many iterations it wants to train on.

In this case, not only does the model generate its own training data, but it also configures the optimization that applies that self-edit to its weights.
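A self-edit that configures its own optimization might be sketched as follows. The field names and the dummy gradient are illustrative assumptions, not the paper's format:

```python
# Sketch: a SEAL-style self-edit carries both its synthetic data and the
# optimization settings used to apply it. Field names are illustrative.

self_edit = {
    "synthetic_data": ["2DPA-1 is gas-impermeable.",
                       "Its layers pack flat with no free volume."],
    "learning_rate": 1e-4,   # how aggressively to update the weights
    "num_epochs": 3,         # how many passes over the synthetic data
}

def apply_self_edit(weights, edit, gradient_fn):
    """Toy inner loop: a few gradient steps on the edit's own schedule."""
    for _ in range(edit["num_epochs"]):
        for example in edit["synthetic_data"]:
            grad = gradient_fn(weights, example)
            weights = [w - edit["learning_rate"] * g
                       for w, g in zip(weights, grad)]
    return weights

# Dummy gradient that nudges each weight toward the example's length,
# standing in for a real language-modeling loss.
updated = apply_self_edit([0.0, 0.0], self_edit,
                          lambda w, ex: [wi - len(ex) for wi in w])
```

The point is the division of labor: the outer RL loop picks which self-edit wins, while the self-edit itself dictates how aggressively and how long the inner update runs.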

“As humans, we know how we learn best. We want to grant that same ability to large language models. By providing the model with the ability to control how it digests this information, it can figure out the best way to parse all the data that are coming in,” Pari says.

SEAL outperformed several baseline methods across a range of tasks, including learning a new skill from a few examples and incorporating knowledge from a text passage. On question answering, SEAL improved model accuracy by nearly 15 percent, and on some skill-learning tasks it boosted the success rate by more than 50 percent.

But one limitation of this approach is a problem called catastrophic forgetting: As the model repeatedly adapts to new information, its performance on earlier tasks slowly declines.

The researchers plan to mitigate catastrophic forgetting in future work. They also want to apply this technique in a multi-agent setting where several LLMs train each other.

“One of the key barriers to LLMs that can do meaningful scientific research is their inability to update themselves based on their interactions with new information. Though fully deployed self-adapting models are still far off, we hope systems able to learn this way could eventually overcome this and help advance science,” Zweiger says.

This work is supported, in part, by the U.S. Army Research Office, the U.S. Air Force AI Accelerator, the Stevens Fund for MIT UROP, and the MIT-IBM Watson AI Lab. 


Understanding the nuances of human-like intelligence

Associate Professor Phillip Isola studies the ways in which intelligent machines “think,” in an effort to safely integrate AI into human society.


What can we learn about human intelligence by studying how machines “think”? Can we better understand ourselves if we better understand the artificial intelligence systems that are becoming a more significant part of our everyday lives?

These questions may be deeply philosophical, but for Phillip Isola, finding the answers is as much about computation as it is about cogitation.

Isola, the newly tenured associate professor in the Department of Electrical Engineering and Computer Science (EECS), studies the fundamental mechanisms involved in human-like intelligence from a computational perspective.

While understanding intelligence is the overarching goal, his work focuses mainly on computer vision and machine learning. Isola is particularly interested in exploring how intelligence emerges in AI models, how these models learn to represent the world around them, and what their “brains” share with the brains of their human creators.

“I see all the different kinds of intelligence as having a lot of commonalities, and I’d like to understand those commonalities. What is it that all animals, humans, and AIs have in common?” says Isola, who is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

To Isola, a better scientific understanding of the intelligence that AI agents possess will help the world integrate them safely and effectively into society, maximizing their potential to benefit humanity.

Asking questions

Isola began pondering scientific questions at a young age.

While growing up in San Francisco, he and his father frequently went hiking along the northern California coastline or camping around Point Reyes and in the hills of Marin County.

He was fascinated by geological processes and often wondered what made the natural world work. In school, Isola was driven by an insatiable curiosity, and while he gravitated toward technical subjects like math and science, there was no limit to what he wanted to learn.

Not entirely sure what to study as an undergraduate at Yale University, Isola dabbled until he came upon cognitive sciences.

“My earlier interest had been with nature — how the world works. But then I realized that the brain was even more interesting, and more complex than even the formation of the planets. Now, I wanted to know what makes us tick,” he says.

As a first-year student, he started working in the lab of his cognitive sciences professor and soon-to-be mentor, Brian Scholl, a member of the Yale Department of Psychology. He remained in that lab throughout his time as an undergraduate.

After spending a gap year working with some childhood friends at an indie video game company, Isola was ready to dive back into the complex world of the human brain. He enrolled in the graduate program in brain and cognitive sciences at MIT.

“Grad school was where I felt like I finally found my place. I had a lot of great experiences at Yale and in other phases of my life, but when I got to MIT, I realized this was the work I really loved and these are the people who think similarly to me,” he says.

Isola credits his PhD advisor, Ted Adelson, the John and Dorothy Wilson Professor of Vision Science, as a major influence on his future path. He was inspired by Adelson’s focus on understanding fundamental principles, rather than only chasing new engineering benchmarks, which are formalized tests used to measure the performance of a system.

A computational perspective

At MIT, Isola’s research drifted toward computer science and artificial intelligence.

“I still loved all those questions from cognitive sciences, but I felt I could make more progress on some of those questions if I came at it from a purely computational perspective,” he says.

His thesis was focused on perceptual grouping, which involves the mechanisms people and machines use to organize discrete parts of an image as a single, coherent object.

If machines can learn perceptual groupings on their own, that could enable AI systems to recognize objects without human intervention. This type of self-supervised learning has applications in areas such as autonomous vehicles, medical imaging, robotics, and automatic language translation.

After graduating from MIT, Isola completed a postdoc at the University of California at Berkeley so he could broaden his perspectives by working in a lab solely focused on computer science.

“That experience helped my work become a lot more impactful because I learned to balance understanding fundamental, abstract principles of intelligence with the pursuit of some more concrete benchmarks,” Isola recalls.

At Berkeley, he developed image-to-image translation frameworks, an early form of generative AI model that could turn a sketch into a photographic image, for instance, or turn a black-and-white photo into a color one.

Isola entered the academic job market and accepted a faculty position at MIT, but deferred for a year to work at a then-small startup called OpenAI.

“It was a nonprofit, and I liked the idealistic mission at that time. They were really good at reinforcement learning, and I thought that seemed like an important topic to learn more about,” he says.

He enjoyed working in a lab with so much scientific freedom, but after a year Isola was ready to return to MIT and start his own research group.

Studying human-like intelligence

Running a research lab instantly appealed to him.

“I really love the early stage of an idea. I feel like I am a sort of startup incubator where I am constantly able to do new things and learn new things,” he says.

Building on his interest in cognitive sciences and desire to understand the human brain, his group studies the fundamental computations involved in the human-like intelligence that emerges in machines.

One primary focus is representation learning, or the ability of humans and machines to represent and perceive the sensory world around them.

In recent work, he and his collaborators observed that the many varied types of machine-learning models, from LLMs to computer vision models to audio models, seem to represent the world in similar ways.

These models are designed to do vastly different tasks, but there are many similarities in their architectures. And as they get bigger and are trained on more data, their internal structures become more alike.

This led Isola and his team to introduce the Platonic Representation Hypothesis (drawing its name from the Greek philosopher Plato), which says that the representations all these models learn are converging toward a shared, underlying representation of reality.

“Language, images, sound — all of these are different shadows on the wall from which you can infer that there is some kind of underlying physical process — some kind of causal reality — out there. If you train models on all these different types of data, they should converge on that world model in the end,” Isola says.
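One way to probe the convergence Isola describes is to compare two models' representations not coordinate by coordinate, but by whether they induce the same similarity structure over the same inputs. The numpy sketch below is a hedged illustration: the alignment metric here (correlating the two models' pairwise-similarity matrices) is an illustrative choice, not necessarily the one used in the team's work.

```python
# Sketch: two models "represent the world in similar ways" if the same set
# of inputs has the same pairwise-similarity structure in both embedding
# spaces, even when the spaces themselves differ.

import numpy as np

def similarity_structure(embeddings):
    """Pairwise cosine-similarity matrix of a set of embeddings."""
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return x @ x.T

def alignment(emb_a, emb_b):
    """Correlation between the two models' similarity matrices
    (upper triangles only, to skip the trivial diagonal)."""
    sa = similarity_structure(emb_a)[np.triu_indices(len(emb_a), k=1)]
    sb = similarity_structure(emb_b)[np.triu_indices(len(emb_b), k=1)]
    return float(np.corrcoef(sa, sb)[0, 1])

# Two "models" embedding the same 5 inputs: model B is a rotated, rescaled
# copy of model A, so their similarity structures match perfectly even
# though their raw coordinates do not.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(5, 8))
rotation, _ = np.linalg.qr(rng.normal(size=(8, 8)))
emb_b = 2.0 * emb_a @ rotation
print(round(alignment(emb_a, emb_b), 3))   # → 1.0
```

Under the hypothesis, models trained on different modalities would score increasingly high on measures of this kind as they scale.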

A related area his team studies is self-supervised learning. This involves the ways in which AI models learn to group related pixels in an image or words in a sentence without having labeled examples to learn from.

Because data are expensive and labels are limited, using only labeled data to train models could hold back the capabilities of AI systems. With self-supervised learning, the goal is to develop models that can come up with an accurate internal representation of the world on their own.

“If you can come up with a good representation of the world, that should make subsequent problem solving easier,” he explains.
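The label-free idea can be made concrete with a toy contrastive setup, a standard form of self-supervised learning. Everything below is invented for illustration: the training signal comes from agreement between two noisy "views" of the same input, with no labels anywhere.

```python
# Toy contrastive self-supervision: two augmented views of the same input
# should look more similar to each other than to views of other inputs.
# Data and augmentation are invented for illustration.

import math, random

def augment(x, rng):
    """A 'view' of x: the same vector plus small noise."""
    return [xi + rng.gauss(0, 0.01) for xi in x]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_gap(batch, rng):
    """Mean same-input view similarity minus mean cross-input similarity.
    A good representation makes this gap large, with no labels needed."""
    views = [(augment(x, rng), augment(x, rng)) for x in batch]
    positive = sum(cosine(v1, v2) for v1, v2 in views) / len(views)
    negative = sum(cosine(views[i][0], views[j][0])
                   for i in range(len(views)) for j in range(len(views))
                   if i != j) / (len(views) * (len(views) - 1))
    return positive - negative

rng = random.Random(0)
batch = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
gap = contrastive_gap(batch, rng)
```

Maximizing a gap like this one is what pushes a model toward an internal representation that groups related inputs together on its own.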

The focus of Isola’s research is more about finding something new and surprising than about building complex systems that can outdo the latest machine-learning benchmarks.

While this approach has yielded much success in uncovering innovative techniques and architectures, it means the work sometimes lacks a concrete end goal, which can lead to challenges.

For instance, keeping a team aligned and the funding flowing can be difficult when the lab is focused on searching for unexpected results, he says.

“In a sense, we are always working in the dark. It is high-risk and high-reward work. Every once in a while, we find some kernel of truth that is new and surprising,” he says.

In addition to pursuing knowledge, Isola is passionate about imparting knowledge to the next generation of scientists and engineers. Among his favorite courses to teach is 6.7960 (Deep Learning), which he and several other MIT faculty members launched four years ago.

The class has seen exponential growth, from 30 students in its initial offering to more than 700 this fall.

And while the popularity of AI means there is no shortage of interested students, the speed at which the field moves can make it difficult to separate the hype from truly significant advances.

“I tell the students they have to take everything we say in the class with a grain of salt. Maybe in a few years we’ll tell them something different. We are really on the edge of knowledge with this course,” he says.

But Isola also emphasizes to students that, for all the hype surrounding the latest AI models, intelligent machines are far simpler than most people suspect.

“Human ingenuity, creativity, and emotions — many people believe these can never be modeled. That might turn out to be true, but I think intelligence is fairly simple once we understand it,” he says.

Even though his current work focuses on deep-learning models, Isola is still fascinated by the complexity of the human brain and continues to collaborate with researchers who study cognitive sciences.

All the while, he has remained captivated by the beauty of the natural world that inspired his first interest in science.

Although he has less time for hobbies these days, Isola enjoys hiking and backpacking in the mountains or on Cape Cod, skiing and kayaking, or finding scenic places to spend time when he travels for scientific conferences.

And while he looks forward to exploring new questions in his lab at MIT, Isola can’t help but contemplate how the role of intelligent machines might change the course of his work.

He believes that artificial general intelligence (AGI), or the point where machines can learn and apply their knowledge as well as humans can, is not that far off.

“I don’t think AIs will just do everything for us and we’ll go and enjoy life at the beach. I think there is going to be this coexistence between smart machines and humans who still have a lot of agency and control. Now, I’m thinking about the interesting questions and applications once that happens. How can I help the world in this post-AGI future? I don’t have any answers yet, but it’s on my mind,” he says.