General news from MIT (Massachusetts Institute of Technology)

Here you will find the recent daily general news from MIT (Massachusetts Institute of Technology).

MIT News
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
What Americans actually think about taxes

Andrea Campbell’s new book shows that what we say we want on taxes doesn’t always match what we prefer in practice.


Doing your taxes can feel like a very complicated task. Even so, it might be less intricate than trying to make sense of what people think about taxes.

Several years ago, MIT political scientist Andrea Campbell undertook an expansive research project to understand public opinion about taxation. Her efforts have now come to fruition in a new book uncovering many complexities in attitudes toward taxes. Those complexities include a central tension: In the U.S., most people say they support the principle of progressive taxation — in which higher earners pay higher shares of their income. Yet people also say they prefer specific forms of taxes that are regressive, hitting lower- and middle-income earners relatively harder.

For instance, state sales taxes are considered regressive, since people who make less money spend a larger percentage of their incomes, meaning sales taxes eat up a larger proportion of their earnings. But a substantial portion of the public still finds them to be fair, partly because the wealthy cannot wriggle out of them.

“At an abstract or conceptual level, people say they like progressive tax systems more than flat or regressive tax systems,” Campbell says. “But when you look at public attitudes toward specific taxes, people’s views flip upside down. People say federal and state income taxes are unfair, but they say sales taxes, which are very regressive, are fair. Their attitudes on individual taxes are the opposite of what their overall commitments are.”

Now Campbell analyzes these issues in detail in her book, “Taxation and Resentment,” just published by Princeton University Press. Campbell is the Arthur and Ruth Sloan Professor of Political Science at MIT and a former head of MIT’s Department of Political Science.

Filling out the record

Campbell originally planned “Taxation and Resentment” as a strictly historical look at the subject. But the absence of any one book compiling public-opinion data in this area was striking. So she assembled data going back to the end of World War II, and even designed and ran a couple of her own public opinion surveys, which help undergird the book’s numbers.

“Political scientists write a lot about public attitudes toward spending in the United States, but not so much about attitudes toward taxes,” Campbell says. “The public-opinion record is very thin.”

The complexities of U.S. public opinion on taxes are plainly linked to the presence of numerous forms of taxes, including federal and state income taxes, sales taxes, payroll taxes, estate taxes, and capital gains taxes. The best-known, of course, is the federal income tax, whose quirks and loopholes seem to irk citizens.

“That really seizes people’s imaginations,” Campbell says. “Keeping the focus on federal income tax has been a clever strategy among those who want to cut it. People think it’s unfair because they look at all the tax breaks the rich get and think, ‘I don’t have access to those.’ Those breaks increase complexity, undermine people’s knowledge, heighten their anger, and of course are in there because they help rich people pay less. So, there ends up being a cycle.”

That same sense of unfairness does not translate to all other forms of taxation, however. Large majorities of people have supported lowering the estate tax, for example, even though the federal estate tax kicks in only above a $13.5 million threshold that very few families ever reach.

Then too, the public seems to perceive sales taxes as fair because of their simplicity and lack of loopholes — an understandable view, but one that ignores the way that state sales taxes, as opposed to state income taxes, place a bigger burden on middle-class and lower-income workers.

“A regressive tax like a sales tax is more difficult to comprehend,” Campbell says. “We all pay the same rate, so it seems like a flat tax, but as your income goes up, the bite of that tax goes down. And that’s just very difficult for people to understand.”
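Campbell’s point is, at bottom, arithmetic. A minimal sketch (with invented incomes and spending shares, purely illustrative and not drawn from the book) shows how a flat rate on spending takes a shrinking bite of income as earnings rise:

```python
# Toy illustration of sales-tax regressivity (invented numbers, not book data).
# Lower earners spend a larger share of income, so a flat rate on spending
# consumes a larger share of their income.
SALES_TAX_RATE = 0.07  # a flat 7 percent rate, identical for everyone

households = [
    # (annual income, fraction of income spent on taxable goods)
    (30_000, 0.90),   # lower-income: spends nearly everything
    (100_000, 0.60),
    (500_000, 0.25),  # higher-income: saves or invests most income
]

for income, spend_share in households:
    tax_paid = income * spend_share * SALES_TAX_RATE
    print(f"income ${income:>7,}: tax ${tax_paid:>6,.0f}"
          f" = {tax_paid / income:.1%} of income")
# Prints 6.3%, 4.2%, 1.8% — the same rate, a shrinking share of income.
```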

Overall, as Campbell details, income levels do not have huge predictive value when it comes to tax attitudes. Party affiliation also has less impact than many people might suspect: Democrats and Republicans differ on taxes, though in some ways less than political independents, who often hold the most anti-tax views of all.

Meanwhile, Campbell finds, white Americans with heightened concerns about redistribution of public goods among varying demographic groups are more opposed to taxes than those who do not share those concerns. And Black and Hispanic Americans, who may wind up on the short end of regressive policies, also express strongly anti-tax views, albeit while voicing more support for the state functions funded by taxation.

“There are so many factors and components of public opinion around taxes,” Campbell says. “Many political and demographic groups have their own reasons for disliking the status quo.”

How much does public opinion matter?

The research in “Taxation and Resentment” will be of high value to many kinds of scholars. However, as Campbell notes, political scientists have not reached consensus on how much public opinion influences policy. Some experts contend that donors and lobbyists essentially determine policy while the larger public is ignored. But Campbell does not agree that public sentiment amounts to nothing. Consider, she says, the vigorous and successful public campaign to lower the estate tax in the first decade of the 2000s.

“If public opinion doesn’t matter, then why were there these PR campaigns to try to convince people the estate tax was bad for small businesses, farmers, and other groups?” Campbell asks. “Clearly it’s because public opinion does matter. It’s far easier to get these policies implemented if the public is on your side than if the public is in opposition. Public opinion is not the only factor in policymaking, but it’s a contributing factor.”

To be sure, even in the formation of public opinion, there are complexities and nuance, as Campbell notes in the book. A system of progressive taxation means the people taxed at the highest rate are the most motivated to oppose the system — and may heavily influence public opinion, in a top-down manner.

Scholars in the field have praised “Taxation and Resentment.” Martin Gilens, chair of the Department of Public Policy at the University of California at Los Angeles, has called it an “important and very welcome addition to the literature on public attitudes about public policies … with rich and often unexpected findings.” Vanessa Williamson, a senior fellow at the Brookings Institution, has said the book is “essential reading for anyone who wants to understand what Americans actually think about taxes. The scope of the data Campbell brings to bear on this question is unparalleled, and the depth of her analysis of public opinion across time and demography is a monumental achievement.”

For her part, Campbell says she hopes people in a variety of groups will read the book — including policymakers, scholars in multiple fields, and students. And having studied the issue, she is certain that more people could stand to know more about taxes.

“The tax system is complex,” Campbell says, “and people don’t always understand their own stakes. There is often a fog surrounding taxes.”


MIT launches a “moonshot for menstruation science”

The Fairbairn Menstruation Science Fund will allow researchers to accelerate the understanding and treatment of often-neglected diseases that tend to be more common in women.


The MIT Health and Life Sciences Collaborative (MIT HEALS) has announced the establishment of the Fairbairn Menstruation Science Fund, supporting a bold, high-impact initiative designed to revolutionize women’s health research.

Established through a gift from Emily and Malcolm Fairbairn, the fund will advance groundbreaking research on the function of the human uterus and its impact on sex-based differences in human immunology that contribute to gynecological disorders such as endometriosis, as well as other chronic systemic inflammatory diseases that disproportionately affect women, such as Lyme disease and lupus. The Fairbairns, based in the San Francisco Bay Area, have committed $10 million, with a call to action for an additional $10 million in matching funds.

“I’m deeply grateful to Emily and Malcolm Fairbairn for their visionary support of menstruation science at MIT. For too long, this area of research has lacked broad scientific investment and visibility, despite its profound impact on the health and lives of over half the population,” says Anantha P. Chandrakasan, MIT provost who was chief innovation and strategy officer and dean of engineering at the time of the gift, and Vannevar Bush Professor of Electrical Engineering and Computer Science.

Chandrakasan adds: “Thanks to groundbreaking work from researchers like Professor Linda Griffith and her team at the MIT Center for Gynepathology Research (CGR), we have an opportunity to advance our understanding and address critical challenges in menstruation science.”

Griffith, professor of biological and mechanical engineering and director of CGR, says the Fairbairn Fund will permit the illumination of “the enormous sex-based differences in human immunity” and advance next-generation drug-discovery technologies.

One main thrust of the new initiative will further the development of “organs on chips,” living models of patients. Using living cells or tissues, such devices allow researchers to replicate and experiment with interactions that can occur in the body. Griffith and an interdisciplinary team of researchers have engineered a powerful microfluidic platform that supports chips that foster growth of tissues complete with blood vessels and circulating immune cells. The technology was developed for building endometriosis lesions from individual patients with known clinical characteristics. The chip allows the researchers to do preclinical testing of drugs on the human patient-derived endometriosis model rather than on laboratory animals, which often do not menstruate naturally and whose immune systems function differently from those of humans.

The Fairbairn Fund will build the infrastructure for a “living patient avatar” facility to develop such physiomimetic models for all kinds of health conditions.

“We acknowledge that there are some big-picture phenomenological questions that one can study in animals, but human immunology is so very different,” Griffith says. “Pharma and biotech realize that we need living models of patients and the computational models of carefully curated patient data if we are to move into greater success in clinical trials.”

The computational models of patient data that Griffith refers to are a key element in choosing how to design the patient avatars and determine which therapeutics to test on them. For instance, by using systems biology analysis of inflammation in patient abdominal fluid, Griffith and her collaborators identified an intracellular enzyme called c-Jun N-terminal kinase (JNK). They are now working with a biotech company to test specific inhibitors of JNK in their model. Griffith has also collaborated with Michal “Mikki” Tal, a principal scientist in MIT’s Department of Biological Engineering, on investigating a possible link between prior infection, such as by the Lyme-causing bacterium Borrelia, and a number of chronic inflammatory diseases in women. Automating assays of patient samples for higher throughput could systematically speed the generation of hypotheses that guide experiments on the patient models.

“This fund is catalytic,” Griffith says. “Industry and government, along with other foundations, will invest if the foundational infrastructure exists. They want to employ the technologies, but it is hard to get them developed to the point they are proven to be useful. This gets us through that difficult part of the journey.”

The fund will also support public engagement efforts to reduce stigma around menstruation and neglect of such conditions as abnormal uterine bleeding and debilitating anemia, endometriosis, and polycystic ovary syndrome — and in general bring greater attention to women’s health research. Endometriosis, for instance, in which tissue that resembles the uterine lining starts growing outside the uterus and causes painful inflammation, affects one in 10 women. It often goes undiagnosed for years, and can require repeated surgeries to remove its lesions. Meanwhile, little is known about what causes it, how to prevent it, or what could effectively stop it.

Women’s health research could further advance in many areas of medicine beyond conditions that disproportionately affect females. Griffith points out that the uterus, which sheds and regenerates its lining every month, demonstrates “scarless healing” that could warrant investigation. Also, deepened study of the uterus could shed light on immune tolerance for transplants, given that in a successful pregnancy an implanted fetus is not rejected, despite containing foreign material from the biological father.

For Emily Fairbairn, the fund is a critical step toward major advances in an often-overlooked area of medicine.

“My mission is to support intellectually honest, open-minded scientists who embrace risk, treat failure as feedback, and remain committed to discovery over dogma. This fund is a direct extension of that philosophy. It’s designed to fuel research into the biological realities of diseases that remain poorly understood, frequently dismissed, or disproportionately misdiagnosed in women,” Fairbairn says. “I’ve chosen to make this gift to MIT because Linda Griffith exemplifies the rare combination of scientific integrity and bold innovation — qualities essential for tackling the most neglected challenges in medicine.”

Fairbairn also refers to Griffith collaborator Michal Tal as being “deeply inspiring.”

“Her work embodies what’s possible when scientific excellence meets institutional courage. It is this spirit — bold, rigorous, and fearless — that inspired this gift and fuels our hope for the future of women’s health,” she says.

Fairbairn, who has suffered from both Lyme disease and endometriosis that required multiple surgeries, originally directed her philanthropy, including previous gifts to MIT, toward the study of Lyme disease and associated infections.

“My own experience with both Lyme and endometriosis deepened my conviction that science must better account for how female physiology, genetics, and psychology differ from men’s,” she says. “MIT stands out for treating women’s health not as a niche, but as a frontier. The Institute’s willingness to bridge immunology, neurobiology, bioengineering, and data science — alongside its development of cutting-edge platforms like human chips — offers a rare and necessary seriousness of purpose.”

For her part, Griffith refers to Fairbairn as “a citizen scientist who inspires us daily.”

“Her tireless advocacy for patients, especially women, who are dismissed and gaslit, is priceless,” Griffith adds. “Emily has made me a better scientist, in service of humanity.”


Model predicts long-term effects of nuclear waste on underground disposal systems

The simulations matched results from an underground lab experiment in Switzerland, suggesting modeling could be used to validate the safety of nuclear disposal sites.


As countries across the world experience a resurgence in nuclear energy projects, the questions of where and how to dispose of nuclear waste remain as politically fraught as ever. The United States, for instance, has indefinitely stalled its only long-term underground nuclear waste repository. Scientists are using both modeling and experimental methods to study the effects of underground nuclear waste disposal and, they hope, ultimately to build public trust in the decision-making process.

New research from scientists at MIT, Lawrence Berkeley National Lab, and the University of Orléans makes progress in that direction. The study shows that simulations of underground nuclear waste interactions, generated by new, high-performance-computing software, aligned well with experimental results from a research facility in Switzerland.

The study, which was co-authored by MIT PhD student Dauren Sarsenbayev and Assistant Professor Haruko Wainwright, along with Christophe Tournassat and Carl Steefel, appears in the journal PNAS.

“These powerful new computational tools, coupled with real-world experiments like those at the Mont Terri research site in Switzerland, help us understand how radionuclides will migrate in coupled underground systems,” says Sarsenbayev, who is first author of the new study.

The authors hope the research will improve confidence among policymakers and the public in the long-term safety of underground nuclear waste disposal.

“This research — coupling both computation and experiments — is important to improve our confidence in waste disposal safety assessments,” says Wainwright. “With nuclear energy re-emerging as a key source for tackling climate change and ensuring energy security, it is critical to validate disposal pathways.”

Comparing simulations with experiments

Disposing of nuclear waste in deep underground geological formations is currently considered the safest long-term solution for managing high-level radioactive waste. As such, much effort has been put into studying the migration behaviors of radionuclides from nuclear waste within various natural and engineered geological materials.

Since its founding in 1996, the Mont Terri research site in northern Switzerland has served as an important test bed for an international consortium of researchers interested in studying materials like Opalinus clay — a thick, watertight claystone abundant in the tunneled areas of the mountain.

“It is widely regarded as one of the most valuable real-world experiment sites because it provides us with decades of datasets around the interactions of cement and clay, and those are the key materials proposed to be used by countries across the world for engineered barrier systems and geological repositories for nuclear waste,” explains Sarsenbayev.

For their study, Sarsenbayev and Wainwright collaborated with co-authors Tournassat and Steefel, who have developed high-performance computing software to improve modeling of interactions between the nuclear waste and both engineered and natural materials.

To date, several challenges have limited scientists’ understanding of how nuclear waste reacts with cement-clay barriers. For one thing, the barriers are made up of irregularly mixed materials deep underground. Additionally, the existing class of models commonly used to simulate radionuclide interactions with cement-clay does not take into account electrostatic effects associated with the negatively charged clay minerals in the barriers.

Tournassat and Steefel’s new software accounts for electrostatic effects, making it the only one that can simulate those interactions in three-dimensional space. The software, called CrunchODiTi, was developed from established software known as CrunchFlow and was most recently updated this year. It is designed to run in parallel across many high-performance computing processors at once.

For the study, the researchers looked at a 13-year-old experiment, with an initial focus on cement-clay rock interactions. Within the last several years, a mix of both negatively and positively charged ions was added to the borehole located near the center of the cement emplaced in the formation. The researchers focused on a 1-centimeter-thick zone between the radionuclides and cement-clay referred to as the “skin.” They compared their experimental results to the software simulation, finding that the two datasets aligned well.

“The results are quite significant because previously, these models wouldn’t fit field data very well,” Sarsenbayev says. “It’s interesting how fine-scale phenomena at the ‘skin’ between cement and clay, the physical and chemical properties of which change over time, could be used to reconcile the experimental and simulation data.”

The experimental results showed the model successfully accounted for electrostatic effects associated with the clay-rich formation and the interaction between materials in Mont Terri over time.

“This is all driven by decades of work to understand what happens at these interfaces,” Sarsenbayev says. “It’s been hypothesized that there is mineral precipitation and porosity clogging at this interface, and our results strongly suggest that.”

“This application requires millions of degrees of freedom because these multibarrier systems require high resolution and a lot of computational power,” Sarsenbayev says. “This software is really ideal for the Mont Terri experiment.”
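The production code is far beyond a short example, but the kernel of such simulations — tracking how a dissolved species migrates through barrier materials with differing transport properties — can be sketched in a few lines. The toy one-dimensional model below uses invented parameters and plain Fickian diffusion; the real CrunchODiTi simulations additionally couple electrostatics and geochemistry in three dimensions:

```python
import numpy as np

# Toy 1D diffusion of a tracer from a borehole into a cement-clay barrier.
# Purely illustrative: invented parameters, no electrostatics or chemistry.
nx, dx = 200, 1e-3            # 200 cells of 1 mm -> a 20 cm domain
D = np.full(nx, 1e-10)        # baseline diffusivity (m^2/s)
D[95:105] = 2e-11             # a 1 cm "skin" zone with reduced diffusivity
c = np.zeros(nx)
c[0] = 1.0                    # fixed tracer concentration at the borehole wall

dt = 0.25 * dx**2 / D.max()   # time step within explicit-scheme stability
years = 5
steps = int(years * 365.25 * 86400 / dt)

for _ in range(steps):
    # Fickian flux between neighboring cells (left-cell diffusivity, for simplicity)
    flux = D[:-1] * (c[1:] - c[:-1]) / dx
    c[1:-1] += dt / dx * (flux[1:] - flux[:-1])
    c[0], c[-1] = 1.0, 0.0    # boundary conditions

print(f"concentration 5 cm into the barrier after {years} years: {c[50]:.3f}")
```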

Assessing waste disposal plans

The new model could now replace older models that have been used to conduct safety and performance assessments of underground geological repositories.

“If the U.S. eventually decides to dispose of nuclear waste in a geological repository, then these models could dictate the most appropriate materials to use,” Sarsenbayev says. “For instance, right now clay is considered an appropriate storage material, but salt formations are another potential medium that could be used. These models allow us to see the fate of radionuclides over millennia. We can use them to understand interactions at timespans that vary from months to years to many millions of years.”

Sarsenbayev says the model is reasonably accessible to other researchers and that future efforts may focus on the use of machine learning to develop less computationally expensive surrogate models.

Further data from the experiment will be available later this month. The team plans to compare those data to additional simulations.

“Our collaborators will basically get this block of cement and clay, and they’ll be able to run experiments to determine the exact thickness of the skin along with all of the minerals and processes present at this interface,” Sarsenbayev says. “It’s a huge project and it takes time, but we wanted to share initial data and this software as soon as we could.”

For now, the researchers hope their study leads to a long-term solution for storing nuclear waste that policymakers and the public can support.

“This is an interdisciplinary study that includes real world experiments showing we’re able to predict radionuclides’ fate in the subsurface,” Sarsenbayev says. “The motto of MIT’s Department of Nuclear Science and Engineering is ‘Science. Systems. Society.’ I think this merges all three domains.”


Helping cities evolve

Economics graduate student Vincent Rollet studies how housing, regulation, and politics interact to shape the future of cities.


Growing up in Paris, Vincent Rollet was exposed to the world beyond France from an early age. His dad was an engineer who traveled around the globe to set up electrical infrastructure, and he moved the family to the United States for two years when Rollet was a small child. His father’s work sparked Rollet’s interest in international development and growth. “It made me want to see and learn how things work in other parts of the world,” he says.

Today, Rollet is a fifth-year PhD student in MIT’s Department of Economics, studying how cities evolve — and how they may become constrained by their past. “Cities constantly need to adapt to economic changes,” he explains. “For example, you might need more housing as populations grow, or want to transform manufacturing spaces into modern lab facilities. With the rise of remote work, many cities now have excess office space that could potentially become residential housing.” Ultimately, Rollet hopes his research can influence urban policymakers to better serve city residents.

A happy accident

Rollet’s first exposure to economics was almost accidental. As a teenager, he stumbled upon the lecture videos of a game theory course at Yale University. “I randomly clicked on the available courses,” he says, “and I watched the videos, and I found it interesting.”

In high school and college, he focused on math and physics. “It’s the kind of training you’re typically pushed to do in France,” he says. But at the end of his first year at École Polytechnique, a year of mandatory military training for all students, he remembered the Yale course that he had watched in high school. He had spent that year helping run a military service program for disadvantaged youth. “I was looking for an enjoyable way to start studying again,” he says. “So I went back to game theory.”

Rollet decided to take a game theory course with an economics professor, Pierre Boyer, who would play a key role in his academic path. Through conversations with Boyer, Rollet learned that economics could provide a rigorous, mathematical approach to understanding the topics around international development and international politics that had long fascinated him. Boyer introduced Rollet to two MIT-trained economists, professors Vincent Pons and Benjamin Marx, with whom he continues to collaborate today. A research visit to the U.S. in 2019 to work with them solidified his interest in pursuing graduate school. Shortly thereafter, he began his PhD at MIT.

Why cities get “stuck”

Rollet’s research explores why cities struggle to adapt their built environments as economic conditions shift, and why certain urban spaces become “stuck” in outdated patterns of development. He’s drawn to cities because they are a microcosm of different interacting systems in economics. “To understand cities, you need to understand how labor markets work, how the housing market works, and how transportation works,” he notes.

Rollet has spent most of his PhD focusing on New York City. By examining detailed data on building permits, real estate transactions, rents, and zoning changes, he has tracked the evolution of every building in the city over nearly two decades, studying when and why developers choose to demolish buildings and construct new ones, and how these decisions are influenced by economic, regulatory, and technological constraints. By combining computational theory and data — which often includes information on natural experiments (for example, what happens when a city changes a regulation?) — Rollet aims to reveal generalizable principles underlying how cities grow and evolve.

Originally shaped as a manufacturing hub with dense commercial centers and sprawling residential outskirts, New York’s physical structure has been largely frozen since zoning regulations were imposed in the 1960s. Despite dramatic shifts in population and economic activity, the city’s regulations have barely budged, creating profound mismatches: soaring housing costs, overcrowded residential areas, and underutilized commercial spaces. The buildings are expensive to replace, and regulations are notoriously hard to change once they are established.

Rollet’s findings reveal critical inefficiencies. In cities like New York or Boston, housing often sells for hundreds of thousands of dollars more than it costs to build. This large gap suggests that demand far outpaces supply: There simply aren’t enough homes being built. “When the housing supply is too constrained, we are effectively wasting resources, making housing unnecessarily expensive,” he explains.

But implementing any kind of action or policy to alleviate these inefficiencies has downstream effects. For example, it can have different impacts on different groups of people. “There will be winners and losers,” Rollet explains. “One reason is that you might directly care about the welfare of a certain group, like directly providing housing for lower-income households. Another reason is that if there are sufficiently many people who are losers of a certain policy, or if they’re sufficiently powerful, they’re going to be able to block the policy change, and this poses a political constraint.”

So what makes a city “stuck”? “Much of the time,” Rollet says, “it’s policy.” But the effects of policy changes take time to materialize and might be difficult for people to detect. Rollet cites Cambridge’s recent zoning reform allowing the construction of six-story buildings as a case in point. “These policy changes can benefit a lot of people, by reducing the housing prices a bit for everyone,” he says, “but individual people won’t know it. This makes collective action very hard.”

Economics, however, provides a toolkit to characterize and quantify these effects. “What economists can bring to the table is to give policymakers more information on the likely consequences of their policy actions,” Rollet says.

Striving to “improve things”

As Rollet enters the home stretch of his PhD, he’s grateful to his advisors in the economics department for helping him develop a foundation for the diverse set of tools necessary for his work. From professors Dave Donaldson and David Atkin, he learned how to adapt methods traditionally used in the study of international trade to analyze the movement of people across neighborhoods and cities. From Professor Tobias Salz, he gained insights into modeling the behavior of firms over time, which he now applies to understanding the actions of real estate developers. “The training here pushes you to produce research that truly stands out,” he says. “The courses helped me discover a new set of fields and methods.”

Beyond research, Rollet actively contributes to his department, including serving as the co-president of the Graduate Economics Association. “MIT is truly the best place for economics, not just because of their courses, but because it’s a really friendly department where people help each other out,” he says. “The Graduate Economics Association helps to build that sense of community, and I wanted to be a part of that.” In addition, he is a member of a mental health and peer support group in the department.

Rollet also enjoys teaching. He has been a teaching assistant for microeconomics and international trade courses and has built an impressive writing repertoire explaining complex concepts in several fields. In high school, one of Rollet’s hobbies was writing quantum theory explainers on the internet for general audiences. Some publishers found his writing and contacted him about turning it into a book. The book was published and has sold more than 14,000 copies. As a college student, Rollet worked on two books: one on game theory for general audiences, and an introductory economics textbook that two professors recruited him to co-author. It’s still the standard textbook at École Polytechnique today. “It was my Covid activity,” Rollet laughs.

Looking forward, Rollet aims to pursue a career in research and teaching. His immediate goal remains clear: develop research that meaningfully impacts policy, by shedding light on how cities can overcome constraints and evolve in ways that better serve their residents. He’s excited about how, in the future, more fine-grained and detailed data sources could shed light on how micro behavior can lead to macro outcomes.

"Housing and cities — these markets are failing in important ways in many parts of the world. There’s real potential for policy to improve things.”


MIT’s Mason Estrada to sign with the Los Angeles Dodgers

The star pitcher has been studying aerospace engineering at MIT. Now his pitches, and career, will take flight in professional baseball.


Like almost any MIT student, Mason Estrada wants to take what he learned on campus and apply it to the working world.

Unlike any other MIT student, Estrada will soon be going to work on a pitcher’s mound, and someday Dodger Stadium might be his office.

Estrada, the star pitcher for MIT’s baseball team, is signing a contract with the Los Angeles Dodgers organization, after the team selected him in the seventh round of the Major League Baseball draft on July 14. The right-hander, whose stellar stuff earned significant attention from MLB scouts, will be reporting soon to the Dodgers’ instructional camp in Arizona.

“I’m definitely excited,” says Estrada, who was projected as a likely draft pick but did not know he would be selected by the Dodgers, Major League Baseball’s defending champions.

From the outside, MIT might seem like an atypical starting point for a pitching career, but it has helped Estrada in multiple ways: by providing a strong baseball program in itself, and, more subtly, by reinforcing the value of systematic improvement, at a time when baseball pitching increasingly resembles, well, engineering.

On the first count, Estrada praises his MIT coaches and teammates for the baseball environment they have helped provide.

“It was really awesome,” Estrada says about playing baseball at the Institute. “I was surrounded by a bunch of guys that wanted to win. There was a great team culture of grinding and working hard.”

Meanwhile, pitching in professional baseball more than ever involves “pitch design” or “pitch shaping.” For a decade now, major-league teams have used high-speed cameras to determine which pitches work best. In turn, pitchers are often reverse-engineering parts of their arsenals, by starting with the desired outcome, then finding the combination of velocity and movement to stymie hitters.

Into this setting, enter Estrada, an MIT aeronautics and astronautics major — although, he makes clear, pitching at MIT has never involved transferring aerodynamic knowledge from the classroom to the mound. Rather, what counts is using feedback and analysis to get better.

“It’s not necessarily based on the subject I was studying,” Estrada says. “It’s learning to think like an engineer generally, learning to think through problems the right way, and finding the best solution.”

This season, Estrada went 6-0 with a 2.21 ERA for MIT, striking out 66 while allowing a paltry 22 hits in 40 2/3 innings. There are additional numbers that hint at his potential: Estrada’s fastball has hit 96 miles per hour, and he throws two types of sliders, with velocity in the upper 80s while producing up to 2,700 rotations per minute, in line with big-league metrics.

On the mound, Estrada uses his lower body to generate significant drive toward the plate — “I have to rely on my strength,” he says. Pitchers who share elements of this approach include Spencer Strider of the Atlanta Braves, although, Estrada emphasizes, “Everybody at the professional level is different.”

MIT’s baseball coaches praise Estrada’s dedication to the sport.

“Mason’s work ethic is through the roof,” says Todd Carroll, MIT’s pitching coach and recruiting coordinator, now in his 13th season at the Institute. Carroll thinks Estrada’s fastball and sliders could translate well to the professional game. The forward drive of Estrada’s motion, Carroll also notes, means that when Estrada delivers a pitch, “It’s on a hitter quick.”

Carroll concurs that the engineering mindset on campus actively helps players improve over time.

“MIT students are problem-solvers,” he says. “MIT is a place where people can do that as well as anywhere in the world. When a pitcher here misses the strike zone, that’s a problem they want to solve.”

Inevitably, all the off-field work, analysis, and preparation is designed to let Estrada simply be himself on the diamond. For athletes, some parts of the brain are best put on pause when competing.

“In games, I’m just focused on getting the hitter out,” Estrada says. “I’m staying in the moment.”

As it happens, baseball’s relatively new world of pitch shaping and pitch design has been enabled by MIT-linked technology. The kind of high-speed video camera many teams use, the Edgertronic, is manufactured by Sanstreak Corp., founded by Mike Matter ’84, a graduate of what is now the Department of Electrical Engineering and Computer Science. If the camera name sounds familiar, it should: Matter named it in homage to Harold “Doc” Edgerton, the legendary MIT pioneer of high-speed photography, whom Matter counted as a mentor.

Estrada is the fifth MIT undergraduate selected in baseball’s draft, which dates to 1966, and the highest-drafted player in MIT history at 225th overall. The others are Alan Dopfel ’72, selected by the California Angels; Jason Szuminski ’00, drafted by the San Diego Padres; Austin Filiere ’18, picked by the Chicago Cubs; and David Hesslink ’17, chosen by the Seattle Mariners. Of those players, Szuminski reached the majors, with the Padres.

At least two major-league pitchers also earned MIT degrees after finishing long baseball careers: Chris Capuano MBA ’19, a former All-Star with the Brewers, who received his master’s degree in management as part of the MIT Sloan Fellows program, and Skip Lockwood SM ’83.

As a Dodger, Estrada joins an organization famed for great pitching: Since the team moved to Los Angeles in 1958, their star pitchers have included Sandy Koufax, Don Drysdale, Fernando Valenzuela, Orel Hershiser, and Clayton Kershaw.

Beyond that, the Dodgers are known for investing considerable resources in player development, staying on the leading edge of analytics while bulking up their staff in order to help players improve. They have won the World Series twice this decade, in 2020 and 2024.

Whatever happens on the diamond, Estrada wants to return to MIT to complete his degree. Before the draft, he had arranged to transfer temporarily to the University of Tennessee to play Division I baseball next season and then return to MIT as a student. Now that he is signing with the Dodgers, however, those plans are off.

As things now stand, Estrada is taking a leave of absence from the Institute while his professional career starts to unfold.

“I just want to be clear I’m very thankful to MIT and to the MIT baseball staff for all they’ve done,” Estrada says.

And now, campus experience in hand, Estrada is off to his very distinctive work environment. 


New tool gives anyone the ability to train a robot

MIT engineers designed a versatile interface that allows users to teach robots new skills in intuitive ways.


Teaching a robot new skills used to require coding expertise. But a new generation of robots could potentially learn from just about anyone.

Engineers are designing robotic helpers that can “learn from demonstration.” This more natural training strategy enables a person to lead a robot through a task, typically in one of three ways: via remote control, such as operating a joystick to remotely maneuver a robot; by physically moving the robot through the motions; or by performing the task themselves while the robot watches and mimics.

Learning-by-doing robots are usually trained with just one of these three demonstration approaches. But MIT engineers have now developed a three-in-one training interface that allows a robot to learn a task through any of the three training methods. The interface is in the form of a handheld, sensor-equipped tool that can attach to many common collaborative robotic arms. A person can use the attachment to teach a robot to carry out a task by remotely controlling the robot, physically manipulating it, or demonstrating the task themselves — whichever style they prefer or best suits the task at hand.

The MIT team tested the new tool, which they call a “versatile demonstration interface,” on a standard collaborative robotic arm. Volunteers with manufacturing expertise used the interface to perform two manual tasks that are commonly carried out on factory floors.

The researchers say the new interface offers increased training flexibility that could expand the type of users and “teachers” who interact with robots. It may also enable robots to learn a wider set of skills. For instance, a person could remotely train a robot to handle toxic substances, while further down the production line another person could physically move the robot through the motions of boxing up a product, and at the end of the line, someone else could use the attachment to draw a company logo as the robot watches and learns to do the same.

“We are trying to create highly intelligent and skilled teammates that can effectively work with humans to get complex work done,” says Mike Hagenow, a postdoc at MIT in the Department of Aeronautics and Astronautics. “We believe flexible demonstration tools can help far beyond the manufacturing floor, in other domains where we hope to see increased robot adoption, such as home or caregiving settings.”

Hagenow will present a paper detailing the new interface at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in October. The paper’s MIT co-authors are Dimosthenis Kontogiorgos, a postdoc at the MIT Computer Science and Artificial Intelligence Lab (CSAIL); Yanwei Wang PhD ’25, who recently earned a doctorate in electrical engineering and computer science; and Julie Shah, MIT professor and head of the Department of Aeronautics and Astronautics.

Training together

Shah’s group at MIT designs robots that can work alongside humans in the workplace, in hospitals, and at home. A main focus of her research is developing systems that enable people to teach robots new tasks or skills “on the job,” as it were. Such systems would, for instance, help a factory floor worker quickly and naturally adjust how a robot performs a task in the moment, rather than pausing to reprogram the robot’s software from scratch — a skill that a worker may not necessarily have.

The team’s new work builds on an emerging strategy in robot learning called “learning from demonstration,” or LfD, in which robots are designed to be trained in more natural, intuitive ways. In looking through the LfD literature, Hagenow and Shah found LfD training methods developed so far fall generally into the three main categories of teleoperation, kinesthetic training, and natural teaching.

One training method may work better than the other two for a particular person or task. Shah and Hagenow wondered whether they could design a tool that combines all three methods to enable a robot to learn more tasks from more people.

“If we could bring together these three different ways someone might want to interact with a robot, it may bring benefits for different tasks and different people,” Hagenow says.

Tasks at hand

With that goal in mind, the team engineered a new versatile demonstration interface (VDI). The interface is a handheld attachment that can fit onto the arm of a typical collaborative robotic arm. The attachment is equipped with a camera and markers that track the tool’s position and movements over time, along with force sensors to measure the amount of pressure applied during a given task.

When the interface is attached to a robot, the entire robot can be controlled remotely, and the interface’s camera records the robot’s movements, which the robot can use as training data to learn the task on its own. Similarly, a person can physically move the robot through a task with the interface attached. The VDI can also be detached and physically held by a person to perform the desired task. The camera records the VDI’s motions, which the robot can also use to mimic the task when the VDI is reattached.
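As a purely hypothetical illustration (the names and fields below are ours, not the team’s software), the unifying idea is that all three training modes can reduce to the same kind of recorded trajectory:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class DemoMode(Enum):
    # The three learning-from-demonstration styles the article describes.
    TELEOPERATION = auto()  # user drives the robot remotely
    KINESTHETIC = auto()    # user physically guides the robot arm
    NATURAL = auto()        # user detaches the tool and performs the task

@dataclass
class DemoFrame:
    t: float      # timestamp in seconds
    pose: tuple   # tool position/orientation, tracked by camera and markers
    force: tuple  # contact forces from the tool's force sensors

@dataclass
class Demonstration:
    mode: DemoMode
    frames: list = field(default_factory=list)

    def record(self, t, pose, force):
        self.frames.append(DemoFrame(t, pose, force))

# Whichever mode the teacher chooses, the robot receives the same kind of
# (pose, force) trajectory to learn from.
demo = Demonstration(DemoMode.NATURAL)
demo.record(0.0, (0.10, 0.20, 0.30, 0.0, 0.0, 0.0), (0.0, 0.0, 1.5))
print(len(demo.frames), demo.mode.name)
```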

To test the attachment’s usability, the team brought the interface, along with a collaborative robotic arm, to a local innovation center where manufacturing experts learn about and test technology that can improve factory-floor processes. The researchers set up an experiment where they asked volunteers at the center to use the robot and all three of the interface’s training methods to complete two common manufacturing tasks: press-fitting and molding. In press-fitting, the user trained the robot to press and fit pegs into holes, similar to many fastening tasks. For molding, a volunteer trained the robot to push and roll a rubbery, dough-like substance evenly around the surface of a center rod, similar to some thermomolding tasks.

For each of the two tasks, the volunteers were asked to use each of the three training methods, first teleoperating the robot using a joystick, then kinesthetically manipulating the robot, and finally, detaching the robot’s attachment and using it to “naturally” perform the task as the robot recorded the attachment’s force and movements.

The researchers found the volunteers generally preferred the natural method over teleoperation and kinesthetic training. The users, who were all experts in manufacturing, did offer scenarios in which each method might have advantages over the others. Teleoperation, for instance, may be preferable in training a robot to handle hazardous or toxic substances. Kinesthetic training could help workers adjust the positioning of a robot that is tasked with moving heavy packages. And natural teaching could be beneficial in demonstrating tasks that involve delicate and precise maneuvers.

“We imagine using our demonstration interface in flexible manufacturing environments where one robot might assist across a range of tasks that benefit from specific types of demonstrations,” says Hagenow, who plans to refine the attachment’s design based on user feedback and will use the new design to test robot learning. “We view this study as demonstrating how greater flexibility in collaborative robots can be achieved through interfaces that expand the ways that end-users interact with robots during teaching.”

This work was supported, in part, by the MIT Postdoctoral Fellowship Program for Engineering Excellence and the Wallenberg Foundation Postdoctoral Research Fellowship.


This “smart coach” helps LLMs switch between text and code

The CodeSteer system could boost large language models’ accuracy when solving complex problems, such as scheduling shipments in a supply chain.


Large language models (LLMs) excel at using textual reasoning to understand the context of a document and provide a logical answer about its contents. But these same LLMs often struggle to correctly answer even the simplest math problems.

Textual reasoning is usually a less-than-ideal way to deliberate over computational or algorithmic tasks. While some LLMs can generate code in languages like Python to handle symbolic queries, the models don’t always know when to use code, or what kind of code would work best.

LLMs, it seems, may need a coach to steer them toward the best technique.

Enter CodeSteer, a smart assistant developed by MIT researchers that guides an LLM to switch between code and text generation until it correctly answers a query.

CodeSteer, itself a smaller LLM, automatically generates a series of prompts to iteratively steer a larger LLM. It reviews the model’s current and previous answers after each round and provides guidance for how it can fix or refine that solution until it deems the answer correct.

The researchers found that augmenting a larger LLM with CodeSteer boosted its accuracy on symbolic tasks, like multiplying numbers, playing Sudoku, and stacking blocks, by more than 30 percent. It also enabled less sophisticated models to outperform more advanced models with enhanced reasoning skills.

This advance could improve the problem-solving capabilities of LLMs for complex tasks that are especially difficult to solve with textual reasoning alone, such as generating paths for robots in uncertain environments or scheduling shipments in an international supply chain.

“There is a race to develop better and better models that are capable of doing everything, but we’ve taken a complementary approach. Researchers have spent years developing effective technologies and tools to tackle problems in many domains. We want to enable LLMs to select the right tools and methods, and make use of others’ expertise to enhance their own capabilities,” says Chuchu Fan, an associate professor of aeronautics and astronautics (AeroAstro) and principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).

Fan, the senior author of the study, is joined on a paper about the work by LIDS graduate student Yongchao Chen; AeroAstro graduate student Yilun Hao; University of Illinois at Urbana-Champaign graduate student Yueying Liu; and MIT-IBM Watson AI Lab Research Scientist Yang Zhang. The research will be presented at the International Conference on Machine Learning.

An LLM “trainer”  

Ask an LLM which number is bigger, 9.11 or 9.9, and it will often give the wrong answer by using textual reasoning. But ask it to use code to answer the same question, and it can generate and execute a Python script to compare the two numbers, easily solving the problem.
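The generated program can be just a couple of lines. For instance, a script along these lines (our illustration of the kind of code an LLM might emit) settles the comparison exactly:

```python
# The sort of throwaway script an LLM can generate for a symbolic query.
a, b = 9.11, 9.9
print(f"{a} is bigger" if a > b else f"{b} is bigger")  # prints "9.9 is bigger"
```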

Initially trained to understand and predict human language, LLMs are more likely to answer queries using text, even when code would be more effective. And while they have learned to generate code through fine-tuning, these models often generate an incorrect or less efficient version of the code.

Rather than trying to retrain a powerful LLM like GPT-4 or Claude to improve these capabilities, the MIT researchers fine-tune a smaller, lightweight LLM to guide a larger model between text and code. Fine-tuning a smaller model doesn’t change the larger LLM, so there is no risk it would undermine the larger model’s other abilities.

“We were also inspired by humans. In sports, a trainer may not be better than the star athlete on the team, but the trainer can still give helpful suggestions to guide the athlete. This steering method works for LLMs, too,” Chen says.

This trainer, CodeSteer, works in conjunction with the larger LLM. It first reviews a query and determines whether text or code is suitable for this problem, and which sort of code would be best.

Then it generates a prompt for the larger LLM, telling it to use a coding method or textual reasoning to answer the query. The larger model follows this prompt to answer the query and sends the result back to CodeSteer, which reviews it.

If the answer is not correct, CodeSteer will continue prompting the LLM to try different things that might fix the problem, such as incorporating a search algorithm or constraint into its Python code, until the answer is correct.

“We found that oftentimes, the larger LLM will try to be lazy and use a shorter, less efficient code that will not carry the correct symbolic calculation. We’ve designed CodeSteer to avoid this phenomenon,” Chen says.

A symbolic checker evaluates the code’s complexity and sends a signal to CodeSteer if it is too simple or inefficient. The researchers also incorporate a self-answer checker into CodeSteer, which prompts the LLM to generate code that calculates the answer to verify it is correct.
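Taken together, the description above amounts to a simple control loop. The sketch below is schematic only, with toy stand-ins (an assumed round budget, string-matching checks) for what are in reality fine-tuned models and learned checkers:

```python
MAX_ROUNDS = 8  # steering budget (an assumed value, not from the paper)

def codesteer_solve(llm, steer, query):
    """Schematic CodeSteer loop. `llm` is the large model and `steer` the
    small steering model; both are callables mapping a prompt to a reply."""
    history = []
    # Round 0: the steering model picks a method (text, or code of some kind).
    prompt = steer(f"Choose text or code (and what kind of code) for: {query}")
    for _ in range(MAX_ROUNDS):
        answer = llm(prompt)
        history.append(answer)
        # Symbolic checker (toy heuristic): flag code too simple to be doing
        # the real symbolic calculation, and ask for stronger code.
        if answer.startswith("code:") and len(answer) < 40:
            prompt = steer(f"The code was too simplistic; strengthen it for: {query}")
            continue
        # Self-answer checker: ask the large model to verify its result with code.
        verdict = llm(f"Generate and run code to verify this answer to "
                      f"'{query}': {answer}")
        if "correct" in verdict.lower():  # toy stand-in for the real check
            return answer
        # Otherwise refine the guidance, e.g. suggesting a search algorithm
        # or extra constraints in the Python code.
        prompt = steer(f"Attempts {history} failed verification. "
                       f"Suggest a revised method for: {query}")
    return history[-1]  # best effort once the budget is exhausted

# Toy usage with stand-in "models" (real use would wrap actual LLM calls):
stub = lambda p: "correct: 42"
print(codesteer_solve(stub, stub, "toy query"))
```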

Tackling complex tasks

As the researchers designed CodeSteer, they couldn’t find suitable symbolic datasets to fine-tune and test the model, since many existing benchmarks don’t point out whether a certain query could be best solved with text or code.

So, they gathered a corpus of 37 complex symbolic tasks, including spatial reasoning, mathematics, order reasoning, and optimization, and built their own dataset, called SymBench. They implemented a fine-tuning approach that leverages SymBench to maximize the performance of CodeSteer.

In their experiments, CodeSteer outperformed all nine baseline methods they evaluated and boosted average accuracy from 53.3 percent to 86.4 percent. It maintains similar performance even on unseen tasks and across a variety of LLMs.

In addition, a general-purpose model augmented with CodeSteer can achieve higher accuracy than state-of-the-art models designed to focus on complex reasoning and planning, while requiring much less computation.

“Our method uses an LLM’s own capabilities. By augmenting an LLM with the ability to smartly use coding, we can take a model that is already very strong and improve its performance even more,” Chen says.

In the future, the researchers want to streamline CodeSteer to speed up its iterative prompting process. In addition, they are studying how to effectively fine-tune a unified model with the ability to switch between textual reasoning and code generation, rather than relying on a separate assistant.

“The authors present an elegant solution to the critical challenge of tool utilization in LLMs. This simple yet impactful method enables state-of-the-art LLMs to achieve significant performance improvements without requiring direct fine-tuning,” says Jinsung Yoon, a staff research scientist at Google Cloud AI, who was not involved with this work. “This research represents a substantial contribution that promises to significantly enhance the application of LLMs to a diverse range of tasks with which they currently struggle.”

“Their success in training a smaller, specialized model to strategically guide larger, advanced models is particularly impactful,” adds Chi Wang, a senior staff scientist at Google DeepMind who was not involved with this work. “This intelligent collaboration among diverse AI ‘agents’ paves the way for more robust and versatile applications in complex real-world scenarios.”

This research is supported, in part, by the U.S. Office of Naval Research and the MIT-IBM Watson AI Lab.


Can AI really code? Study maps the roadblocks to autonomous software engineering

A team of researchers has mapped the challenges of AI in software development, and outlined a research agenda to move the field forward.


Imagine a future where artificial intelligence quietly shoulders the drudgery of software development: refactoring tangled code, migrating legacy systems, and hunting down race conditions, so that human engineers can devote themselves to architecture, design, and the genuinely novel problems still beyond a machine’s reach. Recent advances appear to have nudged that future tantalizingly close, but a new paper by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and several collaborating institutions argues that this potential future reality demands a hard look at present-day challenges. 

Titled “Challenges and Paths Towards AI for Software Engineering,” the work maps the many software-engineering tasks beyond code generation, identifies current bottlenecks, and highlights research directions to overcome them, aiming to let humans focus on high-level design while routine work is automated. 

“Everyone is talking about how we don’t need programmers anymore, and there’s all this automation now available,” says Armando Solar-Lezama, MIT professor of electrical engineering and computer science, CSAIL principal investigator, and senior author of the study. “On the one hand, the field has made tremendous progress. We have tools that are way more powerful than any we’ve seen before. But there’s also a long way to go toward really getting the full promise of automation that we would expect.”

Solar-Lezama argues that popular narratives often shrink software engineering to “the undergrad programming part: someone hands you a spec for a little function and you implement it, or solving LeetCode-style programming interviews.” Real practice is far broader. It includes everyday refactors that polish design, plus sweeping migrations that move millions of lines from COBOL to Java and reshape entire businesses. It requires nonstop testing and analysis — fuzzing, property-based testing, and other methods — to catch concurrency bugs or patch zero-day flaws. And it involves the maintenance grind: documenting decade-old code, summarizing change histories for new teammates, and reviewing pull requests for style, performance, and security.

Industry-scale code optimization — think re-tuning GPU kernels or the relentless, multi-layered refinements behind Chrome’s V8 engine — remains stubbornly hard to evaluate. Today’s headline metrics were designed for short, self-contained problems, and while multiple-choice tests still dominate natural-language research, they were never the norm in AI-for-code. The field’s de facto yardstick, SWE-Bench, simply asks a model to patch a GitHub issue: useful, but still akin to the “undergrad programming exercise” paradigm. It touches only a few hundred lines of code, risks data leakage from public repositories, and ignores other real-world contexts — AI-assisted refactors, human–AI pair programming, or performance-critical rewrites that span millions of lines. Until benchmarks expand to capture those higher-stakes scenarios, measuring progress — and thus accelerating it — will remain an open challenge.

If measurement is one obstacle, human-machine communication is another. First author Alex Gu, an MIT graduate student in electrical engineering and computer science, sees today’s interaction as “a thin line of communication.” When he asks a system to generate code, he often receives a large, unstructured file and even a set of unit tests, yet those tests tend to be superficial. This gap extends to the AI’s ability to effectively use the wider suite of software engineering tools, from debuggers to static analyzers, that humans rely on for precise control and deeper understanding. “I don’t really have much control over what the model writes,” he says. “Without a channel for the AI to expose its own confidence — ‘this part’s correct … this part, maybe double-check’ — developers risk blindly trusting hallucinated logic that compiles but collapses in production. Another critical aspect is having the AI know when to defer to the user for clarification.”
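To make that missing channel concrete, here is a minimal sketch in Python of what confidence-annotated output might look like. The `AnnotatedSpan` type, its 0-to-1 confidence scale, and the `review_order` helper are our own hypothetical illustration, not an interface from the paper or any existing tool.

```python
from dataclasses import dataclass

@dataclass
class AnnotatedSpan:
    """A region of generated code plus the model's self-reported confidence."""
    code: str
    confidence: float                    # hypothetical 0.0-1.0 scale, not a real tool's API
    needs_clarification: bool = False    # model chooses to defer to the user

def review_order(spans):
    """Surface the least-confident spans first for human review."""
    return sorted(spans, key=lambda s: s.confidence)

# Usage: a reviewer sees the riskiest generated code first.
spans = [
    AnnotatedSpan("def parse_row(row): ...", confidence=0.92),
    AnnotatedSpan("def merge_accounts(a, b): ...", confidence=0.41,
                  needs_clarification=True),
]
for span in review_order(spans):
    print(f"{span.confidence:.2f}  defer={span.needs_clarification}  {span.code}")
```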

Scale compounds these difficulties. Current AI models struggle profoundly with large code bases, often spanning millions of lines. Foundation models learn from public GitHub, but “every company’s code base is kind of different and unique,” Gu says, making proprietary coding conventions and specification requirements fundamentally out of distribution. The result is code that “hallucinates”: it looks plausible yet calls non-existent functions, violates internal style rules, or fails continuous-integration pipelines, because it doesn’t align with a given company’s internal conventions, helper functions, or architectural patterns.

Models also often retrieve incorrectly, pulling up code with a similar name (syntax) rather than similar functionality and logic, which is what a model actually needs in order to write the function. “Standard retrieval techniques are very easily fooled by pieces of code that are doing the same thing but look different,” says Solar-Lezama.
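A toy illustration of the failure mode (ours, not the paper’s): the functions below are hypothetical, but they show how retrieval keyed on names or surface syntax can pair the wrong code together.

```python
# Two functions with very different surface syntax but identical behavior,
# and a third that shares a name with the first but differs in logic.

def total(xs):
    """Sum a list with an explicit loop."""
    acc = 0
    for x in xs:
        acc += x
    return acc

def fold_add(xs):
    """Same behavior as total(), but looks nothing like it."""
    from functools import reduce
    return reduce(lambda a, b: a + b, xs, 0)

def total_v2(xs):
    """Similar name to total(), but different logic (ignores negatives)."""
    return sum(x for x in xs if x >= 0)

# A retriever keyed on names would pair total() with total_v2(),
# even though fold_add() is the true behavioral match.
assert total([1, -2, 3]) == fold_add([1, -2, 3])    # same functionality
assert total([1, -2, 3]) != total_v2([1, -2, 3])    # similar name, different logic
```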

The authors note that since there is no silver bullet for these issues, they are calling instead for community-scale efforts: richer data that captures the process of developers writing code (for example, which code developers keep versus throw away, and how code gets refactored over time); shared evaluation suites that measure progress on refactor quality, bug-fix longevity, and migration correctness; and transparent tooling that lets models expose uncertainty and invite human steering rather than passive acceptance. Gu frames the agenda as a “call to action” for larger open-source collaborations that no single lab could muster alone. Solar-Lezama imagines incremental advances, “research results taking bites out of each one of these challenges separately,” that feed back into commercial tools and gradually move AI from autocomplete sidekick toward genuine engineering partner.

“Why does any of this matter? Software already underpins finance, transportation, health care, and the minutiae of daily life, and the human effort required to build and maintain it safely is becoming a bottleneck. An AI that can shoulder the grunt work — and do so without introducing hidden failures — would free developers to focus on creativity, strategy, and ethics,” says Gu. “But that future depends on acknowledging that code completion is the easy part; the hard part is everything else. Our goal isn’t to replace programmers. It’s to amplify them. When AI can tackle the tedious and the terrifying, human engineers can finally spend their time on what only humans can do.”

“With so many new works emerging in AI for coding, and the community often chasing the latest trends, it can be hard to step back and reflect on which problems are most important to tackle,” says Baptiste Rozière, an AI scientist at Mistral AI, who wasn’t involved in the paper. “I enjoyed reading this paper because it offers a clear overview of the key tasks and challenges in AI for software engineering. It also outlines promising directions for future research in the field.”

Gu and Solar-Lezama wrote the paper with University of California at Berkeley Professor Koushik Sen and PhD students Naman Jain and Manish Shetty, Cornell University Assistant Professor Kevin Ellis and PhD student Wen-Ding Li, Stanford University Assistant Professor Diyi Yang and PhD student Yijia Shao, and incoming Johns Hopkins University assistant professor Ziyang Li. Their work was supported, in part, by the National Science Foundation (NSF), SKY Lab industrial sponsors and affiliates, Intel Corp. through an NSF grant, and the Office of Naval Research.

The researchers are presenting their work at the International Conference on Machine Learning (ICML). 


What do we owe each other?

A new class teaches MIT students how to navigate a fast-changing world with a moral compass.


MIT equips students with the tools to advance science and engineering — but a new class aims to ensure they also develop their own values and learn how to navigate conflicting viewpoints.

Offered as a pilot this past spring, the multidisciplinary class 21.01 (Compass Course: Love, Death, and Taxes: How to Think — and Talk to Others — About Being Human) invites students to wrestle with difficult questions: What makes a genius? Who governs science? What do we owe each other?

The class is part of the Compass Initiative, which is led by faculty from across the MIT School of Humanities, Arts, and Social Sciences (SHASS). 

Lily L. Tsai, Ford Professor of Political Science and lead faculty for Compass, says the new course is meant to help students use the humanities and social sciences as their guide to thinking about the kind of humans they want to be and what kind of society they want to help create.

"At MIT, we're some of the people who are creating the technologies that are accelerating change and leading to more unpredictability in the world. We have a special responsibility to envision and reimagine a moral and civic education that enables people to navigate it," says Tsai.

The course is the result of a multi-year collaboration involving over 30 faculty from 19 departments, ranging from Philosophy and Literature to Brain and Cognitive Sciences and Electrical Engineering and Computer Science, all led by a core team of 14 faculty from SHASS and a student advisory board.

During its initial run in the spring, Compass followed an arc that began with students investigating questions of value. Early in the semester, students explored what makes a genius, using Beethoven's "Symphony No. 9" as a case study, accompanied by lectures from Emily Richmond Pollock, associate professor of music, and a podcast conversation with Larry Guth, professor of mathematics, and David Kaiser, professor of physics and science, technology, and society. 

Students then grappled with the concept of a merit-based society by digging into the example of the imperial Chinese civil service exam, guided by professor of history Tristan Brown. Next, they questioned what humans really know to be true by examining the universality of language through lectures by professor of linguistics Adam Albright, and the philosophy of truth and knowledge through lectures by professor of philosophy Alex Byrne.

The semester ended with challenging debates about what humans owe one another, including a class designed by Nobel laureate and professor of economics Esther Duflo on taxation and climate burdens. 

More than anything, Tsai says, she hopes that Compass prepares students to navigate dorm hallways, the family Thanksgiving table, or future labs or boardroom tables, and learn how to express opinions and actively listen to others with whom they may disagree — all without canceling one another. 

The class takes a "flipped classroom" approach: Students watch recorded lectures at home and come to class prepared for discussion and debate. Each section is co-taught by two faculty members, combining disciplines and perspectives.

Second-year mechanical engineering major Kayode Dada signed up because it fulfilled a communications-intensive requirement and offered cross-departmental exposure. But Compass ultimately became more than that to him. "College isn't just about learning science stuff — it's also about how we grow as people," he says. Dada was assigned to a section co-taught by Tsai and professor of literature Arthur Bahr. 

Forming a social contract

In the first week, students draft a Rousseau-inspired social compact and learn firsthand how to build a classroom community. "We knew these were deep topics," Dada says. "To get the most out of the class, we had to open up, respect each other, and keep conversations confidential."

One early exercise was especially impactful. After watching lectures by Ford Professor of Philosophy and Women’s and Gender Studies Sally Haslanger on value, students were asked to draw a map representing their values, with arrows pointing from ones that were more instrumental to ones that were fundamental.

At first, Dada felt stuck. Growing up in Kentucky, the son of a Nigerian immigrant who had dreamed of attending MIT himself, Dada had focused for years on gaining admission to the Institute. "I thought getting into MIT would make me feel fulfilled," he admits. "But once I got here, I realized the work alone wasn't enough."

The values exercise helped him reorient. He identified practicing Christianity, hard work, helping others, and contributing to society as central to his belief system. It also influenced his choices: he decided to volunteer at a robotics camp for kids in Louisville to share his MIT education with others.

Who governs science? 

Later in the semester, Dada was animatedly representing a figure whose views contradicted his own: James D. Watson, the Nobel Prize winner who co-discovered DNA's structure — and is also a controversial figure. 

That week, each student had been assigned a persona from a 1976 Cambridge City Council hearing debating recombinant DNA research. The class, designed by Associate Professor Robin Scheffler, was investigating the question: Who governs science — scientists, the government, those who fund research, or the public?

They revisited a real-life debate over recombinant DNA research and the dangers, from biological-weapons development to other public threats, that citizens of the time believed it posed when carried out in MIT and Harvard University labs. Pioneered in the 1970s, the technique involved splicing foreign genes into the E. coli bacterium. In the Compass classroom, students argued different sides from their personas: banning the research, moving labs outside city limits, or proceeding without government interference.

Dada notes how faculty intentionally seeded conflicting viewpoints. "It taught me how to negotiate with someone who has different values and come to a resolution that respects everyone involved," he says. "That's something I want to keep exploring."

When Dada closed his presentation with frantically Googled sentimental music piped unexpectedly from his phone, his classmates laughed in appreciation. The atmosphere was more intimate than academic — an ethos Tsai hoped to cultivate. "They really built intellectual relationships based on trust," she says. "There was a lot of laughter. They took joy in disagreeing and debating."

Changing opinions 

First-year student-athlete Shannon Cordle, who is majoring in mechanical engineering, didn't know what to expect from Compass. Since it was new, there were no student reviews. What stood out to her was the grading system: 15 percent of the final grade is based on a rubric each student created for themselves.

Cordle's goal was to become more comfortable expressing an opinion — even before she's fully formed it. "It's easy to stay quiet when you're unsure," she says. "Compass helped me practice speaking up and being willing to be wrong, because that's how you learn."

One week, the class debated whether a meritocracy creates a just society — an especially relevant topic at MIT, given its famously selective admissions process. 

Students picked their stance beforehand, and were then invited to change it as they gained more perspectives during the debate.

"This helps students grasp not only the flaws in another viewpoint, but also how to strengthen their arguments," Tsai says.

Cordle, who hopes to go into prosthetics, views her future field as representing the perfect balance between creativity and ethics. "The humanities challenge how we view our fields as scientists and engineers," she says.

A compass helps travelers find their way — but it's most useful when they need to reorient and change direction. In that spirit, Compass prepares students not just to ask big questions, but to keep asking — and keep adapting — as their lives and careers evolve.

“Bringing these unexpected class elements together with students and faculty generated magical alchemy — a kind of transformation that we didn't even know we could create,” Tsai says.

In addition to the class, the MIT Compass Podcast engages with these fundamental questions alongside guests from across the MIT schools of Science and Engineering. There are also plans to adapt the residential version of this class for online learners on MITx.

In addition to philanthropic support from MIT Corporation life member emeritus Ray Stata '57, the initiative is supported by the Office of the Vice Chancellor and the MIT Human Insight Collaborative's SHASS Education Innovation Fund, which promotes new, transformative educational approaches in SHASS fields.


How to more efficiently study complex treatment interactions

A new approach for testing multiple treatment combinations at once could help scientists develop drugs for cancer or genetic disorders.


MIT researchers have developed a new theoretical framework for studying the mechanisms of treatment interactions. Their approach allows scientists to efficiently estimate how combinations of treatments will affect a group of units, such as cells, enabling a researcher to perform fewer costly experiments while gathering more accurate data.

As an example, to study how interconnected genes affect cancer cell growth, a biologist might need to use a combination of treatments to target multiple genes at once. But because there could be billions of potential combinations for each round of the experiment, choosing a subset of combinations to test might bias the data their experiment generates. 

In contrast, the new framework considers the scenario where the user can efficiently design an unbiased experiment by assigning all treatments in parallel, and can control the outcome by adjusting the rate of each treatment.

The MIT researchers theoretically proved that their strategy is near-optimal within this framework, and performed a series of simulations to test it in a multiround experiment. Their method minimized the error rate in each instance.

This technique could someday help scientists better understand disease mechanisms and develop new medicines to treat cancer or genetic disorders.

“We’ve introduced a concept people can think more about as they study the optimal way to select combinatorial treatments at each round of an experiment. Our hope is this can someday be used to solve biologically relevant questions,” says graduate student Jiaqi Zhang, an Eric and Wendy Schmidt Center Fellow and co-lead author of a paper on this experimental design framework.

She is joined on the paper by co-lead author Divya Shyamal, an MIT undergraduate; and senior author Caroline Uhler, the Andrew and Erna Viterbi Professor of Engineering in EECS and the MIT Institute for Data, Systems, and Society (IDSS), who is also director of the Eric and Wendy Schmidt Center and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS). The research was recently presented at the International Conference on Machine Learning.

Simultaneous treatments

Treatments can interact with each other in complex ways. For instance, a scientist trying to determine whether a certain gene contributes to a particular disease symptom may have to target several genes simultaneously to study the effects.

To do this, scientists use what are known as combinatorial perturbations, where they apply multiple treatments at once to the same group of cells.

“Combinatorial perturbations will give you a high-level network of how different genes interact, which provides an understanding of how a cell functions,” Zhang explains.

Since genetic experiments are costly and time-consuming, the scientist aims to select the best subset of treatment combinations to test, which is a steep challenge due to the huge number of possibilities.

Picking a suboptimal subset can generate biased results by focusing only on combinations the user selected in advance.

The MIT researchers approached this problem differently by looking at a probabilistic framework. Instead of focusing on a selected subset, each unit randomly takes up combinations of treatments based on user-specified dosage levels for each treatment.

The user sets dosage levels based on the goal of their experiment — perhaps this scientist wants to study the effects of four different drugs on cell growth. The probabilistic approach generates less biased data because it does not restrict the experiment to a predetermined subset of treatments.

The dosage levels are like probabilities, and each cell receives a random combination of treatments. If the user sets a high dosage, it is more likely most of the cells will take up that treatment. A smaller subset of cells will take up that treatment if the dosage is low.
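As a rough sketch of that idea, the snippet below treats each dosage as an independent per-cell take-up probability; the Bernoulli assignment model and all names here are our illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_treatments(n_cells: int, dosages: np.ndarray) -> np.ndarray:
    """Each cell independently takes up each treatment with probability
    equal to that treatment's dosage level. Returns an
    (n_cells, n_treatments) 0/1 matrix of treatment combinations."""
    return (rng.random((n_cells, len(dosages))) < dosages).astype(int)

# Four drugs: a high dosage means most cells take that treatment up;
# a low dosage means only a small subset of cells do.
dosages = np.array([0.9, 0.5, 0.5, 0.1])
combos = assign_treatments(10_000, dosages)
print(combos.mean(axis=0))  # empirical take-up rates, close to the dosages
```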

“From there, the question is how do we design the dosages so that we can estimate the outcomes as accurately as possible? This is where our theory comes in,” Shyamal adds.

Their theoretical framework shows the best way to design these dosages so researchers can learn the most about the characteristic or trait they are studying.

After each round of the experiment, the user collects the results and feeds those back into the experimental framework. It will output the ideal dosage strategy for the next round, and so on, actively adapting the strategy over multiple rounds.
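A skeleton of that multiround loop might look like the following; `measure` and `update_dosages` are hypothetical stand-ins for the wet-lab readout and the paper’s dosage-selection strategy, which we do not reproduce here.

```python
def run_experiment(dosages, n_rounds, n_cells, measure, update_dosages):
    """Generic multiround loop: assign treatments, observe outcomes,
    and let the strategy propose the next round's dosages."""
    history = []
    for _ in range(n_rounds):
        combos = assign_treatments(n_cells, dosages)  # from the sketch above
        outcomes = measure(combos)                    # run the experimental round
        history.append((combos, outcomes))
        dosages = update_dosages(dosages, history)    # adapt for the next round
    return history
```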

Optimizing dosages, minimizing error

The researchers proved their theoretical approach generates optimal dosages, even when the dosage levels are affected by a limited supply of treatments or when noise in the experimental outcomes varies at each round.

In simulations, this new approach had the lowest error rate when comparing estimated and actual outcomes of multiround experiments, outperforming two baseline methods.

In the future, the researchers want to enhance their experimental framework to consider interference between units and the fact that certain treatments can lead to selection bias. They would also like to apply this technique in a real experimental setting.

“This is a new approach to a very interesting problem that is hard to solve. Now, with this new framework in hand, we can think more about the best way to design experiments for many different applications,” Zhang says.

This research is funded, in part, by the Advanced Undergraduate Research Opportunities Program at MIT, Apple, the National Institutes of Health, the Office of Naval Research, the Department of Energy, the Eric and Wendy Schmidt Center at the Broad Institute, and a Simons Investigator Award.


Connect or reject: Extensive rewiring builds binocular vision in the brain

A first-of-its-kind study in mice shows neurons add and shed synapses at a frenzied pace during development to integrate visual signals from the two eyes.


Scientists have long known that the brain’s visual system isn’t fully hardwired from the start — it becomes refined by what babies see — but the authors of a new MIT study still weren’t prepared for the degree of rewiring they observed when they took a first-ever look at the process in mice as it happened in real time.

As the researchers in The Picower Institute for Learning and Memory tracked hundreds of “spine” structures housing individual network connections, or “synapses,” on the dendrite branches of neurons in the visual cortex over 10 days, they saw that only 40 percent of the ones that started the process survived. Refining binocular vision (integrating input from both eyes) required numerous additions and removals of spines along the dendrites to establish an eventual set of connections.

Former graduate student Katya Tsimring led the study, published this month in Nature Communications, which the team says is the first in which scientists tracked the same connections all the way through the “critical period,” when binocular vision becomes refined.

“What Katya was able to do is to image the same dendrites on the same neurons repeatedly over 10 days in the same live mouse through a critical period of development, to ask, what happens to the synapses or spines on them?” says senior author Mriganka Sur, the Paul and Lilah Newton Professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences. “We were surprised by how much change there is.”

Extensive turnover

In the experiments, young mice watched as black-and-white gratings with lines of specific orientations and directions of movement drifted across their field of view. At the same time, the scientists observed both the structure and activity of the neurons’ main body (or “soma”) and of the spines along their dendrites. By tracking the structure of 793 dendritic spines on 14 neurons at roughly Day 1, Day 5 and Day 10 of the critical period, they could quantify the addition and loss of the spines, and therefore the synaptic connections they housed. And by tracking their activity at the same time, they could quantify the visual information the neurons received at each synaptic connection. For example, a spine might respond to one specific orientation or direction of grating, several orientations, or might not respond at all. Finally, by relating a spine’s structural changes across the critical period to its activity, they sought to uncover the process by which synaptic turnover refined binocular vision.

Structurally, the researchers saw that 32 percent of the spines evident on Day 1 were gone by Day 5, and that 24 percent of the spines apparent on Day 5 had been added since Day 1. The period between Day 5 and Day 10 showed similar turnover: 27 percent were eliminated, but 24 percent were added. Overall, only 40 percent of the spines seen on Day 1 were still there on Day 10.

Meanwhile, of the 13 tracked neurons that responded to visual stimuli, only four still responded on Day 10. The scientists don’t know for sure why the other nine stopped responding, at least to the stimuli they once responded to, but it’s likely they now served a different function.

What are the rules?

Having beheld this extensive wiring and rewiring, the scientists then asked what entitled some spines to survive over the 10-day critical period.

Previous studies have shown that the first inputs to reach binocular visual cortex neurons are from the “contralateral” eye on the opposite side of the head (so in the left hemisphere, the right eye’s inputs get there first), Sur says. These inputs drive a neuron’s soma to respond to specific visual properties such as the orientation of a line — for instance, a 45-degree diagonal. By the time the critical period starts, inputs from the “ipsilateral” eye on the same side of the head begin joining the race to visual cortex neurons, enabling some to become binocular.

It’s no accident that many visual cortex neurons are tuned to lines of different directions in the field of view, Sur says.

“The world is made up of oriented line segments,” Sur notes. “They may be long line segments; they may be short line segments. But the world is not just amorphous globs with hazy boundaries. Objects in the world — trees, the ground, horizons, blades of grass, tables, chairs — are bounded by little line segments.”

Because the researchers were tracking activity at the spines, they could see how often each spine was active and which orientation triggered that activity. As the data accumulated, they saw that spines were more likely to endure if (a) they were more active, and (b) they responded to the same orientation as the one the soma preferred. Notably, spines that responded to both eyes were more active than spines that responded to just one, meaning binocular spines were more likely to survive than non-binocular ones.

“This observation provides compelling evidence for the ‘use it or lose it’ hypothesis,” says Tsimring. “The more active a spine was, the more likely it was to be retained during development.”

The researchers also noticed another trend. Across the 10 days, clusters emerged along the dendrites in which neighboring spines were increasingly likely to be active at the same time. Other studies have shown that by clustering together, spines are able to combine their activity to be greater than they would be in isolation.

By these rules, over the course of the critical period, neurons apparently refined their role in binocular vision by selectively retaining inputs that reinforced their budding orientation preferences, both via their volume of activity (a synaptic property called “Hebbian plasticity”) and their correlation with their neighbors (a property called “heterosynaptic plasticity”). To confirm that these rules were enough to produce the outcomes they were seeing under the microscope, they built a computer model of a neuron, and indeed the model recapitulated the same trends as what they saw in the mice.
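As a toy sketch of how such retention rules might combine (a deliberate simplification of ours, not the authors’ model), a spine’s survival probability can be scored from its activity, its orientation match with the soma, and its correlation with neighboring spines:

```python
import random

random.seed(1)

def spine_survives(activity, matches_soma, neighbor_corr,
                   w_act=0.5, w_match=0.3, w_corr=0.2):
    """Toy retention rule: survival probability rises with activity
    (Hebbian-like), orientation match to the soma, and correlation with
    neighboring spines (heterosynaptic-like). All inputs are on a 0-1
    scale, and the weights are arbitrary illustration values, not
    parameters fitted to the study's data."""
    p = w_act * activity + w_match * matches_soma + w_corr * neighbor_corr
    return random.random() < p

# A highly active, soma-matched, neighbor-correlated spine usually survives;
# a quiet, mismatched, uncorrelated one usually does not.
print(spine_survives(activity=0.9, matches_soma=1.0, neighbor_corr=0.8))
print(spine_survives(activity=0.1, matches_soma=0.0, neighbor_corr=0.1))
```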

“Both mechanisms are necessary during the critical period to drive the turnover of spines that are misaligned to the soma and to neighboring spine pairs,” the researchers wrote, “which ultimately leads to refinement of [binocular] responses such as orientation matching between the two eyes.”

In addition to Tsimring and Sur, the paper’s other authors are Kyle Jenks, Claudia Cusseddu, Greggory Heller, Jacque Pak Kan Ip, and Julijana Gjorgjieva. Funding sources for the research came from the National Institutes of Health, The Picower Institute for Learning and Memory, and the Freedom Together Foundation.


Professor Emeritus Daniel Kleppner, highly influential atomic physicist, dies at 92

The “godfather of Bose-Einstein condensation” and MIT faculty member for 37 years led research into atomic, molecular, and optical physics that paved the way for GPS and quantum computing.


Daniel Kleppner, the Lester Wolfe Professor Emeritus of Physics at MIT whose work in experimental atomic physics made an immense mark on the field, died on June 16 at the age of 92, in Palo Alto, California.

Kleppner’s varied research examined the interactions of atoms with static electric and magnetic fields and radiation. His work included precision measurements with hydrogen masers, including the co-invention of the hydrogen maser atomic clock; research into the physics of Rydberg atoms and cavity quantum electrodynamics; and pioneering work in Bose-Einstein condensation (BEC).

Kleppner, who retired in 2003 after 37 years at MIT, was a highly literate and articulate scientist whose exacting research and communication skills helped set the direction of modern atomic, molecular, and optical (AMO) physics. From 1987 to 2000, he was associate director of the MIT Research Laboratory of Electronics (RLE), and served as interim director in 2001. He also co-founded the MIT-Harvard Center for Ultracold Atoms (CUA) in 2000, where he was co-director until 2006.

While he was never awarded a Nobel Prize, Kleppner's impact on the field of atomic physics and quantum optics, and his generous mentorship, enabled the Nobel achievements of many others. His patient and exacting pursuit of discovery yielded basic research insights that underpinned major achievements. His extensive research into the tiny atom provided the fundamental knowledge necessary for the huge: the eventual development of groundbreaking technologies such as the global positioning system (GPS), magnetic resonance imaging (MRI), and quantum computing.

“He was a leader in the department, and a leader in the American Physical Society,” says Wolfgang Ketterle, the John D. MacArthur Professor of Physics at MIT and a 2001 Nobel laureate. “He was a statesman of science. He was this eloquent person, this master of words who could express things in memorable ways, and at the same time he has this sense of humility.”

“Dan Kleppner was a giant in the area of AMO physics, and in science more broadly,” says John Doyle PhD ’91, Harvard Quantum Initiative co-director and Kleppner advisee who helped Kleppner create the Bose-Einstein condensate from atomic hydrogen. “Perhaps his most impactful legacy is leading a culture of respect and supportive community actions that all scientists in the area of AMO physics enjoy today. Not only did his science lay the path for current research directions, his kindness, erudition, and commitment to community — and community service — are now ever-expanding waves that guide AMO physics. He was a mentor and friend to me."

Kleppner’s daughter Sofie Kleppner notes: “People who worked on early lasers never imagined we would be scanning groceries at the checkout counter. When they developed the hydrogen maser, they were a bunch of nerdy people who really wanted to understand Einstein’s theory of relativity. This was the basis for GPS, this is how our flights run on time. Our dad was convinced that basic research today could lead to all sorts of valuable things down the road.”

Early life and career

Born in Manhattan on Dec. 16, 1932, Kleppner was the son of Vienna native and advertising agency founder Otto Kleppner, who wrote the best-selling book “Advertising Procedure.” His mother, Beatrice (Taub) Kleppner, grew up in New Jersey and was a graduate of Barnard College. She helped with Otto’s manuscripts. Daniel Kleppner was the second of three siblings; his brother, the late Adam Kleppner, was a professor of mathematics at the University of Maryland, and his sister, Susan Folkman, was a research psychologist at the University of California at Berkeley.

“As a teenager, I just liked building things,” Kleppner once said. “And that turned out to be very useful when I went on to become an experimental physicist. I had a crystal radio, so I could listen to the radio over earphones. And the thought that the signals were just coming out of the atmosphere, I remember thinking: totally remarkable. And actually, I still do. In fact, the idea of the electromagnetic field, although it’s very well understood in physics, always seems like a miracle to me.”

In high school, he was inspired by his physics teacher, Arthur Hussey, who allowed Kleppner to work all hours in the labs. “There was one time when the whole school was having a pep rally, and I wasn’t that interested in cheering football, so I stayed up and worked in the lab, and the high school principal noticed that I was in there and called me in and gave me a dressing down for lack of school spirit.”

He didn’t care. Hussey talked with Kleppner about quantum mechanics, and “that sort of put a bee in my bonnet on that,” and taught him a little calculus. “In those years, physics was extremely fashionable. These were the post-war years, and physicists were considered heroes for having brought the war to conclusion with the atom bomb, and … the development of radar.”

He knew by then that he was “destined to spend a life in physics,” he said in a video interview for InfiniteMIT. “It was an easy era to become delighted by physics, and I was.”

Studying physics at Williams College, he was drawn to Albert Einstein’s theory of general relativity. He built a programmable machine that he called a forerunner of cybernetics. Williams also instilled in him a lifelong love of literature, and he almost became an English major. However, he didn’t appreciate what he called the school fraternities’ “playboy” and “anti-intellectual” atmosphere, and worked to graduate early, finishing in three years, in 1953.

He deferred his acceptance to Harvard University to take up a Fulbright Fellowship at Cambridge University, where he met the young physicist Kenneth Smith, whose research involved atomic beam resonance. Smith introduced him to the book “Nuclear Moments,” by Harvard professor Norman Ramsey, and presented a proposal by Ramsey’s advisor, I.I. Rabi, who invented a technique that could make an atomic clock so precise “that you could see the effect of gravity on time that Einstein predicted,” said Kleppner.

“I found that utterly astonishing,” Kleppner noted. “The thought that gravity affects time: I had a hard time just visualizing that.”

When Kleppner wandered Harvard’s halls in 1955, he was excited to see a door with Ramsey’s name on it. He was interested in Ramsey’s research on molecular beam magnetic resonance, atomic clocks, and precision measurements. “Fortunately, I came along at a time when he had an opening in his research group,” Kleppner recalled.

A new atomic clock

As Kleppner’s advisor, Ramsey encouraged him to create a new type of atomic clock, believing that cesium and ammonia masers, a technology of amplified microwaves, were not precise enough to measure the effect of gravity on time.

Kleppner’s thesis was on using the concepts behind an ammonia maser to advance toward a hydrogen maser, which uses the natural microwave frequency of hydrogen atoms and amplifies it through stimulated emission of radiation. Kleppner discovered that coherent cesium atoms can bounce from properly prepared surfaces without losing their coherence.

After his 1959 PhD, Kleppner stayed on at Harvard, becoming an assistant professor in 1962.

Kleppner’s research on hydrogen led to a method to keep hydrogen atoms locked in a glass container for study over a longer period of time. The result, featuring hydrogen atoms bouncing within a microwave cavity, is used to stabilize the frequency of a clock to a precision better than one microsecond in a year.

In 1960, he and Ramsey successfully created a new atomic clock stable enough to confirm the minute effects of gravity on time, as predicted by Einstein’s theory of general relativity.

The current generation of optical clocks “are good enough to see the gravitational red shift for a few centimeters in height, so that’s quite extraordinary, and it’s had an extraordinary result,” said Kleppner. “We got to rethink just what we mean by time.”

While the hydrogen maser did verify Einstein’s conjecture about time and gravity, it took more than a decade before being widely used, at first by radio astronomers. Today, atomic clocks such as the hydrogen maser are used in applications requiring high short-term stability: synchronizing the ground-based timing systems that track global positioning satellites; timekeeping and communication at naval observatories, which maintain the precise and stable time reference known as UTC (USNO); very long-baseline interferometry (VLBI), which enables astronomers to achieve very high resolution and study distant radio sources, including black holes; and, indirectly, magnetic resonance imaging.

“When we first set out to make these atomic clocks, our goals were about the least practical you can think of,” Kleppner said in an interview with the MIT Physics Department. “From being a rather abstract idea that you’d like to somehow witness, it becomes a very urgent thing for the conduct of human affairs.”

Ramsey went on to win the Nobel Prize in Physics in 1989 for his work on the separated oscillatory fields method and its application in the hydrogen maser and atomic clocks.

MIT, ultracold gases, and BEC advancements

Kleppner figured he wouldn’t get tenure at Harvard, “because no matter how generous and good-spirited Norman was, he casts a long shadow, and it was good for me to be at just the right distance. When I came to MIT, I had a pallet of experiments that I wanted to pursue, and some ideas about teaching that I wanted to pursue, and the transition was very simple.”

Kleppner joined the Institute in 1966, and his Harvard PhD student (and current MIT professor post-tenure) David Pritchard followed him, to work on scattering experiments: Kleppner worked with pulsed lasers, and Pritchard with continuous-wave (CW) lasers.

“He was young, he was verbal, and he seemed to have new ideas about what to do,” says Pritchard. “We foresaw how important lasers would become. For a long time, it was just Dan and myself. That was actually the era in which lasers took over. Dan and I started off, we both got into lasers, and he did Rydberg atoms, and I did collisions and spectroscopy of weakly bound molecules and two-photon spectroscopy.”

Kleppner led the tiny MIT atomic physics group that, by 2012, had become U.S. News and World Report’s No. 1 nationally ranked atomic physics program. “Dan was the leader on this,” recalled Pritchard. “To start from non-tenure and build it into the number-one ranked department in your subfield, that’s a lifetime achievement.”

The group became what Pritchard called “the supergroup” of laser developers that included Charles Townes, who won the Nobel for his work; Ali Javan, who established a major laser research center at MIT; and Dolly Shibles. Pritchard joined the faculty in 1970, and Ketterle joined in 1990 as his postdoc. “We were pioneers, and the result was of course that our total group had a bigger impact.”

“He’s not just the father figure of the field, he is my scientific father,” says Pritchard. “When I’m writing something and it’s not going very well, I would sort of think to myself, ‘What would Dan say? What would he advise you?’”

With MIT low-temperature physicist Tom Greytak ’63, PhD ’67, Kleppner developed two revolutionary techniques — magnetic trapping and evaporative cooling. When the scientific community combined these techniques with laser cooling, atomic physics went into a major new direction.

In 1995, a group of researchers, led by Kleppner's former students Eric Cornell PhD ’90 and Carl Wieman ’73, made a BEC using rubidium atoms, and Ketterle succeeded with sodium atoms. For this achievement, they received the 2001 Nobel Prize in Physics. Kleppner called BEC “the most exciting advance in atomic physics for decades.”

At a conference on BEC in 1996, Ketterle recalls Kleppner describing his own contributions: “'I feel like Moses, who showed his people the Holy Land, but he never reached it himself.' This was exactly what Dan did. He showed us the Holy Land of Bose-Einstein condensation. He showed us what is possible … He was the godfather of Bose-Einstein condensation.”

But he did reach the Holy Land. In 1998, when only a few groups had been able to create BECs, Kleppner and Greytak realized a hydrogen BEC. When he presented their work at the summer school in Varenna soon afterward, he received a long-lasting standing ovation — after 20 years of hard work, he had reached his goal.

“It is an irony that when Dan started this work, hydrogen was the only choice to reach the low temperatures for BEC,” says Ketterle. But in the end, hydrogen turned out to have special properties that made it much harder to reach BEC than with other atoms.

Rydberg atoms

In 1976, Kleppner pioneered the field of Rydberg atoms: highly excited atoms that share the simple properties that characterize hydrogen. Kleppner showed that these states could be excited by a tunable laser and easily detected with field ionization. He then mapped out their response in high electric and magnetic fields, which he used to provide new physical insights into the connections between quantum mechanics and classical chaos.

In 1989, his research into atomic energy levels, under conditions where the corresponding classical motion is chaotic, mapped out the positions of thousands of quantum levels as a function of laser frequency and applied field using high-resolution laser spectroscopy. His observations gave new physical insight into the implications of classical chaos on quantum systems.

“I see Dan as being the inventor of Rydberg atoms,” says Dan’s former student William Phillips PhD ’76, physicist at the National Institute of Standards and Technology (NIST). “Of course, Rydberg atoms is something that nature gives you, but Dan was the one who really understood this was something that you could use to do really new and wonderful things.”

Such atoms have proved to be useful for studying the transition between quantum mechanics and classical chaos. Kleppner’s 1976 paper on Rydberg atoms’ strong interactions, long lifetimes, and sensitivity to external fields has inspired current scientific research and multimillion-dollar startups pursuing the promising Rydberg quantum computer, highly accurate measurements of electric and magnetic fields, and quantum optics experiments.

“Largely due to Dan’s seminal roadmap, Rydberg atoms have become atomic physics’ E. coli for investigating the interaction of radiation with matter,” wrote Ketterle in his nomination for Kleppner’s 2017 APS Medal for Exceptional Achievement in Research. “They are being used by others in quests for experimental systems to realize Schrödinger’s cat, as well as for making a quantum computer.”

In 1981, Kleppner suggested in a theoretical paper the possibility of suppressing spontaneous emission with a cavity: excited atoms cannot decay when the cavity lacks the oscillatory modes to receive their emissions. His subsequent demonstration of this effect launched the field of cavity quantum electrodynamics (cQED), the study of how light confined within a reflective cavity interacts with atoms or other particles. This field has led to the creation of new lasers and photonic devices.

“This work fundamentally changed the way physicists regard the process of spontaneous emission by showing that it is not a fixed property of a quantum state, but can be modified and controlled,” said Ketterle. “Current applications of these principles, which Dan terms ‘wrecking the vacuum,’ include thresholdless lasers and the construction of photonic bandgap materials in which light propagation is forbidden at certain frequencies.”

MIT-Harvard Center for Ultracold Atoms

In 2000, Kleppner secured National Science Foundation funding to co-found the Center for Ultracold Atoms (CUA), an MIT-Harvard collaboration that linked RLE with the Harvard Department of Physics to explore the physics of ultracold atoms and quantum gases. Kleppner served as its first director until 2006, and was a member of a group that included MIT professors Ketterle, Pritchard, Vladan Vuletic, Martin W. Zwierlein, Paola Cappellaro PhD ’06, and Isaac Chuang ’90.

“Many centers disappear after 10 to 20 years; sometimes their mission is fulfilled,” says Ketterle, the CUA director from 2006 to 2023. “But given the excitement and the rapid evolution in atomic physics, the CUA is a super-active center brimming with excitement, and we just recently got renewed. That’s partially due to the efforts of Dan. He created the tradition of atomic physics at MIT. We are one of the best atomic physics groups in the world. And we are really a family.”

Boost-phase intercept report

Kleppner co-authored a highly influential 2003 report that examined the technical feasibility of boost-phase intercept, a concept central to President Ronald Reagan’s controversial proposed Strategic Defense Initiative (SDI), nicknamed “Star Wars,” which purportedly would render nuclear weapons obsolete. The focus of the APS Study on Boost-Phase Intercept for National Missile Defense, published as a special supplement to Reviews of Modern Physics, was on the physics and engineering challenges of intercepting a missile during its boost phase.

“This was a subject on which I had no technical background at all,” Kleppner recalled, so he expressed gratitude for the skills of co-chair Fred Lamb of the University of Illinois. “But the APS [American Physical Society] felt that it was important to have information for the public … and no one knew anything about it. It was the point in my life where I could do that. And I feel that you have an obligation when the need arises and you can do it, to do that.”

The result? “Technically, it really would not succeed, except in very limited circumstances,” Kleppner said. Added Pritchard, “It vastly changed the path of the nation.”

“He was the perfect person to chair the committee,” says Ketterle. “He excelled in being neutral and unbiased, and to create a no-nonsense report. I think the APS was very proud of this report. It shows how physicists analyze something which was at that moment of immense political and societal importance. This report helped to understand what laser weapons cannot do and what they can do. The fact that (SDI) eventually, slowly, disappeared, the report may have contributed to that.”

Dedicated educator

Kleppner trained generations of physicists, including as advisor to 23 PhD students who have gone on to attain positions in major universities and achieve major scientific awards.

He was awarded the Oersted Medal of the American Association of Physics Teachers in 1997, and earned the Institute’s prestigious 1995-1996 James R. Killian, Jr. Faculty Achievement Award for his service to MIT and society on behalf of atomic physics. “He has given generously of his time and effort to the formation of national science policy, and he has served the Institute with distinction as teacher, administrator and counselor,” the Killian committee wrote.

Kleppner and Ramsey wrote the widely used text “Quick Calculus” in 1972; the book’s third edition, updated in 2022, was written with MIT Department of Physics’ Peter Dourmashkin. With Robert J. Kolenkow, Kleppner also wrote “An Introduction to Mechanics” in 1973, and its second edition in 2013. Physics department head Deepto Chakrabarty ’88 called it “a masterpiece”: “It has formed the foundation of our freshman 8.012 course for potential physics majors for over 50 years and has provided a deep, elegant, and mathematically sophisticated introduction to classical mechanics for physics majors across the U.S. It was my own introduction to serious physics as an MIT freshman in 1984.”

Recently, while Kleppner was being wheeled into surgery, one of the medical personnel noticed that his patient was the author of that book and blurted out, “Oh my God, I still am wondering about one of those problems that I found so difficult,” recalls his wife, Bea, laughing.

Kleppner called his method of teaching “an engagement with the students and with the subject.” He said that his role model for teaching was his wife, who taught psychology at Beaver Country Day High School. “Fortunately, at MIT, the students are so great. There’s nothing tough about teaching here, except trying to stay ahead of the students.”

He leaves a legacy of grateful physicists impacted by his generous teaching style.

“I’ve always felt that I’ve just been incredibly lucky to be part of Dan’s group,” says Phillips, who was at Princeton when his research into magnetic resonance caught Kleppner’s attention; Kleppner then invited him to MIT. “Dan extended this idea to putting this hydrogen maser in a much higher magnetic field. Not that many people are trained by somebody like Dan Kleppner in the art of precision measurement.”

Kleppner also gifted Phillips an apparatus he built for his thesis, which shaved years off the laser cooling experiments that led to Phillips’ Nobel.

Ketterle credited Kleppner’s mentorship for his career at MIT. “He was an older, experienced person who believed in me. He had more trust in me than I had initially myself. I felt whenever I was at a crossroads, I could go to Dan and ask him for advice. When I gave him a paper to edit … there was red ink all over it, but he was absolutely right on almost everything.”

In 2003, Kleppner was dismayed at the statistic that over 60 percent of middle and high school physics teachers had no background in the subject. He started the CUA’s Teaching Opportunities in Physical Science summer program with his former postdoc Ted Ducas to train physics majors to prepare and teach physics material to middle and high school students. Over its 14-year run, the program worked with 112 students.

According to Ducas, one survey “indicates over 60 percent of our undergraduates have gone into, or plan to go into, pre-college teaching — a higher percentage than expected, because physics majors have so many other career opportunities often paying significantly more. The potential positive impact of that number of highly qualified and motivated teachers is dramatic.”

Kleppner also partnered with Japanese mathematician Heisuke Hironaka on the mentoring program Japanese Association for Mathematical Sciences (JAMS), which connected American college science students with their Japanese counterparts. “His interest in ensuring that future generations also see the value of international communities was reflected in JAMS,” says Sofie Kleppner.

Recognitions and public service

Kleppner was promoted to professor in 1974 and headed the physics department’s Division of Atomic, Plasma and Condensed Matter Physics from 1976 to 1979. He was named the Lester Wolfe Professor of Physics in 1985.

Active in the interface between physics and public policy, he served on more than 30 committees. For the APS, he was on the Panel on Public Affairs (POPA), chaired the Physics Planning Committee and the Division of Atomic, Molecular and Optical Physics, and contributed to a study on the growth and mentorship of young physics professors. He chaired a report for the National Academy of Sciences on atomic physics that he presented to various congressional committees, served on the National Research Council's Physics Survey Committee, and was chair of the International Union of Pure and Applied Physics’ Commission on Atomic and Molecular Physics. At MIT, he was also an ombuds of the Physics Department.

Kleppner was a fellow of the American Academy of Arts and Sciences, American Association for the Advancement of Science, OSA (now Optica), French Academy of Sciences, and the American Philosophical Society; a member of the National Academy of Sciences; and a Phi Beta Kappa lecturer.

His interest in literature at Williams bloomed into a secondary career as a writer, including decades of writing witty and insightful, yet accessible, pieces for Physics Today, including his “Reference Frame” columns on physics history and policy.

Kleppner was a recipient of many awards, including the prestigious Wolf Prize in 2005 “for groundbreaking work in atomic physics of hydrogenic systems, including research on the hydrogen maser, Rydberg atoms, and Bose-Einstein condensation.” Other accolades include a 2014 Benjamin Franklin Medal and a 2006 National Medal of Science, presented by U.S. President George W. Bush. He also received the Frederic Ives Medal (2007), the William F. Meggers Award (1991), the Lilienfeld Prize (1991), and the Davisson-Germer Prize (1986).

His articles, congressional testimony, and advocating on behalf of physicists around the world at one point inspired his Physics Planning Committee colleagues to present him with a Little League trophy of a golden baseball player, with the inscription “Dan Kleppner — Who Went to Bat for Atomic Physics.”

Kleppner said that he was inspired by his mentor, Ramsey, to get involved in the scientific community. “It’s a privilege to be a scientist in this country,” said Kleppner. “And I think that one has some obligation to pay for the privilege, when you can.”

He wrote, “Any scenario for a decent future of our nation and the world must include a reasonable component of science that is devoted to the search for new knowledge. We cannot afford to abandon this vision under a barrage of criticism, no matter how eloquent or powerful the critics.”

Family and retired life

Kleppner met his future wife, Beatrice Spencer, in 1954 on the USS United States, when both were England-bound and in their second year of studying at Cambridge. They began as friends, and eventually married in 1958, in Ipswich, Massachusetts. They raised their three children, Sofie, Paul, and Andrew, at their home in Belmont, Massachusetts, and their vacation home in Vermont.

Kleppner’s family described him as an optimist who didn’t believe in lying, worrying, or unethical behavior. He and Bea generously invited into their home anyone in need. “When we were growing up, we had the international community in our house,” recalls Sofie. “He was just a tremendously generous person. At my father’s 80th birthday celebration at MIT, there were three hours of five-minute reminiscences. It was really moving to hear the number of people who felt that just having the open door at my parents’ house meant the difference to them as they went through difficult times.”

In his retirement, Kleppner continued with his woodworking projects, including building beds, lamps, cabinets, a beautiful spiral staircase, a cradle curved like the hull of a boat, and bookcases featuring superellipses, closed curves that blend elements of an ellipse and a rectangle.

“I enjoy designing,” he said in one video. “It’s the same instinct for making things work in experimental physics. It’s lovely to make a piece of apparatus that starts functioning, and even if the experiment doesn’t do what you want it to do. There’s always a lot of jubilation when the apparatus is first turned on and first works.”

His last article for Physics Today was in 2020. In his later years, he kept in touch with his colleagues, swapping book ideas with Ketterle’s wife, Michele Plott, and, since the Covid-19 pandemic, maintaining regular Zoom meetings with a group of his former students, hosted by Mike Kash, and with another group, dubbed “The Famous Physicists,” that included Phillips and their Brazilian colleague Vanderlei Bagnato.

“In recent years, I would still go to Dan for advice about difficult questions,” says Phillips, “sometimes about physics, sometimes just about life and public policy, because maybe I always felt that if there was anything you wanted done in which physics or science was part of the question that Dan would be the best person to do it.”

His family says that Kleppner suddenly fell ill at a Father’s Day dinner. According to his wife, his last words before being rushed to the hospital were a toast to his grandson, who recently graduated high school: “To Darwin and all youth who have new and exciting ideas.”

Says Bea, “​​He always said that you have to be optimistic to be a scientist, because you have to be patient. Things don’t work out and they’re fiddly, and there are lots of things that go wrong. His last words were ones that make you feel there’s hope for the future.”


Five MIT faculty elected to the National Academy of Sciences for 2025

Rodney Brooks, Parag Pathak, Scott Sheffield, Benjamin Weiss, Yukiko Yamashita, and 13 MIT alumni are recognized by their peers for their outstanding contributions to research.


The National Academy of Sciences (NAS) has elected 120 members and 30 international members, including five MIT faculty members and 13 MIT alumni. Professors Rodney Brooks, Parag Pathak, Scott Sheffield, Benjamin Weiss, and Yukiko Yamashita were elected in recognition of their “distinguished and continuing achievements in original research.” Membership in the National Academy of Sciences is one of the highest honors a scientist can receive.

Elected MIT alumni include: David Altshuler ’86, Rafael Camerini-Otero ’66, Kathleen Collins PhD ’92, George Daley PhD ’89, Scott Doney PhD ’91, John Doyle PhD ’91, Jonathan Ellman ’84, Shanhui Fan PhD ’97, Julia Greer ’97, Greg Lemke ’78, Stanley Perlman PhD ’72, David Reichman PhD ’97, and Risa Wechsler ’96. 

Those elected this year bring the total number of active members to 2,662, with 556 international members. The NAS is a private, nonprofit institution that was established under a congressional charter signed by President Abraham Lincoln in 1863. It recognizes achievement in science by election to membership, and — with the National Academy of Engineering and the National Academy of Medicine — provides science, engineering, and health policy advice to the federal government and other organizations.

Rodney Brooks

Rodney A. Brooks is the Panasonic Professor of Robotics Emeritus at MIT and the chief technical officer and co-founder of Robust AI. Previously, he was founder, chair, and CTO of Rethink Robotics and founder and CTO of iRobot Corp. He is also the former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science and Artificial Intelligence Laboratory. Brooks received degrees in pure mathematics from the Flinders University of South Australia and a PhD in computer science from Stanford University in 1981. He held research positions at Carnegie Mellon University and MIT, and a faculty position at Stanford before joining the faculty of MIT in 1984.

Brooks’ research is concerned with both the engineering of intelligent robots to operate in unstructured environments, and with understanding human intelligence through building humanoid robots. He has published papers and books in model-based computer vision, path planning, uncertainty analysis, robot assembly, active vision, autonomous robots, micro-robots, micro-actuators, planetary exploration, representation, artificial life, humanoid robots, and compiler design.

Brooks is a member of the National Academy of Engineering, a founding fellow of the Association for the Advancement of Artificial Intelligence, a fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the Association for Computing Machinery, a foreign fellow of The Australian Academy of Technological Sciences and Engineering, and a corresponding member of the Australian Academy of Science. He won the Computers and Thought Award at the 1991 International Joint Conference on Artificial Intelligence, and the IEEE Founders Medal in 2023.

Parag Pathak

Parag Pathak is the Class of 1922 Professor of Economics and a founder and director of MIT’s Blueprint Labs. He joined the MIT faculty in 2008 after completing his PhD in business economics and his master’s and bachelor’s degrees in applied mathematics, all at Harvard University.

Pathak is best known for his work on market design and education. His research has informed student placement and school choice mechanisms across the United States, including in Boston, New York City, Chicago, and Washington, and his recent work applies ideas from market design to the rationing of vital medical resources. Pathak has also authored leading studies on school quality, charter schools, and affirmative action. In urban economics, he has measured the effects of foreclosures on house prices and how the housing market reacted to the end of rent control in Cambridge, Massachusetts.
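The canonical algorithm behind many of these school-choice mechanisms is Gale-Shapley deferred acceptance, which Pathak and his collaborators helped adapt for districts such as Boston and New York. A minimal student-proposing sketch appears below; the students, schools, and variable names are invented for illustration, and real deployments involve priorities, tie-breaking, and many other details.

```python
def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs: dict student -> list of schools, most preferred first
    school_prefs:  dict school -> list of students, most preferred first
    capacity:      dict school -> number of seats
    Returns a stable matching: dict school -> list of admitted students.
    """
    rank = {s: {st: i for i, st in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}   # next school to try
    held = {s: [] for s in school_prefs}            # tentative admits
    free = list(student_prefs)

    while free:
        st = free.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue                                 # exhausted all choices
        school = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        # The school keeps only its highest-ranked students up to capacity;
        # everyone displaced is freed to propose to their next choice.
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacity[school]:
            free.append(held[school].pop())
    return held

# Usage: two schools with one seat each, three students (toy instance).
match = deferred_acceptance(
    {"ana": ["north", "south"], "ben": ["north", "south"],
     "eva": ["south", "north"]},
    {"north": ["ben", "ana", "eva"], "south": ["ana", "eva", "ben"]},
    {"north": 1, "south": 1},
)
print(match)  # {'north': ['ben'], 'south': ['ana']}; eva is unmatched here
```

The key property, which underlies the mechanism’s use in practice, is that no student and school would both prefer each other over their assigned match.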

Pathak’s research on market design was recognized with the 2018 John Bates Clark Medal, given by the American Economic Association to the economist under 40 whose work is judged to have made the most significant contribution to the field. He is a fellow of the American Academy of Arts and Sciences, the Econometric Society, and the Society for the Advancement of Economic Theory. Pathak is also the founding co-director of the market design working group at the National Bureau of Economic Research, and a co-founder of Avela Education.

Scott Sheffield

Scott Sheffield, Leighton Family Professor of Mathematics, joined the MIT faculty in 2008 after a faculty appointment at the Courant Institute at New York University. He received a PhD in mathematics from Stanford University in 2003 under the supervision of Amir Dembo, and completed BA and MA degrees in mathematics from Harvard University in 1998.

Sheffield is a probability theorist, working on geometrical questions that arise in such areas as statistical physics, game theory, and metric spaces, as well as long-standing problems in percolation theory and the theory of random surfaces.

In 2017, Sheffield received the Clay Research Award with Jason Miller, “in recognition of their groundbreaking and conceptually novel work on the geometry of the Gaussian free field and its application to the solution of open problems in the theory of two-dimensional random structures.” In 2023, he received the Leonard Eisenbud Prize with Jason Miller “for works on random two-dimensional geometries, and in particular on Liouville Quantum Gravity.” Later in 2023, Sheffield received the Frontiers of Science Award with Jason Miller for the paper “Liouville quantum gravity and the Brownian map I: the QLE(8/3,0) metric.” Sheffield is a fellow of the American Academy of Arts and Sciences.

Benjamin Weiss

Benjamin Weiss is the Robert R. Schrock Professor of Earth and Planetary Sciences. He studied physics at Amherst College as an undergraduate and went on to study planetary science and geology at Caltech, where he earned a master’s degree in 2001 and a PhD in 2003. Weiss’ doctoral dissertation on Martian meteorite ALH 84001 revealed records of the ancient Martian climate and magnetic field, and provided evidence that some meteorites could transfer materials from Mars to Earth without heat-sterilization. Weiss became a member of the Department of Earth, Atmospheric and Planetary Sciences faculty in 2004 and is currently chair of the Program in Planetary Science.

A specialist in magnetometry, Weiss seeks to understand the formation and evolution of the Earth, terrestrial planets, and small solar system bodies through laboratory analysis, spacecraft observations, and fieldwork. He is known for key insights into the history of our solar system, including discoveries about the early nebular magnetic field, the moon’s long-lived core dynamo, and asteroids that generated core dynamos in the past. In addition to leadership roles on current, active NASA missions — as deputy principal investigator for Psyche, and co-investigator for Mars Perseverance and Europa Clipper — Weiss has also been part of science teams for the SpaceIL Beresheet, JAXA Hayabusa 2, and ESA Rosetta spacecraft.

As principal investigator of the MIT Planetary Magnetism Laboratory, Weiss works to develop high-sensitivity, high-resolution techniques in magnetic microscopy to image the magnetic fields embedded in rock samples collected from meteorites, the lunar surface, and sites around the Earth. Studying these magnetic signatures can help answer questions about the conditions of the early solar system, past climates on Earth and Mars, and factors that promote habitability.

Yukiko Yamashita

Yukiko Yamashita is a professor of biology at MIT, a core member of the Whitehead Institute for Biomedical Research, and an investigator at the Howard Hughes Medical Institute (HHMI). Yamashita earned her BS in biology in 1994 and her PhD in biophysics in 1999 from Kyoto University. From 2001 to 2006, she did postdoctoral research at Stanford University. She was appointed to the University of Michigan faculty in 2007 and was named an HHMI Investigator in 2014. She became a member of the Whitehead Institute and a professor of biology at MIT in 2020.

Yukiko Yamashita studies two fundamental aspects of multicellular organisms: how cell fates are diversified via asymmetric cell division, and how genetic information is transmitted through generations via the germline.

These two remarkable feats of multicellular organisms — generating many distinct cell types via asymmetric cell division, and transmitting the germline genome to the next generation, essentially for eternity — have led her lab, which uses the Drosophila male germline as a model system, into new areas of study, such as the functions of satellite DNA, so-called “genomic junk,” and how it might be involved in speciation.

Yamashita is a member of the American Academy of Arts and Sciences, a fellow of the American Society for Cell Biology, and the winner of the Tsuneko and Reiji Okazaki Award in 2016. She was named a MacArthur Fellow in 2011.


Scientists discover compounds that help cells fight a wide range of viruses

The molecules trigger a built-in cellular stress response and show promise as broad-spectrum antivirals against Zika, herpes, RSV, and more.


Researchers at MIT and other institutions have identified compounds that can fight off viral infection by activating a defense pathway inside host cells. These compounds, they believe, could be used as antiviral drugs that work against not just one but any kind of virus.

The researchers identified these compounds, which activate a host cell defense system known as the integrated stress response pathway, in a screen of nearly 400,000 molecules. In tests in human cells, the researchers showed that the compounds help cells fend off infection from RSV, herpes virus, and Zika virus. They also proved effective in combating herpes infection in a mouse model.

The research team now plans to test the compounds against additional viruses, in hopes of developing them for eventual clinical trials.

“We’re very excited about this work, which allows us to harness the stress response of the host cells to arrive at a means to identify and develop broad-spectrum antivirals,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.

Collins and Maxwell Wilson, an associate professor of molecular biology at the University of California, Santa Barbara and chief scientific officer of Integrated Biosciences, are the senior authors of the new study, which appears in Cell. Felix Wong, a former MIT postdoc and chief executive officer of Integrated Biosciences, is the lead author of the paper. In addition to MIT, UCSB, and Integrated Biosciences, the research team also includes scientists from Illumina Ventures and Princeton University.

Boosting cell defense

In human cells, the integrated stress response pathway is turned on in response to viral infection as well as other types of stress such as starvation. During viral infection, the pathway is triggered by double-stranded RNA, a molecule produced during the replication cycle of viruses. When that RNA is detected, the cell shuts down protein synthesis, which blocks the virus from producing the proteins it needs to replicate.

Compounds that boost this pathway, the researchers believe, could be good candidates for new antiviral drugs that could combat any type of virus.

“Typically, how antivirals are developed is that you develop one antiviral for one specific virus,” Wong says. “In this case, we hypothesized that being able to modulate the host cell stress response might give us a new class of broad-spectrum antivirals — compounds that directly act on the host cells to alter something fundamental about how all viruses replicate.”

To help them identify compounds that would enhance the activity of this pathway during viral infection, the researchers devised a novel optogenetic screen. Optogenetics is a bioengineering technique in which genes encoding light-sensitive proteins are introduced into cells. In this case, the researchers engineered modifications to PKR, a protein that turns on the stress pathway, so that they could activate it with light.

Using this technique, the researchers screened a library of nearly 400,000 commercially available and proprietary chemical compounds. Each of these compounds was applied to human cells as the cells were also exposed to blue light, which simulated viral infection by activating PKR.

By measuring the cells’ survival rates, the researchers could determine which compounds boosted activation of the pathway and amplified the cells’ ability to shut down viral reproduction. This screen yielded about 3,500 compounds with potential antiviral activity, which were evaluated further.
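At this scale, the hit-calling step is essentially a statistics exercise over per-well survival measurements. The sketch below shows one minimal version of that kind of analysis; the column names, plate-control layout, and z-score threshold are hypothetical, not details taken from the study.

```python
import pandas as pd

# Hypothetical per-compound results from a light-activated (PKR-on) survival
# screen: mean survival of treated wells, plus the plate's negative-control
# statistics (e.g., DMSO wells under the same blue-light exposure).
wells = pd.DataFrame({
    "compound": ["C-0001", "C-0002", "C-0003"],
    "survival": [0.82, 0.41, 0.95],    # fraction of cells surviving
    "ctrl_mean": [0.40, 0.40, 0.40],   # plate-control mean survival
    "ctrl_std": [0.05, 0.05, 0.05],    # plate-control standard deviation
})

# Z-score each compound against its plate controls: large positive values
# mean the compound boosted survival well beyond the control distribution.
wells["z"] = (wells["survival"] - wells["ctrl_mean"]) / wells["ctrl_std"]

# Keep compounds that clear a chosen threshold (a hypothetical cutoff here).
hits = wells[wells["z"] > 3.0].sort_values("z", ascending=False)
print(hits[["compound", "z"]])
```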

“If the pathway were turned on in response to viral infection, what our compounds do is they turn it on full blast,” Wong says. “Even in the presence of a small amount of virus, if the pathway is triggered, then the antiviral response is also maximized.”

Fighting infection

The researchers then selected eight of the most promising compounds and screened them for their ability to kill viruses while avoiding harmful effects in human cells. Based on these tests, the researchers chose three top candidates, which they called IBX-200, IBX-202, and IBX-204.

In cells that were infected with either Zika virus, herpes virus, or RSV, treatment with these compounds significantly reduced the amount of virus in the cells. The researchers then tested one of the compounds, IBX-200, in mice infected with herpes virus, and found that it was able to reduce the viral load and improve symptoms.

Experiments showed that these compounds appear to turn on an enzyme that is involved in detecting stress. This activates the stress response pathway and primes the cells to become more responsive to viral infection. When applied to cells that are not already infected, the compounds have no effect.

The researchers now plan to evaluate their lead candidates against a broader range of viruses. They also aim to identify additional compounds that activate the integrated stress response, as well as other cellular stress pathways with the potential to clear viral or bacterial infections.

The research was funded by the Defense Threat Reduction Agency, the National Science Foundation, the U.S. Army Research Office, and Integrated Biosciences.


Simulation-based pipeline tailors training data for dexterous robots

The PhysicsGen system, developed by MIT researchers, helps robots handle items in homes and factories by tailoring training data to a particular machine.


When ChatGPT or Gemini gives what seems to be an expert response to your burning questions, you may not realize how much information the model relies on to produce that reply. Like other popular generative artificial intelligence (AI) models, these chatbots rely on backbone systems called foundation models that train on billions, or even trillions, of data points.

In a similar vein, engineers are hoping to build foundation models that train a range of robots on new skills like picking up, moving, and putting down objects in places like homes and factories. The problem is that it’s difficult to collect and transfer instructional data across robotic systems. You could teach your system by teleoperating the hardware step-by-step using technology like virtual reality (VR), but that can be time-consuming. Training on videos from the internet is less instructive, since the clips don’t provide a step-by-step, specialized task walk-through for particular robots.

A simulation-driven approach called “PhysicsGen” from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Robotics and AI Institute customizes robot training data to help robots find the most efficient movements for a task. The system can multiply a few dozen VR demonstrations into nearly 3,000 simulations per machine. These high-quality instructions are then mapped to the precise configurations of mechanical companions like robotic arms and hands. 

PhysicsGen creates data that generalize to specific robots and conditions via a three-step process. First, a VR headset tracks how humans manipulate objects like blocks using their hands. These interactions are mapped in a 3D physics simulator at the same time, visualizing the key points of our hands as small spheres that mirror our gestures. For example, if you flipped a toy over, you’d see 3D shapes representing different parts of your hands rotating a virtual version of that object.

The pipeline then remaps these points to a 3D model of the setup of a specific machine (like a robotic arm), moving them to the precise “joints” where a system twists and turns. Finally, PhysicsGen uses trajectory optimization — essentially simulating the most efficient motions to complete a task — so the robot knows the best ways to do things like repositioning a box.

Each simulation is a detailed training data point that walks a robot through potential ways to handle objects. When implemented into a policy (or the action plan that the robot follows), the machine has a variety of ways to approach a task, and can try out different motions if one doesn’t work.
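In outline, the three steps map onto a short pipeline like the sketch below. This is a schematic reconstruction from the description above, not the released PhysicsGen code: the `ik_solver` callable, the perturbation count, and the toy quadratic trajectory smoother are stand-ins for the system’s actual inverse kinematics and trajectory optimization.

```python
import numpy as np

def retarget_to_robot(hand_keypoints, ik_solver):
    """Step 2: map tracked human hand keypoints (T x K x 3) to robot joint
    angles, one frame at a time, via a supplied inverse-kinematics solver."""
    return np.stack([ik_solver(frame) for frame in hand_keypoints])

def optimize_trajectory(demo, iters=200, lr=0.1, smooth_weight=1.0):
    """Step 3 (toy stand-in): smooth the joint trajectory while staying
    close to the demonstration, by gradient descent on a quadratic cost:
    cost = ||traj - demo||^2 + smooth_weight * ||velocity||^2."""
    traj = demo.copy()
    for _ in range(iters):
        grad = 2 * (traj - demo)
        vel = np.diff(traj, axis=0)
        grad[:-1] -= 2 * smooth_weight * vel
        grad[1:] += 2 * smooth_weight * vel
        traj -= lr * grad
    return traj

def physicsgen_style_pipeline(vr_demos, ik_solver, perturbations=100):
    """Multiply a handful of VR demos into many robot-specific trajectories
    (the perturbation scheme here is a hypothetical placeholder)."""
    dataset = []
    for demo in vr_demos:                      # Step 1: tracked keypoints
        joints = retarget_to_robot(demo, ik_solver)
        for _ in range(perturbations):         # vary the demo slightly
            noisy = joints + np.random.normal(0, 0.01, joints.shape)
            dataset.append(optimize_trajectory(noisy))
    return dataset
```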

“We’re creating robot-specific data without needing humans to re-record specialized demonstrations for each machine,” says Lujie Yang, an MIT PhD student in electrical engineering and computer science and CSAIL affiliate who is the lead author of a new paper introducing the project. “We’re scaling up the data in an autonomous and efficient way, making task instructions useful to a wider range of machines.”

Generating so many instructional trajectories for robots could eventually help engineers build a massive dataset to guide machines like robotic arms and dexterous hands. For example, the pipeline might help two robotic arms collaborate on picking up warehouse items and placing them in the right boxes for deliveries. The system may also guide two robots to work together in a household on tasks like putting away cups.

PhysicsGen’s potential also extends to converting data designed for older robots or different environments into useful instructions for new machines. “Despite being collected for a specific type of robot, we can revive these prior datasets to make them more generally useful,” adds Yang.

Addition by multiplication

PhysicsGen turned just 24 human demonstrations into thousands of simulated ones, helping both digital and real-world robots reorient objects.

Yang and her colleagues first tested their pipeline in a virtual experiment where a floating robotic hand needed to rotate a block into a target position. The digital robot executed the task with 81 percent accuracy by training on PhysicsGen’s massive dataset, a 60 percent improvement over a baseline that only learned from human demonstrations.

The researchers also found that PhysicsGen could improve how virtual robotic arms collaborate to manipulate objects. Their system created extra training data that helped two pairs of robots successfully accomplish tasks as much as 30 percent more often than a purely human-taught baseline.

In an experiment with a pair of real-world robotic arms, the researchers observed similar improvements as the machines teamed up to flip a large box into its designated position. When the robots deviated from the intended trajectory or mishandled the object, they were able to recover mid-task by referencing alternative trajectories from their library of instructional data.

Senior author Russ Tedrake, who is the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, adds that this imitation-guided data generation technique combines the strengths of human demonstration with the power of robot motion planning algorithms.

“Even a single demonstration from a human can make the motion planning problem much easier,” says Tedrake, who is also a senior vice president of large behavior models at the Toyota Research Institute and CSAIL principal investigator. “In the future, perhaps the foundation models will be able to provide this information, and this type of data generation technique will provide a type of post-training recipe for that model.”

The future of PhysicsGen

Soon, PhysicsGen may be extended to a new frontier: diversifying the tasks a machine can execute.

“We’d like to use PhysicsGen to teach a robot to pour water when it’s only been trained to put away dishes, for example,” says Yang. “Our pipeline doesn’t just generate dynamically feasible motions for familiar tasks; it also has the potential of creating a diverse library of physical interactions that we believe can serve as building blocks for accomplishing entirely new tasks a human hasn’t demonstrated.”

Creating lots of widely applicable training data may eventually help build a foundation model for robots, though MIT researchers caution that this is a somewhat distant goal. The CSAIL-led team is investigating how PhysicsGen can harness vast, unstructured resources — like internet videos — as seeds for simulation. The goal: transform everyday visual content into rich, robot-ready data that could teach machines to perform tasks no one explicitly showed them.

Yang and her colleagues also aim to make PhysicsGen even more useful for robots with diverse shapes and configurations in the future. To make that happen, they plan to leverage datasets with demonstrations of real robots, capturing how robotic joints move instead of human ones.

The researchers also plan to incorporate reinforcement learning, where an AI system learns by trial and error, to make PhysicsGen expand its dataset beyond human-provided examples. They may augment their pipeline with advanced perception techniques to help a robot perceive and interpret its environment visually, allowing the machine to analyze and adapt to the complexities of the physical world.

For now, PhysicsGen shows how AI can help us teach different robots to manipulate objects within the same category, particularly rigid ones. The pipeline may soon help robots find the best ways to handle soft items (like fruits) and deformable ones (like clay), but those interactions aren’t easy to simulate yet.

Yang and Tedrake wrote the paper with two CSAIL colleagues: co-lead author and MIT PhD student Hyung Ju “Terry” Suh SM ’22 and MIT PhD student Bernhard Paus Græsdal. Robotics and AI Institute researchers Tong Zhao ’22, MEng ’23, Tarik Kelestemur, Jiuguang Wang, and Tao Pang PhD ’23 are also authors. Their work was supported by the Robotics and AI Institute and Amazon.

The researchers recently presented their work at the Robotics: Science and Systems conference.


New AI system uncovers hidden cell subtypes, boosts precision medicine

CellLENS reveals hidden patterns in cell behavior within tissues, offering deeper insights into cell heterogeneity — vital for advancing cancer immunotherapy.


In order to produce effective targeted therapies for cancer, scientists need to isolate the genetic and phenotypic characteristics of cancer cells, both within and across different tumors, because those differences impact how tumors respond to treatment.

Part of this work requires a deep understanding of the RNA or protein molecules each cancer cell expresses, where it is located in the tumor, and what it looks like under a microscope.

Traditionally, scientists have looked at one or more of these aspects separately. Now, a new deep learning AI tool, CellLENS (Cell Local Environment and Neighborhood Scan), fuses all three domains, using a combination of convolutional neural networks and graph neural networks to build a comprehensive digital profile for every single cell. This allows the system to group cells with similar biology — effectively separating even those that appear very similar in isolation but behave differently depending on their surroundings.
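In schematic terms, fusing those three views of a cell — its expression profile, its morphology, and its spatial neighborhood — might look like the sketch below. This is an illustration of the general idea, not the published CellLENS architecture; the dimensions, layer choices, and the simple one-hop message-passing step are all assumptions.

```python
import torch
import torch.nn as nn

class CellFusionSketch(nn.Module):
    """Toy three-view cell encoder: image patch + expression + neighborhood."""
    def __init__(self, n_genes=100, emb=32):
        super().__init__()
        # CNN branch: a small encoder for the cell's microscopy patch.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, emb),
        )
        # MLP branch: the cell's RNA/protein expression profile.
        self.expr = nn.Sequential(nn.Linear(n_genes, emb), nn.ReLU())
        # Graph branch: one round of message passing over the spatial
        # adjacency (degree-normalized mean of neighbors' embeddings).
        self.gnn = nn.Linear(emb, emb)
        self.head = nn.Linear(3 * emb, emb)    # fused per-cell profile

    def forward(self, images, expression, adjacency):
        h_img = self.cnn(images)                       # (N, emb)
        h_expr = self.expr(expression)                 # (N, emb)
        deg = adjacency.sum(1, keepdim=True).clamp(min=1)
        h_nbr = self.gnn(adjacency @ h_expr / deg)     # (N, emb) context
        return self.head(torch.cat([h_img, h_expr, h_nbr], dim=1))

# Usage on dummy data: 50 cells, 16x16 patches, 100 genes, random adjacency.
net = CellFusionSketch()
cells = net(torch.randn(50, 1, 16, 16), torch.randn(50, 100),
            (torch.rand(50, 50) > 0.9).float())
print(cells.shape)  # torch.Size([50, 32])
```

Cells with similar fused embeddings can then be clustered, which is how a system of this kind can separate cells that look identical in one view but differ in another.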

The study, published recently in Nature Immunology, details the results of a collaboration between researchers from MIT, Harvard Medical School, Yale University, Stanford University, and University of Pennsylvania — an effort led by Bokai Zhu, an MIT postdoc and member of the Broad Institute of MIT and Harvard and the Ragon Institute of MGH, MIT, and Harvard.

Zhu explains the impact of this new tool: “Initially we would say, oh, I found a cell. This is called a T cell. Using the same dataset, by applying CellLENS, now I can say this is a T cell, and it is currently attacking a specific tumor boundary in a patient.

“I can use existing information to better define what a cell is, what is the subpopulation of that cell, what that cell is doing, and what is the potential functional readout of that cell. This method may be used to identify a new biomarker, which provides specific and detailed information about diseased cells, allowing for more targeted therapy development.”

This is a significant advance because current methodologies often miss critical molecular or contextual information — for example, immunotherapies may target cells that only exist at the boundary of a tumor, limiting efficacy. By using deep learning, the researchers can detect many different layers of information with CellLENS, including morphology and where the cell is spatially in a tissue.

When applied to samples from healthy tissue and several types of cancer, including lymphoma and liver cancer, CellLENS uncovered rare immune cell subtypes and revealed how their activity and location relate to disease processes — such as tumor infiltration or immune suppression.

These discoveries could help scientists better understand how the immune system interacts with tumors and pave the way for more precise cancer diagnostics and immunotherapies.

“I’m extremely excited by the potential of new AI tools, like CellLENS, to help us more holistically understand aberrant cellular behaviors within tissues,” says co-author Alex K. Shalek, the director of the Institute for Medical Engineering and Science (IMES), the J. W. Kieckhefer Professor in IMES and Chemistry, and an extramural member of the Koch Institute for Integrative Cancer Research at MIT, as well as an Institute member of the Broad Institute and a member of the Ragon Institute. “We can now measure a tremendous amount of information about individual cells and their tissue contexts with cutting-edge, multi-omic assays. Effectively leveraging that data to nominate new therapeutic leads is a critical step in developing improved interventions. When coupled with the right input data and careful downstream validations, such tools promise to accelerate our ability to positively impact human health and wellness.”


Study shows a link between obesity and what’s on local restaurant menus

MIT researchers analyzed the nutritional content of millions of menu items across Boston, London, and Dubai.


For many years, health experts have been concerned about “food deserts,” places where residents lack good nutritional options. Now, an MIT-led study of three major global cities uses a new, granular method to examine the issue, and concludes that having fewer and less-nutritious eating options nearby correlates with obesity and other health outcomes.

Rather than just mapping geographic areas, the researchers examined the dietary value of millions of food items on roughly 30,000 restaurant menus and derived a more precise assessment of the connection between neighborhoods and nutrition.

“We show that what is sold in a restaurant has a direct correlation to people’s health,” says MIT researcher Fabio Duarte, co-author of a newly published paper outlining the study’s results. “The food landscape matters.”

The open-access paper, “Data-driven nutritional assessment of urban food landscapes: insights from Boston, London, Dubai,” was published this week in Nature Scientific Reports.

The co-authors are Michael Tufano, a PhD student at Wageningen University, in the Netherlands; Duarte, associate director of MIT’s Senseable City Lab, which uses data to study cities as dynamic systems; Martina Mazzarello, a postdoc at the Senseable City Lab; Javad Eshtiyagh, a research fellow at the Senseable City Lab; Carlo Ratti, professor of the practice and director of the Senseable City Lab; and Guido Camps, a senior researcher at Wageningen University.

Scanning the menu

To conduct the study, the researchers examined menus from Boston, Dubai, and London in the summer of 2023, compiling a database of millions of items available through popular food-delivery platforms. The team then evaluated the food items against the USDA’s FoodData Central database, an information bank with 375,000 kinds of food products listed. The study deployed two main metrics: the Meal Balance Index and the Nutrient-Rich Foods Index.

The researchers examined about 222,000 menu items from over 2,000 restaurants in Boston, about 1.6 million menu items from roughly 9,000 restaurants in Dubai, and about 3.1 million menu items from about 18,000 restaurants in London. In Boston, about 71 percent of the items were in the USDA database; in Dubai and London, that figure was 42 percent and 56 percent, respectively.

The team then rated the nutritional value of the items appearing on menus and correlated the food data with health-outcome data from Boston and London. In London, they found a clear correlation between neighborhood menu offerings and obesity, or the lack thereof, with a slightly less firm correlation in Boston. Areas with food options that include a lot of dietary fiber, sometimes along with fruits and vegetables, tend to have better health data.

In Dubai, the researchers did not have the same types of health data available but did observe a strong correlation between rental prices and the nutritional value of neighborhood-level food, suggesting that wealthier residents have better nourishment options.

“At the item level, when we have less nutritional food, we see more cases of obesity,” Tufano says. “It’s true that not only do we have more fast food in poor neighborhoods, but the nutritional value is not the same.”
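A minimal sketch of this kind of neighborhood-level analysis is shown below, assuming hypothetical input files and column names, and using a rank correlation as a stand-in for the paper’s actual statistical methods.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical inputs: one row per menu item with a nutritional score
# (e.g., a Nutrient-Rich Foods-style index), and one row per neighborhood
# with an obesity rate from public health data.
items = pd.read_csv("menu_items.csv")            # neighborhood, nutrition_score
health = pd.read_csv("neighborhood_health.csv")  # neighborhood, obesity_rate

# Aggregate item-level scores up to the neighborhood food landscape.
landscape = (items.groupby("neighborhood")["nutrition_score"]
                  .mean()
                  .rename("mean_nutrition")
                  .reset_index())

merged = landscape.merge(health, on="neighborhood")

# Rank correlation between menu nutrition and obesity across neighborhoods;
# the study reports a negative association (better menus, less obesity).
rho, p = spearmanr(merged["mean_nutrition"], merged["obesity_rate"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```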

Re-mapping the food landscape

By conducting the study in this fashion, the scholars added a layer of analysis to past studies of food deserts. While past work broke ground by identifying neighborhoods and areas lacking good food access, this research makes a more comprehensive assessment of what people actually consume, moving toward an evaluation of the complex mix of food available in any given area — including areas with relatively limited options.

“We were not satisfied with this idea that if you only have fast food, it’s a food desert, but if you have a Whole Foods, it’s not,” Duarte says. “It’s not necessarily like that.”

For the Senseable City Lab researchers, the study is a new technique further enabling them to understand city dynamics and the effects of the urban environment on health. Past lab studies have often focused on issues such as urban mobility, while extending to matters such as air pollution, among other topics.

Being able to study food and health at the neighborhood level, though, is yet another example of the way that data-rich spheres of life can be examined in close detail.

“When we started working on cities and data, the data resolution was so low,” Ratti says. “Today the amount of data is so immense we see this great opportunity to look at cities and see the influence of the urban environment as a big determinant of health. We see this as one of the new frontiers of our lab. It’s amazing how we can now look at this very precisely in cities.”


Gift from Dick Larson establishes Distinguished Professorship in Data, Systems, and Society

Sasha Rakhlin, a professor in IDSS and brain and cognitive sciences, has been named the inaugural holder of the new professorship.


The MIT Institute for Data, Systems, and Society (IDSS) announced the creation of a new endowed chair made possible by the generosity of IDSS professor post-tenure and “MIT lifer” Richard “Dick” Larson. Effective July 1, the fund provides a full professorship for senior IDSS faculty: the Distinguished Professorship in Data, Systems, and Society.

“As a faculty member, MIT has not only accepted but embraced my several mid-career changes of direction,” says Larson. “I have called five different academic departments my home, starting with Electrical Engineering (that is what it was called in the 1960s) and now finalized with the interdepartmental, interdisciplinary IDSS — Institute for Data, Systems and Society. Those beautiful three words — data, systems, society — they represent my energy and commitment over the second half of my career. My gifted chair is an effort to keep alive those three words, with others following me doing research, teaching and mentoring centered around data, systems, society.”

Larson’s career has focused his operations research and systems expertise on a wide variety of problems, in both public and private sectors. His contributions span the fields of urban service systems (especially emergency response systems), disaster planning, pandemics, queueing, logistics, technology-enabled education, smart-energy houses, and workforce planning. His latest book, “Model Thinking for Everyday Life,” draws on decades of experience as a champion of STEM education at MIT and beyond, such as his leadership of MIT BLOSSOMS.

“Dick Larson has been making an impact at MIT for over half a century,” says IDSS Director Fotini Christia, the Ford International Professor in Political Science. “This gift extends his already considerable legacy and ensures his impact will continue to be felt for many years to come.”

Christia is pleased that IDSS and brain and cognitive sciences professor Alexander “Sasha” Rakhlin is the inaugural holder of the new professorship. The selection recognizes Rakhlin’s distinguished scholarly record, dedicated service to IDSS, excellence in teaching, and contributions to research in statistics and computation.

“Sasha’s analysis of neural network complexity, and his work developing tools for online prediction, are perfect examples of research which builds bridges across disciplines, and also connects different departments and units at MIT,” says Michale Fee, the Glen V. and Phyllis F. Dorflinger Professor of Neuroscience, and head of the Department of Brain and Cognitive Sciences. “It’s wonderful to see Sasha’s contributions recognized in this way, and I’m grateful to Dick Larson for supporting this vision.”

Rakhlin’s research is in machine learning, with an emphasis on statistics and computation. He is interested in formalizing the process of learning, in analyzing learning models, and in deriving and implementing emerging learning methods. A significant thrust of his research is in developing theoretical and algorithmic tools for online prediction, a learning framework where data arrives in a sequential fashion.
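To give a flavor of the online prediction setting — data arriving one example at a time, with the learner committing to a prediction before each label is revealed — here is a textbook online gradient descent sketch on a streaming squared loss. It illustrates the general framework, not Rakhlin’s own methods; the learning rate and synthetic stream are invented for the example.

```python
import numpy as np

def online_gradient_descent(stream, dim, lr=0.05):
    """Predict each (x, y) pair as it arrives, then update. The online
    framework charges the learner for every prediction it makes along
    the way, before the label is revealed."""
    w = np.zeros(dim)
    total_loss = 0.0
    for x, y in stream:
        y_hat = w @ x                     # predict first...
        total_loss += (y_hat - y) ** 2    # ...then pay the loss...
        w -= lr * 2 * (y_hat - y) * x     # ...then take a gradient step
    return w, total_loss

# Usage: a synthetic stream generated by a fixed linear target plus noise.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
stream = [(x, w_true @ x + 0.1 * rng.normal())
          for x in rng.normal(size=(1000, 5))]
w, loss = online_gradient_descent(stream, dim=5)
print(loss / 1000)  # average per-round loss falls as w approaches w_true
```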

“I am honored to be the inaugural holder of the Distinguished Professorship in Data, Systems, and Society,” says Rakhlin. “Professor Larson’s commitment to education and service to MIT both serve as models to follow.”


Walk-through screening system enhances security at airports nationwide

Lincoln Laboratory's 3D microwave imaging technology for detecting concealed threats was integrated into HEXWAVE, commercially developed by Liberty Defense.


A new security screener that people can simply walk past may soon be coming to an airport near you. Last year, U.S. airports nationwide began adopting HEXWAVE — a commercialized walk-through security screening system based on microwave imaging technology developed at MIT Lincoln Laboratory — to satisfy a new Transportation Security Administration (TSA) mandate for enhanced employee screening to detect metallic and nonmetallic threats. The TSA is now in the process of evaluating HEXWAVE as a potential replacement for the metal detectors used to screen PreCheck passengers.

Typically, when you arrive at an airport security checkpoint line, you place your carry-on items on the conveyor belt, remove your shoes and any metallic items, and enter a body scanner. As you hold still for a few seconds with your feet spread apart and your arms extended over your head, the scanner creates a generic, featureless 3D body outline revealing any metallic or nonmetallic concealed weapons or other prohibited items.

Requiring individuals to stop, remove clothing and belongings, and pose for scans impedes traffic flow in airports and other highly populated venues, such as stadiums, shopping malls, mass transit stations, and schools. To enable more efficient screening of unstructured crowds and ensure public safety, the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) sponsored Lincoln Laboratory to prototype a high-resolution imaging system capable of scanning people and their belongings as they walk by. This R&D effort was conducted as part of S&T's Surface Transportation Explosive Threat Detection Program, which aims to provide the surface-transportation end-user community (e.g., mass transit) with a layered and integrated capability to detect threat items at the speed of the traveling public.

The laboratory's prototype microwave imager, which consists of a set of antennas installed on flat panels, operates under the same fundamental principle as existing body scanners: low-energy radio waves (less powerful than those transmitted by a cellphone) are transmitted from antennas toward a person's body and reflect off skin and any hidden objects; the reflected waves return to the antennas and are processed by a computer to create an image, which security personnel then review to identify any potential concealed threats.
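Image reconstruction in systems of this general kind is often described as a delay-and-sum (backprojection) computation: each voxel in the imaging volume accumulates the echo energy that arrives at exactly its round-trip delay. The sketch below is a generic, simplified monostatic version for intuition only; Lincoln Laboratory's actual algorithms are far faster and are not public in this form.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def delay_and_sum(echoes, t, antennas, voxels):
    """Generic delay-and-sum backprojection.

    echoes:   (A, T) time-domain signal recorded at each antenna
    t:        (T,) sample times
    antennas: (A, 3) antenna positions
    voxels:   (V, 3) points in the imaging volume
    For each voxel, sum each antenna's echo sampled at the round-trip
    delay from that antenna to the voxel and back.
    """
    image = np.zeros(len(voxels))
    for a, pos in enumerate(antennas):
        dist = np.linalg.norm(voxels - pos, axis=1)   # (V,) one-way range
        delay = 2 * dist / C                          # round-trip delay
        image += np.interp(delay, t, echoes[a], left=0, right=0)
    return np.abs(image)

# Usage: one antenna, one point target 1 m away, a short Gaussian echo pulse.
t = np.linspace(0, 20e-9, 400)
echo = np.exp(-((t - 2 * 1.0 / C) / 0.5e-9) ** 2)[None, :]
voxels = np.array([[0.0, 0.0, z] for z in np.linspace(0.5, 1.5, 5)])
img = delay_and_sum(echo, t, antennas=np.array([[0.0, 0.0, 0.0]]), voxels=voxels)
print(img.argmax())  # brightest voxel is the one at ~1.0 m (index 2)
```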

The novelty of the laboratory's invention lies in its ability to discreetly handle a constant stream of subjects in motion, measuring each subject very quickly (within tens of milliseconds) and reconstructing 3D microwave images of each subject at a video rate. To meet these challenging requirements, the laboratory team developed a cost-effective antenna array and efficient image-reconstruction algorithms. Compared to existing systems, the laboratory's 3D microwave imager runs 100 times faster using the same computing hardware. In 2017, the team demonstrated the prototype's ability to detect various simulated threat items at varying distances on a rail platform at the Massachusetts Bay Transit Authority (MBTA) Emergency Training Center in Boston.

"The goal of our work is to provide security staff with more effective tools to protect public spaces. To that end, microwave imaging technology can quickly and unobtrusively provide visibility of items carried into a venue," says William Moulder, who led the technology's development at Lincoln Laboratory.

In 2018, the security company Liberty Defense licensed the imaging technology and entered into a cooperative research and development agreement (CRADA) with Lincoln Laboratory. Transitioning technology to industry for commercialization is part of the laboratory's role as a federally funded research and development center, and CRADAs provide a mechanism for such transition to happen. Through the CRADA, Liberty Defense maintained Lincoln Laboratory's core image-reconstruction intellectual property and made the technology enhancements required for commercialization, including an entirely new hardware architecture, radio frequency (RF) antenna modules, and a transceiver system that meets Federal Communications Commission waveform and RF performance requirements for indoor and outdoor operation. The co-organizational team facilitating the transition of the technology was recognized by the Federal Laboratory Consortium for Technology Transfer with a 2019 Excellence in Technology Transfer Award for the Northeast region.

By 2021, Liberty Defense had prototyped a walk-through security screening system, HEXWAVE. That same year, through the TSA's On-Person Screening Capability Program, Liberty Defense received a contract award to demonstrate HEXWAVE's enhanced threat-detection and high-throughput capabilities for screening aviation workers. Following successful testing of HEXWAVE at sports complexes, entertainment arenas, and shopping centers, both nationally and internationally, Liberty Defense began offering the product for sale.

"HEXWAVE is a great example of how federally funded R&D can be successfully transitioned to industry to meet real-world security needs," says Asha Rajagopal, the laboratory's chief technology transfer officer. "By working with Liberty Defense, we helped accelerate the delivery of a critical capability into the hands of those protecting public spaces."

In 2023, TSA began testing HEXWAVE as a potential replacement for the metal detectors used to screen passengers in TSA PreCheck lanes. Airports across the United States started deploying HEXWAVE in 2024 to meet the TSA's employee screening mandate by the April 2026 deadline. Liberty Defense notes various other markets for HEXWAVE; the first units for commercial applications were delivered to Los Alamos National Laboratory in 2023, and the technology has since been deployed at other national labs, correctional facilities, government buildings, and courthouses.

"Liberty was extremely fortunate to license the technology from MIT Lincoln Laboratory," says Bill Frain, CEO of Liberty Defense. "From the outset, they've been a true partner — bringing not only deep innovation and technical expertise, but also a clear vision for commercial deployment. Together, we've successfully brought next-generation technology to market to help protect people in public spaces."


Designing across cultural and geographic divides

Through a collaboration between the MIT first-year learning community Terrascope, Diné College, and University of Puerto Rico, students learn fundamental design principles — and much more.


In addition to the typical rigors of MIT classes, Terrascope Subject 2.00C/1.016/EC.746 (Design for Complex Environmental Issues) poses some unusual hurdles for students to navigate: collaborating across time zones, bridging different cultural and institutional experiences, and trying to do hands-on work over Zoom. That’s because the class includes students from not only MIT, but also Diné College in Tsaile, Arizona, within the Navajo Nation, and the University of Puerto Rico-Ponce (UPRP).

Despite being thousands of miles apart, students work in teams to tackle a real-world problem for a client, based on the Terrascope theme for the year. “Understanding how to collaborate over long distances with people who are not like themselves will be an important item in many of these students’ toolbelts going forward, in some cases just as much as — or more than — any particular design technique,” says Ari Epstein, Terrascope associate director and senior lecturer. Over the past several years, Epstein has taught the class along with Joel Grimm of MIT Beaver Works and Libby Hsu of MIT D-Lab, as well as instructors from the two collaborating institutions. Undergraduate teaching fellows from all three schools are also key members of the instructional staff.

Since the partnership began three years ago (initially with Diné College, with the addition of UPRP two years ago), the class themes have included food security and sustainable agriculture in Navajo Nation; access to reliable electrical power in Puerto Rico; and this year, increasing museum visitors’ engagement with artworks depicting mining and landscape alteration in Nevada.

Each team — which includes students from all three colleges — meets with clients online early in the term to understand their needs; then, through an iterative process, teams work on designing prototypes. During MIT’s spring break, teams travel to meet with the clients onsite to get feedback and continue to refine their prototypes. At the end of the term, students present their final products to the clients, an expert panel, and their communities at a hybrid showcase event held simultaneously on all three campuses.

Free-range design engineering

“I really loved the class,” says Graciela Leon, a second-year mechanical engineering major who took the subject in 2024. “It was not at all what I was expecting,” she adds. While the learning objectives on the syllabus are fairly traditional — using an iterative engineering design process, developing teamwork skills, and deepening communication skills, to name a few — the approach is not. “Terrascope is just kind of like throwing you into a real-world problem … it feels a lot more like you are being trusted with this actual challenge,” Leon says.

The 2024 challenge was to find a way to help the clients, Puerto Rican senior citizens, turn on gasoline-powered generators when the electrical power grid fails; some of them struggle with the pull cords necessary to start the generators. The students were tasked with designing solutions to make starting the generators easier.

Terrascope instructors teach fundamental skills such as iterative design spirals and scrum workflow frameworks, but they also give students ample freedom to follow their ideas. Leon admits she was a bit frustrated at first, because she wasn’t sure what she was supposed to be doing. “I wanted to be building things and thought, ‘Wow, I have to do all these other things, I have to write some kind of client profile and understand my client’s needs.’ I was just like, ‘Hand me a drill! I want to design something!’”

When he took the class last year, Uziel Rodriguez-Andujar was also thrown off initially by the independence teams had. Now a second-year UPRP student in mechanical engineering, he’s accustomed to lecture-based classes. “What I found so interesting is the way [they] teach the class, which is, ‘You make your own project, and we need you to find a solution to this. How it will look, and when you have it — that’s up to you,’” he says.

Clearing hurdles

Teaching the course on three different campuses introduces a number of challenges for students and instructors — among them, operating in three different time zones, bridging language barriers, navigating different cultural and institutional norms, communicating effectively, and designing and building prototypes over Zoom.

“The culture span is huge,” explains Epstein. “There are different ways of speaking, different ways of listening, and each organization has different resources.”

First-year MIT student EJ Dominguez found that one of the biggest obstacles was trying to convey ideas to teammates clearly. He took the class this year, when the theme revolved around the environmental impacts of lithium mining. The client, the Nevada Museum of Art, wanted to find ways to engage visitors with its artwork collection related to mining-related landscape changes.

Dominguez and his team designed a pendulum with a light affixed to it that illuminates a painting by a Native American artist. When the pendulum swings, it changes how the visitor experiences the artwork. The team built parts for the pendulum on different campuses, and they reached a point where they realized their pieces were incompatible. “We had different visions of what we wanted for the project, and different vocabulary we were using to describe our ideas. Sometimes there would be a misunderstanding … It required a lot of honesty from each campus to be like, ‘OK, I thought we were doing exactly this,’ and obviously in a really respectful way.”

It’s not uncommon for students at Diné College and UPRP to experience an initial hurdle that their MIT peers do not. Epstein notes, “There’s a tendency for some folks outside MIT to see MIT students as these brilliant people that they don’t belong in the same room with.” But the other students soon realize not only that they can hold their own intellectually, but also that their backgrounds and experiences are incredibly valuable. “Their life experiences actually put them way ahead of many MIT students in some ways, when you think about design and fabrication, like repairing farm equipment or rebuilding transmissions,” he adds.

That’s how Cauy Bia felt when he took the class in 2024. Currently a first-year graduate student in biology at Diné College, Bia questioned whether he’d be on par with the MIT students. “I’ve grown up on a farm, and we do a lot of building, a lot of calculations, a lot of hands-on stuff. But going into this, I was sweating it so hard [wondering], ‘Am I smart enough to work with these students?’ And then, at the end of the day, that was never an issue,” he says.

The value of reflection

Every two weeks, Terrascope students write personal reflections about their experiences in the class, which helps them appreciate their academic and personal development. “I really felt that I had undergone a process that made me grow as an engineer,” says Leon. “I understood the importance of people and engineering more, including teamwork, working with clients, and de-centering the project away from what I wanted to build and design.”

When Bia began the semester, he says, he was more of a “make-or-break-type person” and tended to see things in black and white. “But working with all three campuses, it kind of opened up my thought process so I can assess more ideas, more voices and opinions. And I can get broader perspectives and get bigger ideas from that point,” he says. It was also a powerful experience culturally for him, particularly “drawing parallels between Navajo history, Navajo culture, and seeing the similarities between that and Puerto Rican culture, seeing how close we are as two nations.”

Rodriguez-Andujar gained an appreciation for the “constant struggle between simplicity and complexity” in engineering. “You have all these engineers trying to over-engineer everything,” he says. “And after you get your client feedback [halfway through the semester], it turns out, ‘Oh, that doesn’t work for me. I’m sorry — you have to scale it down like a hundred times and make it a lot simpler.’”

For instructors, the students’ reflections are invaluable as they strive to make improvements every year. In many ways, you might say the class is an iterative design spiral, too. “The past three years have themselves been prototypes,” Epstein says, “and all of the instructional staff are looking forward to continuing these exciting partnerships.”


A bionic knee integrated into tissue can restore natural movement

In a small clinical study, users of this prosthesis navigated more easily and said the limb felt more like part of their body.


MIT researchers have developed a new bionic knee that can help people with above-the-knee amputations walk faster, climb stairs, and avoid obstacles more easily than they could with a traditional prosthesis.

Unlike prostheses in which the residual limb sits within a socket, the new system is directly integrated with the user’s muscle and bone tissue. This enables greater stability and gives the user much more control over the movement of the prosthesis.

Participants in a small clinical study also reported that the limb felt more like a part of their own body, compared to people who had more traditional above-the-knee amputations.

“A prosthesis that’s tissue-integrated — anchored to the bone and directly controlled by the nervous system — is not merely a lifeless, separate device, but rather a system that is carefully integrated into human physiology, offering a greater level of prosthetic embodiment. It’s not simply a tool that the human employs, but rather an integral part of self,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Tony Shu PhD ’24 is the lead author of the paper, which appears today in Science.

Better control

Over the past several years, Herr’s lab has been working on new prostheses that can extract neural information from muscles left behind after an amputation and use that information to help guide a prosthetic limb.

During a traditional amputation, pairs of muscles that take turns stretching and contracting are usually severed, disrupting the normal agonist-antagonist relationship of the muscles. This disruption makes it very difficult for the nervous system to sense the position of a muscle and how fast it’s contracting.

Using the new surgical approach developed by Herr and his colleagues, known as agonist-antagonist myoneural interface (AMI), muscle pairs are reconnected during surgery so that they still dynamically communicate with each other within the residual limb. This sensory feedback helps the wearer of the prosthesis decide how to move the limb, and also generates electrical signals that can be used to control the prosthetic limb.

In a 2024 study, the researchers showed that people with amputations below the knee who received the AMI surgery were able to walk faster and navigate around obstacles much more naturally than people with traditional below-the-knee amputations.

In the new study, the researchers extended the approach to better serve people with amputations above the knee. They wanted to create a system that could not only read out signals from the muscles using AMI but also be integrated into the bone, offering more stability and better sensory feedback.

To achieve that, the researchers developed a procedure to insert a titanium rod into the residual femur bone at the amputation site. This implant allows for better mechanical control and load bearing than a traditional prosthesis. Additionally, the implant contains 16 wires that collect information from electrodes located on the AMI muscles inside the body, which enables more accurate transduction of the signals coming from the muscles.

This bone-integrated system, known as e-OPRA, transmits AMI signals to a new robotic controller developed specifically for this study. The controller uses this information to calculate the torque necessary to move the prosthesis the way that the user wants it to move.

“All parts work together to better get information into and out of the body and better interface mechanically with the device,” Shu says. “We’re directly loading the skeleton, which is the part of the body that’s supposed to be loaded, as opposed to using sockets, which is uncomfortable and can lead to frequent skin infections.”
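Conceptually, the controller closes a loop from muscle signal to joint torque. The sketch below shows one common pattern — an impedance (spring-damper) law driven by a muscle-signal-derived setpoint — as a generic illustration with assumed gains and mappings, not the study’s actual controller.

```python
import numpy as np

def emg_to_setpoint(agonist, antagonist, angle_range=(0.0, 1.6)):
    """Map the balance of agonist/antagonist activation (0..1 each) to a
    desired knee angle in radians: more net agonist drive, more flexion.
    (The linear mapping and angle range are hypothetical.)"""
    lo, hi = angle_range
    drive = np.clip(agonist - antagonist, -1.0, 1.0)   # net muscle command
    return lo + (hi - lo) * (drive + 1.0) / 2.0

def impedance_torque(theta_des, theta, theta_dot, k=60.0, b=4.0):
    """Spring-damper law: torque pulls the joint toward the desired angle
    with stiffness k (N*m/rad) and damping b (N*m*s/rad), both assumed."""
    return k * (theta_des - theta) - b * theta_dot

# One control tick (all values hypothetical): muscle signals in, torque out.
theta_des = emg_to_setpoint(agonist=0.7, antagonist=0.2)
tau = impedance_torque(theta_des, theta=0.5, theta_dot=0.1)
print(f"desired angle {theta_des:.2f} rad, commanded torque {tau:.1f} N*m")
```

In a real limb, a loop like this runs hundreds of times per second, with the muscle signals arriving through the implant’s wired electrodes rather than surface sensors.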

In this study, two subjects received the combined AMI and e-OPRA system, known as an osseointegrated mechanoneural prosthesis (OMP). These users were compared with eight who had the AMI surgery but not the e-OPRA implant, and seven users who had neither AMI nor e-OPRA. All subjects took a turn at using an experimental powered knee prosthesis developed by the lab.

The researchers measured the participants’ ability to perform several types of tasks, including bending the knee to a specified angle, climbing stairs, and stepping over obstacles. In most of these tasks, users with the OMP system performed better than the subjects who had the AMI surgery but not the e-OPRA implant, and much better than users of traditional prostheses.

“This paper represents the fulfillment of a vision that the scientific community has had for a long time — the implementation and demonstration of a fully physiologically integrated, volitionally controlled robotic leg,” says Michael Goldfarb, a professor of mechanical engineering and director of the Center for Intelligent Mechatronics at Vanderbilt University, who was not involved in the research. “This is really difficult work, and the authors deserve tremendous credit for their efforts in realizing such a challenging goal.”

A sense of embodiment

In addition to testing gait and other movements, the researchers also asked questions designed to evaluate participants’ sense of embodiment — that is, to what extent their prosthetic limb felt like a part of their own body.

Questions included whether the patients felt as if they had two legs, if they felt as if the prosthesis was part of their body, and if they felt in control of the prosthesis. Each question was designed to evaluate the participants’ feelings of agency, ownership of the device, and body representation.

The researchers found that as the study went on, the two participants with the OMP showed much greater increases in their feelings of agency and ownership than the other subjects.

“Another reason this paper is significant is that it looks into these embodiment questions and it shows large improvements in that sensation of embodiment,” Herr says. “No matter how sophisticated you make the AI systems of a robotic prosthesis, it’s still going to feel like a tool to the user, like an external device. But with this tissue-integrated approach, when you ask the human user what is their body, the more it’s integrated, the more they’re going to say the prosthesis is actually part of self.”

The AMI procedure is now done routinely on patients with below-the-knee amputations at Brigham and Women’s Hospital, and Herr expects it will soon become the standard for above-the-knee amputations as well. The combined OMP system will need larger clinical trials to receive FDA approval for commercial use, which Herr expects may take about five years.

The research was funded by the Yang Tan Collective and DARPA.


Supporting mission-driven space innovation, for Earth and beyond

Aurelia Institute, founded by a team from MIT, serves as a research lab, an education and outreach center, and a policy hub for the space industry.


As spaceflight becomes more affordable and accessible, the story of human life in space is just beginning. Aurelia Institute wants to make sure that future benefits all of humanity — whether in space or here on Earth.

Founded by Ariel Ekblaw SM ’17, PhD ’20; Danielle DeLatte ’11; and former MIT research scientist Sana Sharma, the nonprofit institute serves as a research lab for space technology and architecture, a center for education and outreach, and a policy hub dedicated to inspiring more people to work in the space industry.

At the heart of the Aurelia Institute’s mission is a commitment to making space accessible to all people. A big part of that work involves annual microgravity flights that Ekblaw says are equal parts research mission, workforce training, and inspiration for the next generation of space enthusiasts.

“We’ve done that every year,” Ekblaw says of the flights. “We now have multiple cohorts of students that connect across years. It brings together people from very different backgrounds. We’ve had artists, designers, architects, ethicists, teachers, and others fly with us. In our R&D, we are interested in space infrastructure for the public good. That’s why we’re directing our technology portfolios toward near-term, massive infrastructure projects in low-Earth orbit that benefit life on Earth.”

From the annual flights to the Institute’s self-assembling space architecture technology known as TESSERAE, much of Aurelia’s work is an extension of projects Ekblaw started as a graduate student at MIT.

“My life trajectory changed when I came to MIT,” says Ekblaw, who is still a visiting researcher at MIT. “I am incredibly grateful for the education I got in the Media Lab and the Department of Aeronautics and Astronautics. MIT is what gave me the skill, the technology, and the community to be able to spin out Aurelia and do something important in the space industry at scale.”

“MIT changes lives”

Ekblaw has always been passionate about space. As an undergraduate at Yale University, she took part in a NASA microgravity flight as part of a research project. In the first year of her PhD program at MIT, she led the launch of the Space Exploration Initiative, a cross-Institute effort to drive innovation at the frontiers of space exploration. The ongoing initiative started as a research group but soon raised enough money to conduct microgravity flights and, more recently, missions to the International Space Station and the moon.

“The Media Lab was like magic in the years I was there,” Ekblaw says. “It had this sense of what we used to call ‘anti-disciplinary permission-lessness.’ You could get funding to explore really different and provocative ideas. Our mission was to democratize access to space.”

In 2016, while taking a class taught by Neri Oxman, then a professor in the Media Lab, Ekblaw got the idea for the TESSERAE Project, in which tiles autonomously self-assemble into spherical space structures.

“I was thinking about the future of human flight, and the class was a seeding moment for me,” Ekblaw says. “I realized self-assembly works OK on Earth, it works particularly well at small scales like in biology, but it generally struggles with the force of gravity once you get to larger objects. But microgravity in space was a perfect application for self-assembly.”

That semester, Ekblaw was also taking Professor Neil Gershenfeld’s class MAS.863 (How to Make (Almost) Anything), where she began building prototypes. Over the ensuing years of her PhD, subsequent versions of the TESSERAE system were tested on microgravity flights run by the Space Exploration Initiative, in a suborbital mission with the space company Blue Origin, and as part of a 30-day mission aboard the International Space Station.

“MIT changes lives,” Ekblaw says. “It completely changed my life by giving me access to real spaceflight opportunities. The capstone data for my PhD was from an International Space Station mission.”

After earning her PhD in 2020, Ekblaw decided to ask two researchers from the MIT community and the Space Exploration Initiative, Danielle DeLatte and Sana Sharma, to partner with her to further develop research projects, along with conducting space education and policy efforts. That collaboration turned into Aurelia.

“I wanted to scale the work I was doing with the Space Exploration Initiative, where we bring in students, introduce them to zero-g flights, and then some graduate to sub-orbital, and eventually flights to the International Space Station,” Ekblaw says. “What would it look like to bring that out of MIT and bring that opportunity to other students and mid-career people from all walks of life?”

Every year, Aurelia charters a microgravity flight, bringing about 25 people along to conduct 10 to 15 experiments. To date, nearly 200 people have participated in the flights across the Space Exploration Initiative and Aurelia, and more than 70 percent of those fliers have continued to pursue activities in the space industry post-flight.

Aurelia also offers open-source classes on designing research projects for microgravity environments and contributes to several education and community-building activities across academia, industry, and the arts.

In addition to those education efforts, Aurelia has continued testing and improving the TESSERAE system. In 2022, TESSERAE was brought on the first private mission to the International Space Station, where astronauts conducted tests around the system’s autonomous self-assembly, disassembly, and stability. Aurelia will return to the International Space Station in early 2026 for further testing as part of a recent grant from NASA.

The work led Aurelia to recently spin off the TESSERAE project into a separate, for-profit company. Ekblaw expects there to be more spinoffs out of Aurelia in coming years.

Designing for space, and Earth

The self-assembly work is only one project in Aurelia’s portfolio. Others are focused on designing human-scale pavilions and other habitats, including a space garden and a massive, 20-foot dome depicting the interior of future space architectures. This space habitat pavilion was recently deployed as part of a six-month exhibit at the Seattle Museum of Flight.

“The architectural work is asking, ‘How are we going to outfit these systems and actually make the habitats part of a life worth living?’” Ekblaw explains.

With all of its work, Aurelia’s team looks at space as a testbed to bring new technologies and ideas back to our own planet.

“When you design something for the rigors of space, you often hit on really robust technologies for Earth,” she says.


Changing the conversation in health care

The Language/AI Incubator, an MIT Human Insight Collaborative project, is investigating how AI can improve communications among patients and practitioners.


Generative artificial intelligence is transforming the ways humans write, read, speak, think, empathize, and act within and across languages and cultures. In health care, gaps in communication between patients and practitioners can worsen patient outcomes and prevent improvements in practice and care. The Language/AI Incubator, made possible through funding from the MIT Human Insight Collaborative (MITHIC), offers a potential response to these challenges. 

The project envisions a research community rooted in the humanities that will foster interdisciplinary collaboration across MIT to deepen understanding of generative AI’s impact on cross-linguistic and cross-cultural communication. The project’s focus on health care and communication seeks to build bridges across socioeconomic, cultural, and linguistic strata.

The incubator is co-led by Leo Celi, a physician who is research director and a senior research scientist with the Institute for Medical Engineering and Science (IMES), and Per Urlaub, professor of the practice in German and second language studies and director of MIT’s Global Languages program.

“The basis of health care delivery is the knowledge of health and disease,” Celi says. “We’re seeing poor outcomes despite massive investments because our knowledge system is broken.”

A chance collaboration

Urlaub and Celi met during a MITHIC launch event. Conversations during the event reception revealed a shared interest in exploring improvements in medical communication and practice with AI.

“We’re trying to incorporate data science into health-care delivery,” Celi says. “We’ve been recruiting social scientists [at IMES] to help advance our work, because the science we create isn’t neutral.”

Language is a non-neutral mediator in health care delivery, the team believes, and can be a boon or barrier to effective treatment. “Later, after we met, I joined one of his working groups whose focus was metaphors for pain: the language we use to describe it and its measurement,” Urlaub continues. “One of the questions we considered was how effective communication can occur between doctors and patients.”

Technology, they argue, shapes everyday communication, and its impact depends on both users and creators. As AI and large language models (LLMs) gain power and prominence, their use is broadening to include fields like health care and wellness.

Rodrigo Gameiro, a physician and researcher with MIT’s Laboratory for Computational Physiology, is another program participant. He notes that work at the laboratory centers on responsible AI development and implementation. Designing systems that leverage AI effectively, particularly across the linguistic and cultural divides that can arise in health care, demands a nuanced approach.

“When we build AI systems that interact with human language, we’re not just teaching machines how to process words; we’re teaching them to navigate the complex web of meaning embedded in language,” Gameiro says.

Language’s complexities can impact treatment and patient care. “Pain can only be communicated through metaphor,” Urlaub continues, “but metaphors don’t always match, linguistically and culturally.” Smiley faces and one-to-10 scales — pain measurement tools English-speaking medical professionals may use to assess their patients — may not travel well across racial, ethnic, cultural, and language boundaries.

“Science has to have a heart” 

LLMs can potentially help scientists improve health care, although there are some systemic and pedagogical challenges to consider. Science can focus on outcomes to the exclusion of the people it’s meant to help, Celi argues. “Science has to have a heart,” he says. “Measuring students’ effectiveness by counting the number of papers they publish or patents they produce misses the point.”

The point, Urlaub says, is to investigate carefully while simultaneously acknowledging what we don’t know, citing what philosophers call epistemic humility. Knowledge, the investigators argue, is provisional and always incomplete. Deeply held beliefs may require revision in light of new evidence.

“No one’s mental view of the world is complete,” Celi says. “You need to create an environment in which people are comfortable acknowledging their biases.”

“How do we share concerns between language educators and others interested in AI?” Urlaub asks. “How do we identify and investigate the relationship between medical professionals and language educators interested in AI’s potential to aid in the elimination of gaps in communication between doctors and patients?” 

Language, in Gameiro’s estimation, is more than just a tool for communication. “It reflects culture, identity, and power dynamics,” he says. In situations where a patient might not be comfortable describing pain or discomfort because of the physician’s position as an authority, or because their culture demands yielding to those perceived as authority figures, misunderstandings can be dangerous. 

Changing the conversation

AI’s facility with language can help medical professionals navigate these areas more carefully, providing digital frameworks that offer valuable cultural and linguistic context, so that patients and practitioners can rely on data-driven, research-supported tools to improve dialogue. Institutions need to reconsider how they educate medical professionals and invite the communities they serve into the conversation, the team says.

“We need to ask ourselves what we truly want,” Celi says. “Why are we measuring what we’re measuring?” The biases we bring with us to these interactions — doctors, patients, their families, and their communities — remain barriers to improved care, Urlaub and Gameiro say.

“We want to connect people who think differently, and make AI work for everyone,” Gameiro continues. “Technology without purpose is just exclusion at scale.”

“Collaborations like these can allow for deep processing and better ideas,” Urlaub says.

Creating spaces where ideas about AI and health care can potentially become actions is a key element of the project. The Language/AI Incubator hosted its first colloquium at MIT in May, which was led by Mena Ramos, a physician and the co-founder and CEO of the Global Ultrasound Institute.

The colloquium also featured presentations from Celi, as well as Alfred Spector, a visiting scholar in MIT’s Department of Electrical Engineering and Computer Science, and Douglas Jones, a senior staff member in the MIT Lincoln Laboratory’s Human Language Technology Group. A second Language/AI Incubator colloquium is planned for August.

Greater integration between the social and hard sciences can potentially increase the likelihood of developing viable solutions and reducing biases. Allowing for shifts in the ways patients and doctors view the relationship, while offering each shared ownership of the interaction, can help improve outcomes. Facilitating these conversations with AI may speed the integration of these perspectives. 

“Community advocates have a voice and should be included in these conversations,” Celi says. “AI and statistical modeling can’t collect all the data needed to treat all the people who need it.”

Community needs and improved educational opportunities and practices should be coupled with cross-disciplinary approaches to knowledge acquisition and transfer. The ways people see things are limited by their perceptions and other factors. “Whose language are we modeling?” Gameiro asks about building LLMs. “Which varieties of speech are being included or excluded?” Since meaning and intent can shift across those contexts, it’s important to keep those shifts in mind when designing AI tools.

“AI is our chance to rewrite the rules”

While there’s lots of potential in the collaboration, there are serious challenges to overcome, including establishing and scaling the technological means to improve patient-provider communication with AI, extending opportunities for collaboration to marginalized and underserved communities, and reconsidering and revamping patient care. 

But the team isn’t daunted.

Celi believes there are opportunities to address the widening gap between people and practitioners while addressing gaps in health care. “Our intent is to reattach the string that’s been cut between society and science,” he says. “We can empower scientists and the public to investigate the world together while also acknowledging the limitations engendered in overcoming their biases.”

Gameiro is a passionate advocate for AI’s ability to change everything we know about medicine. “I’m a medical doctor, and I don’t think I’m being hyperbolic when I say I believe AI is our chance to rewrite the rules of what medicine can do and who we can reach,” he says.

“Education changes humans from objects to subjects,” Urlaub argues, describing the difference between disinterested observers and active and engaged participants in the new care model he hopes to build. “We need to better understand technology’s impact on the lines between these states of being.”

Celi, Gameiro, and Urlaub each advocate for MITHIC-like spaces across health care, places where innovation and collaboration are allowed to occur without the kinds of arbitrary benchmarks institutions have previously used to mark success.

“AI will transform all these sectors,” Urlaub believes. “MITHIC is a generous framework that allows us to embrace uncertainty with flexibility.”

“We want to employ our power to build community among disparate audiences while admitting we don’t have all the answers,” Celi says. “If we fail, it’s because we failed to dream big enough about how a reimagined world could look.”


AI shapes autonomous underwater “gliders”

An AI pipeline developed by CSAIL researchers enables unique hydrodynamic designs for bodyboard-sized vehicles that glide underwater and could help scientists gather marine data.


Marine scientists have long marveled at how animals like fish and seals swim so efficiently despite having different shapes. Their bodies are optimized for efficient, hydrodynamic aquatic navigation so they can exert minimal energy when traveling long distances.

Autonomous vehicles can drift through the ocean in a similar way, collecting data about vast underwater environments. However, the shapes of these gliding machines are less diverse than what we find in marine life — go-to designs often resemble tubes or torpedoes, since they’re fairly hydrodynamic as well. Plus, testing new builds requires lots of real-world trial-and-error.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Wisconsin at Madison propose that AI could help us explore uncharted glider designs more conveniently. Their method uses machine learning to test different 3D designs in a physics simulator, then molds them into more hydrodynamic shapes. The resulting designs can be fabricated with a 3D printer and glide using significantly less energy than hand-made ones.

The MIT scientists say that this design pipeline could create new, more efficient machines that help oceanographers measure water temperature and salt levels, gather more detailed insights about currents, and monitor the impacts of climate change. The team demonstrated this potential by producing two gliders roughly the size of a boogie board: a two-winged machine resembling an airplane, and a unique four-winged object resembling a flat fish.

Peter Yichen Chen, MIT CSAIL postdoc and co-lead researcher on the project, notes that these designs are just a few of the novel shapes his team’s approach can generate. “We’ve developed a semi-automated process that can help us test unconventional designs that would be very taxing for humans to design,” he says. “This level of shape diversity hasn’t been explored previously, so most of these designs haven’t been tested in the real world.”

But how did AI come up with these ideas in the first place? First, the researchers found 3D models of over 20 conventional sea exploration shapes, such as submarines, whales, manta rays, and sharks. Then, they enclosed these models in “deformation cages” that map out different articulation points that the researchers pulled around to create new shapes.

The CSAIL-led team built a dataset of conventional and deformed shapes before simulating how they would perform at different “angles-of-attack” — the direction a vessel will tilt as it glides through the water. For example, a swimmer may want to dive at a -30 degree angle to retrieve an item from a pool.

These diverse shapes and angles of attack were then used as inputs for a neural network that essentially anticipates how efficiently a glider shape will perform at particular angles and optimizes it as needed.
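In code, such a surrogate might look like the following minimal sketch in PyTorch. Everything here (the names, the 64 shape parameters, the placeholder data) is an illustrative assumption, not the CSAIL team’s implementation:

```python
# Minimal sketch of a surrogate model for glider performance (hypothetical;
# not the published pipeline). The network maps deformation-cage shape
# parameters plus an angle of attack to predicted lift and drag.
import torch
import torch.nn as nn

class GliderSurrogate(nn.Module):
    def __init__(self, n_shape_params: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_shape_params + 1, 128),  # +1 input for angle of attack
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, 2),  # outputs: predicted [lift, drag]
        )

    def forward(self, shape_params, angle_of_attack):
        x = torch.cat([shape_params, angle_of_attack], dim=-1)
        return self.net(x)

model = GliderSurrogate(n_shape_params=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random placeholder data standing in for the simulator's results.
shapes = torch.randn(256, 64)           # deformation-cage control offsets
angles = torch.rand(256, 1) * 60 - 30   # angles of attack in [-30, 30] degrees
targets = torch.randn(256, 2)           # simulated [lift, drag] values

for epoch in range(100):
    loss = loss_fn(model(shapes, angles), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```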

Giving gliding robots a lift

The team’s neural network simulates how a particular glider would react to underwater physics, aiming to capture how it moves forward and the force that drags against it. The goal: find the best lift-to-drag ratio, representing how much the glider is being held up compared to how much it’s being held back. The higher the ratio, the more efficiently the vehicle travels; the lower it is, the more the glider will slow down during its voyage.

Lift-to-drag ratios are key for flying planes: At takeoff, you want to maximize lift so the plane can glide well against wind currents, and when landing, you need sufficient drag to bring it to a full stop.

Niklas Hagemann, an MIT graduate student in architecture and CSAIL affiliate, notes that this ratio is just as useful if you want a similar gliding motion in the ocean.

“Our pipeline modifies glider shapes to find the best lift-to-drag ratio, optimizing its performance underwater,” says Hagemann, who is also a co-lead author on a paper that was presented at the International Conference on Robotics and Automation in June. “You can then export the top-performing designs so they can be 3D-printed.”
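Because a neural surrogate like the one sketched above is differentiable, the shape parameters themselves can be tuned by gradient ascent on the predicted lift-to-drag ratio. The continuation below is again an illustrative assumption, not the team’s exported pipeline:

```python
# Optimize the shape for lift-to-drag at a fixed angle of attack by running
# gradients through the trained surrogate (continues the sketch above).
shape = torch.zeros(1, 64, requires_grad=True)  # start from the undeformed cage
angle = torch.full((1, 1), 9.0)                 # fixed angle of attack, degrees
opt = torch.optim.Adam([shape], lr=1e-2)

for step in range(500):
    lift, drag = model(shape, angle).unbind(dim=-1)
    ratio = lift / (drag.abs() + 1e-6)  # lift-to-drag ratio to maximize
    loss = -ratio.mean()                # negate so minimizing = ascending
    opt.zero_grad()
    loss.backward()
    opt.step()

# The optimized `shape` offsets would then be applied to the deformation cage
# and the resulting mesh exported for 3D printing.
```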

Going for a quick glide

While their AI pipeline seemed realistic, the researchers needed to ensure its predictions about glider performance were accurate by experimenting in more lifelike environments.

They first fabricated their two-wing design as a scaled-down vehicle resembling a paper airplane. This glider was taken to MIT’s Wright Brothers Wind Tunnel, an indoor space with fans that simulate wind flow. Across the different angles tested, the glider’s predicted lift-to-drag ratios were only about 5 percent higher on average than those recorded in the wind experiments — a small difference between simulation and reality.

A digital evaluation involving a visual, more complex physics simulator also supported the notion that the AI pipeline made fairly accurate predictions about how the gliders would move. It visualized how these machines would descend in 3D.

To truly evaluate these gliders in the real world, though, the team needed to see how their devices would fare underwater. They printed the two designs that performed the best at specific angles of attack for this test: a jet-like device at 9 degrees and the four-wing vehicle at 30 degrees.

Both shapes were fabricated in a 3D printer as hollow shells with small holes that flood when fully submerged. This lightweight design makes the vehicle easier to handle outside of the water and requires less material to fabricate. The researchers placed a tube-like device inside these shell coverings, which housed a range of hardware, including a pump to change the glider’s buoyancy, a mass shifter (a device that controls the machine’s angle-of-attack), and electronic components.

Each design outperformed a handmade torpedo-shaped glider by moving more efficiently across a pool. With higher lift-to-drag ratios than their counterpart, both AI-driven machines exerted less energy, similar to the effortless ways marine animals navigate the oceans.

As much as the project is an encouraging step forward for glider design, the researchers are looking to narrow the gap between simulation and real-world performance. They are also hoping to develop machines that can react to sudden changes in currents, making the gliders more adaptable to seas and oceans.

Chen adds that the team is looking to explore new types of shapes, particularly thinner glider designs. They intend to make their framework faster, perhaps bolstering it with new features that enable more customization, maneuverability, or even the creation of miniature vehicles.

Chen and Hagemann co-led research on this project with OpenAI researcher Pingchuan Ma SM ’23, PhD ’25. They authored the paper with Wei Wang, a University of Wisconsin at Madison assistant professor and recent CSAIL postdoc; John Romanishin ’12, SM ’18, PhD ’23; and two MIT professors and CSAIL members: lab director Daniela Rus and senior author Wojciech Matusik. Their work was supported, in part, by a Defense Advanced Research Projects Agency (DARPA) grant and the MIT-GIST Program.


Collaborating with the force of nature

Ongoing research by three architecture faculty aims to yield structures that protect communities from the devastation of volcanic eruptions.


Common sense tells us to run from molten lava flowing from active volcanoes. But MIT professors J. Jih, Cristina Parreño Alonso, and Skylar Tibbits — faculty in the Department of Architecture at the School of Architecture and Planning — have their bags packed to head to southwest Iceland in anticipation of an imminent volcanic eruption. The Nordic island nation is currently experiencing a period of intense seismic activity; seven volcanic eruptions have taken place in its southern peninsula in under a year.

Earlier this year, the faculty built and placed a series of lightweight, easily deployable steel structures close to the volcano, where a few of the recent eruptions have taken place; several more structures are on trucks waiting to be delivered to sites where fissures open and lava oozes out. Cameras are in place to record what happens when the lava reaches these structures, to help the team understand the lava flows.

This new research explores what types of shapes and materials can be used to interact with lava and successfully divert it away from habitats or critical infrastructure in its path. Their work is supported by a Professor Amar G. Bose Research Grant.

“We’re trying to imagine new ways of conceptualizing infrastructure when it relates to lava and volcanic eruptions,” says Jih, an associate professor of the practice. “Lovely for us as designers, physical prototyping is the only way you can test some of these ideas out.” 

Currently, the Icelandic Department of Civic Protection and Emergency Management and an engineering group, EFLA, are diverting the lava with massive berms (approximately 44 to 54 yards in length and 9 yards in height) made from earth and stone.

Berms protecting the town of Grindavik, a power plant, and the popular Blue Lagoon geothermal spa have met with mixed results. In November 2024, a volcano erupted for the seventh time in less than a year, forcing the evacuation of town residents and the Blue Lagoon’s guests and employees. The latter’s parking lot was consumed by lava.

Sigurdur Thorsteinsson, chief brand, design, and innovation officer of the Blue Lagoon, as well as a designer and a partner in Design Group Italia, was on site for this eruption and several others.

“Some magma went into the city of Grindavik and three or four houses were destroyed,” says Thorsteinsson. “One of our employees watched her house go under magma on television, which was an emotional moment.”

While staff at the Blue Lagoon have become very efficient at evacuating guests, says Thorsteinsson, each eruption forces the tourist destination to close and townspeople to evacuate, disrupting lives and livelihoods.

“You cannot really stop the magma,” says Thorsteinsson, who is working with the MIT faculty on this research project. “It’s too powerful.”

Tibbits, associate professor of design research and founder and co-director of the Self-Assembly Lab, agrees. His research explores how to guide or work with the forces of nature.

Last year, Tibbits and Jih were in Iceland on another research project when erupting volcanoes interrupted their work. The two started thinking about how the lava could be redirected.

“The question is: Can we find more strategic interventions in the field that could work with the lava, rather than fight it?” says Tibbits.

To investigate what kinds of materials would withstand this type of interaction, they invited Parreño Alonso, a senior lecturer in the Department of Architecture, to join them.

“Cristina, being the department authority on magma, was an obvious and important partner for us,” says Jih with a smile.

Parreño Alonso has been working with volcanic rock for years and has taught a series of design studios exploring volcanic rock as an architectural material. She has also proposed designing structures to engage directly with lava flows, and recently has been examining volcanic rock in its molten state, melting basalt in MIT’s foundry with Michael Tarkanian, a senior lecturer in MIT’s Department of Materials Science and Engineering and director of its Metals Lab. For this project, she is exploring the potential of molten rock as a substitute for concrete, a widely used material because of its pliability.

“It’s exciting how this idea of working with volcanoes was taking shape in parallel, from different angles, within the same department,” says Parreño Alonso. “I love how these parallel interests have led to such a beautiful collaboration.”

She also sees other opportunities in collaborating with these forces of nature.

“We are interested in the potential of generating something out of the interaction with the lava,” she says. “Could it be a landscape that becomes a park? There are many possibilities.”

The steel structures were first tested at MIT’s Metals Lab with Tarkanian and then built onsite in Iceland. The team wanted to make the structures lightweight so they could be quickly set up in the field, but strong enough that they wouldn’t be easily destroyed. Various designs were created; the current iteration features V-shaped structures that can guide the lava to flow around them, or be reconfigured as ramps or tunnels.

“There is a road that has been hit by many of the recent eruptions and must keep being rebuilt,” says Tibbits. “We created two ramps that could in the future serve as tunnels, allowing the lava to flow over the road and create a type of lava cave where the cars could drive under the cooled lava.”

Tibbits says they see the structures in the field now as an initial intervention. After documenting and studying how they interact with the lava, the architects will develop new iterations of what they believe will eventually become critical infrastructure for locations around the world with active volcanoes.

“If we can show and prove what kinds of shapes and structures and what kinds of materials can divert magma flows, I think it’s incredibly valuable research,” says Thorsteinsson.

Thorsteinsson lives in Italy half of the year and says the volcanoes there — Mount Etna in Sicily and Mount Vesuvius in the Gulf of Naples — pose a greater danger than those in Iceland because of the densely populated neighborhoods nearby. Volcanoes in Hawaii and Japan are in similarly populated areas.

“Whatever information you can learn about diverting magma flows to other directions and what kinds of structures are needed — it would be priceless,” he says.


Implantable device could save diabetes patients from dangerously low blood sugar

The new implant carries a reservoir of glucagon that can be stored under the skin and deployed during an emergency — with no injections needed.


For people with Type 1 diabetes, developing hypoglycemia, or low blood sugar, is an ever-present threat. When glucose levels become extremely low, it creates a life-threatening situation for which the standard treatment is injecting a hormone called glucagon.

As an emergency backup, for cases where patients may not realize that their blood sugar is dropping to dangerous levels, MIT engineers have designed an implantable reservoir that can remain under the skin and be triggered to release glucagon when blood sugar levels get too low.

This approach could also help in cases where hypoglycemia occurs during sleep, or for diabetic children who are unable to administer injections on their own.

“This is a small, emergency-event device that can be placed under the skin, where it is ready to act if the patient’s blood sugar drops too low,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES), and the senior author of the study. “Our goal was to build a device that is always ready to protect patients from low blood sugar. We think this can also help relieve the fear of hypoglycemia that many patients, and their parents, suffer from.”

The researchers showed that this device could also be used to deliver emergency doses of epinephrine, a drug that is used to treat heart attacks and can also prevent severe allergic reactions, including anaphylactic shock.

Siddharth Krishnan, a former MIT research scientist who is now an assistant professor of electrical engineering at Stanford University, is the lead author of the study, which appears today in Nature Biomedical Engineering.

Emergency response

Most patients with Type 1 diabetes use daily insulin injections to help their body absorb sugar and prevent their blood sugar levels from getting too high. However, if their blood sugar levels get too low, they develop hypoglycemia, which can lead to confusion and seizures, and may be fatal if it goes untreated.

To combat hypoglycemia, some patients carry preloaded syringes of glucagon, a hormone that stimulates the liver to release glucose into the bloodstream. However, it isn’t always easy for people, especially children, to know when they are becoming hypoglycemic.

“Some patients can sense when they’re getting low blood sugar, and go eat something or give themselves glucagon,” Anderson says. “But some are unaware that they’re hypoglycemic, and they can just slip into confusion and coma. This is also a problem when patients sleep, as they are reliant on glucose sensor alarms to wake them when sugar drops dangerously low.”

To make it easier to counteract hypoglycemia, the MIT team set out to design an emergency device that could be triggered either by the person using it, or automatically by a sensor.

The device, which is about the size of a quarter, contains a small drug reservoir made of a 3D-printed polymer. The reservoir is sealed with a special material known as a shape-memory alloy, which can be programmed to change its shape when heated. In this case, the researchers used a nickel-titanium alloy that is programmed to curl from a flat slab into a U-shape when heated to 40 degrees Celsius.

Like many other protein or peptide drugs, glucagon tends to break down quickly, so the liquid form can’t be stored long-term in the body. Instead, the MIT team created a powdered version of the drug, which remains stable for much longer and stays in the reservoir until released.

Each device can carry either one or four doses of glucagon, and it also includes an antenna tuned to respond to a specific frequency in the radiofrequency range. That allows it to be remotely triggered to turn on a small electrical current, which is used to heat the shape-memory alloy. When the temperature reaches the 40-degree threshold, the slab bends into a U shape, releasing the contents of the reservoir.

Because the device can receive wireless signals, it could also be designed so that drug release is triggered by a glucose monitor when the wearer’s blood sugar drops below a certain level.
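A rough sketch of that closed loop appears below; `read_glucose` and `send_rf_trigger` are hypothetical stand-ins for the monitor and implant interfaces, and the 70 mg/dL threshold is an assumed value for illustration:

```python
import time

HYPOGLYCEMIA_THRESHOLD_MG_DL = 70  # assumed trigger level, illustration only

def monitor_loop(read_glucose, send_rf_trigger, poll_seconds=60):
    """Poll a continuous glucose monitor and fire the implant's wireless
    trigger once if glucose falls below the threshold. Both callables are
    hypothetical stand-ins for the real sensor and implant interfaces."""
    while True:
        glucose_mg_dl = read_glucose()
        if glucose_mg_dl < HYPOGLYCEMIA_THRESHOLD_MG_DL:
            # The RF signal drives a current that heats the shape-memory
            # alloy past 40 degrees Celsius, curling it open and releasing
            # the glucagon dose.
            send_rf_trigger()
            return
        time.sleep(poll_seconds)
```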

“One of the key features of this type of digital drug delivery system is that you can have it talk to sensors,” Krishnan says. “In this case, the continuous glucose-monitoring technology that a lot of patients use is something that would be easy for these types of devices to interface with.”

Reversing hypoglycemia

After implanting the device in diabetic mice, the researchers used it to trigger glucagon release as the animals’ blood sugar levels were dropping. Within less than 10 minutes of activating the drug release, blood sugar levels began to level off, allowing them to remain within the normal range and avert hypoglycemia.

The researchers also tested the device with a powdered version of epinephrine. They found that within 10 minutes of drug release, epinephrine levels in the bloodstream became elevated and heart rate increased.

In this study, the researchers kept the devices implanted for up to four weeks, but they now plan to see if they can extend that time up to at least a year.

“The idea is you would have enough doses that can provide this therapeutic rescue event over a significant period of time. We don’t know exactly what that is — maybe a year, maybe a few years, and we’re currently working on establishing what the optimal lifetime is. But then after that, it would need to be replaced,” Krishnan says.

Typically, when a medical device is implanted in the body, scar tissue develops around the device, which can interfere with its function. However, in this study, the researchers showed that even after fibrotic tissue formed around the implant, they were able to successfully trigger the drug release.

The researchers are now planning for additional animal studies and hope to begin testing the device in clinical trials within the next three years.

“It’s really exciting to see our team accomplish this, which I hope will someday help diabetic patients and could more broadly provide a new paradigm for delivering any emergency medicine,” says Robert Langer, the David H. Koch Institute Professor at MIT and an author of the paper.

Other authors of the paper include Laura O’Keeffe, Arnab Rudra, Derin Gumustop, Nima Khatib, Claudia Liu, Jiawei Yang, Athena Wang, Matthew Bochenek, Yen-Chun Lu, Suman Bose, and Kaelan Reed.

The research was funded by the Leona M. and Harry B. Helmsley Charitable Trust, the National Institutes of Health, a JDRF postdoctoral fellowship, and the National Institute of Biomedical Imaging and Bioengineering.

This work was carried out, in part, through the use of MIT.nano’s facilities.


Processing our technological angst through humor

Associate Professor Benjamin Mangrum’s new book explores how we use comedy to cope with the growth of computer technology in modern life.


The first time Steve Jobs held a public demo of the Apple Macintosh, in early 1984, scripted jokes were part of the rollout. First, Jobs pulled the machine out of a bag. Then, using its speech synthesis software, the Macintosh made a quip about rival IBM’s mainframes: “Never trust a computer you can’t lift.”

There’s a reason Jobs was doing that. For the first few decades after computing became part of cultural life, starting in the 1950s, computers seemed unfriendly, grim, and liable to work against human interests. Take the 1968 film “2001: A Space Odyssey,” in which the onboard computer, HAL, turns against the expedition’s astronauts. It’s a famous cultural touchstone. Jobs, in selling the idea of a personal computer, was using humor to ease concerns about the machines.

“Against the sense of computing as cold and numbers-driven, the fact that this computer was using voice technology to deliver jokes made it seem less forbidding, less evil,” says MIT scholar Benjamin Mangrum.

In fact, this dynamic turns up throughout modern culture, in movies, television, fiction, and the theater. We often deal with our doubts and fears about computing through humor, whether reconciling ourselves to machines or critiquing them. Now, Mangrum analyzes this phenomenon in a new book, “The Comedy of Computation: Or, How I Learned to Stop Worrying and Love Obsolescence,” published this month by Stanford University Press.

“Comedy has been a form for making this technology seem ordinary,” says Mangrum, an associate professor in MIT’s literature program. “Where in other circumstances computing might seem inhuman or impersonal, comedy allows us to incorporate it into our lives in a way that makes it make sense.”

Reversals of fortune

Mangrum’s interest in the subject was sparked partly by William Marchant’s 1955 play, “The Desk Set” — a romantic comedy later turned into a film starring Katharine Hepburn and Spencer Tracy — which queries, among other things, how office workers will co-exist alongside computers.

Perhaps against expectations, romantic comedies have turned out to be one of the most prominent contemporary forms of culture that grapple with technology and its effects on us. Mangrum, in the book, explains why: Their plot structure often involves reversals, which sometimes are extended to technology, too. Computing might seem forbidding, but it might also pull people together.

“One of the common tropes about romantic comedies is that there are characters or factors in the drama that obstruct the happy union of two people,” Mangrum observes. “And often across the arc of the drama, the obstruction or obstructive character is transformed into a partner, or collaborator, and assimilated within the happy couple’s union. That provides a template for how some cultural producers want to present the experience of computing. It begins as an obstruction and ends as a partner.”

That plot structure, Mangrum notes, dates to antiquity and was common in Shakespeare’s day. Still, as he writes in the book, there is “no timeless reality called Comedy,” as the vehicles and forms of it change over time. Beyond that, specific jokes about computing can quickly become outmoded. Steve Jobs made fun of mainframes, and the 1998 Nora Ephron comedy “You’ve Got Mail” got laughs out of dial-up modems, but those jokes might leave most people puzzled today.

“Comedy is not a fixed resource,” Mangrum says. “It’s an ever-changing toolbox.”

Continuing this evolution into the 21st century, Mangrum observes that a lot of computational comedy centers on an entire category of commentary he calls “the Great Tech-Industrial Joke.” This focuses on the gap between noble-sounding declared aspirations of technology and the sometimes-dismal outcomes it creates.

Social media, for instance, promised new worlds of connectivity and social exploration, and has benefits people enjoy — but it has also generated polarization, misinformation, and toxicity. Technology’s social effects are complex. Whole television shows, such as “Silicon Valley,” have dug into this terrain.

“The tech industry announces that some of its products have revolutionary or utopian aims, but the achievements of many of them fall far short of that,” Mangrum says. “It’s a funny setup for a joke. People have been claiming we’re saving the world, when actually we’re just processing emails faster. But it’s a mode of criticism aimed at big tech, since its products are more complicated.”

A complicated, messy picture

“The Comedy of Computation” digs into several other facets of modern culture and technology. The notion of personal authenticity, as Mangrum observes, is a fairly recent and modern construct in society — and it’s another sphere of life that collides with computing, since social media is full of charges of inauthenticity.

“That ethics of authenticity connects to comedy, as we make jokes about people not being authentic,” Mangrum says.

“The Comedy of Computation” has received praise from other scholars. Mark Goble, a professor of English at the University of California at Berkeley, has called it “essential for understanding the technological world in its complexity, absurdity, and vibrancy.”

For his part, Mangrum emphasizes that his book is an exploration of the full complexity of technology, culture, and society.

“There’s this really complicated, messy picture,” Mangrum says. “And comedy sometimes finds a way of experiencing and finding pleasure in that messiness, and other times it neatly wraps it up in a lesson that can make things neater than they actually are.”

Mangrum adds that the book focuses on “the combination of the threat and pleasure that’s involved across the history of the computer, in the ways it’s been assimilated and shaped society, with real advances and benefits, along with real threats, for instance to employment. I’m interested in the duality, the simultaneous and seemingly conflicting features of that experience.”


Study could lead to LLMs that are better at complex reasoning

Researchers developed a way to make large language models more adaptable to challenging tasks like strategic planning or process optimization.


For all their impressive capabilities, large language models (LLMs) often fall short when given challenging new tasks that require complex reasoning skills.

While an accounting firm’s LLM might excel at summarizing financial reports, that same model could fail unexpectedly if tasked with predicting market trends or identifying fraudulent transactions.

To make LLMs more adaptable, MIT researchers investigated how a certain training technique can be strategically deployed to boost a model’s performance on unfamiliar, difficult problems.

They show that test-time training, a method that involves temporarily updating some of a model’s inner workings during deployment, can lead to a sixfold improvement in accuracy. The researchers developed a framework for implementing a test-time training strategy that uses examples of the new task to maximize these gains.

Their work could improve a model’s flexibility, enabling an off-the-shelf LLM to adapt to complex tasks that require planning or abstraction. This could lead to LLMs that would be more accurate in many applications that require logical deduction, from medical diagnostics to supply chain management.

“Genuine learning — what we did here with test-time training — is something these models can’t do on their own after they are shipped. They can’t gain new skills or get better at a task. But we have shown that if you push the model a little bit to do actual learning, you see that huge improvements in performance can happen,” says Ekin Akyürek PhD ’25, lead author of the study.

Akyürek is joined on the paper by graduate students Mehul Damani, Linlu Qiu, Han Guo, and Jyothish Pari; undergraduate Adam Zweiger; and senior authors Yoon Kim, an assistant professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Jacob Andreas, an associate professor in EECS and a member of CSAIL. The research will be presented at the International Conference on Machine Learning.

Tackling hard domains

LLM users often try to improve the performance of their model on a new task using a technique called in-context learning. They feed the model a few examples of the new task as text prompts, which guide the model’s outputs.
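In code, in-context learning is nothing more than prompt construction, as in this minimal illustration (the toy task and formatting are invented for the example):

```python
# In-context learning: task examples are concatenated into the prompt so the
# model can infer the pattern. No parameters are updated.
examples = [
    ("input: 2, 4, 6", "output: 8"),
    ("input: 1, 3, 5", "output: 7"),
]
query = "input: 10, 12, 14"

prompt = "\n".join(f"{x}\n{y}" for x, y in examples) + f"\n{query}\noutput:"
print(prompt)  # this text would be sent to the LLM as-is
```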

But in-context learning doesn’t always work for problems that require logic and reasoning.

The MIT researchers investigated how test-time training can be used in conjunction with in-context learning to boost performance on these challenging tasks. Test-time training involves updating some model parameters — the internal variables it uses to make predictions — using a small amount of new data specific to the task at hand.

The researchers explored how test-time training interacts with in-context learning. They studied design choices that maximize the performance improvements one can coax out of a general-purpose LLM.

“We find that test-time training is a much stronger form of learning. While simply providing examples can modestly boost accuracy, actually updating the model with those examples can lead to significantly better performance, particularly in challenging domains,” Damani says.

In-context learning requires a small set of task examples, including problems and their solutions. The researchers use these examples to create a task-specific dataset needed for test-time training.

To expand the size of this dataset, they create new inputs by slightly changing the problems and solutions in the examples, such as by horizontally flipping some input data. They find that training the model on this expanded dataset leads to the best performance.
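A sketch of that augmentation step follows, assuming grid-like puzzle examples in which horizontally flipping both the problem and its solution yields a valid new training pair (an assumption made for illustration):

```python
# Expand a small example set by mirroring each (input, output) grid pair.
import numpy as np

def augment(examples):
    """Return the original pairs plus horizontally flipped copies."""
    augmented = list(examples)
    for x, y in examples:
        augmented.append((np.fliplr(x), np.fliplr(y)))
    return augmented

grid_in = np.array([[1, 0], [0, 1]])
grid_out = np.array([[0, 1], [1, 0]])
dataset = augment([(grid_in, grid_out)])  # one example becomes two
```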

In addition, the researchers only update a small number of model parameters using a technique called low-rank adaptation, which improves the efficiency of the test-time training process.

“This is important because our method needs to be efficient if it is going to be deployed in the real world. We find that you can get huge improvements in accuracy with a very small amount of parameter training,” Akyürek says.
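The efficiency argument is easy to see in a minimal, self-contained rendering of low-rank adaptation. This sketch shows the general technique, not the paper’s code: a frozen linear layer gains a small trainable update of rank r, so only r * (d_in + d_out) parameters are trained rather than d_in * d_out:

```python
# Low-rank adaptation (LoRA) in miniature: freeze the pretrained weights and
# train only a rank-r update B @ A added to the layer's output.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 8,192 trainable parameters vs. 262,656 in the frozen base
```

Because the adapter weights live apart from the frozen base, discarding them restores the original model, which matches the temporary nature of the updates described below.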

Developing new skills

Streamlining the process is key, since test-time training is employed on a per-instance basis, meaning a user would need to do this for each individual task. The updates to the model are only temporary, and the model reverts to its original form after making a prediction.

A model that usually takes less than a minute to answer a query might take five or 10 minutes to provide an answer with test-time training, Akyürek adds.

“We wouldn’t want to do this for all user queries, but it is useful if you have a very hard task that you want the model to solve well. There also might be tasks that are too challenging for an LLM to solve without this method,” he says.

The researchers tested their approach on two benchmark datasets of extremely complex problems, such as IQ puzzles. It boosted accuracy as much as sixfold over techniques that use only in-context learning.

Tasks that involved structured patterns or completely unfamiliar types of data showed the largest performance improvements.

“For simpler tasks, in-context learning might be OK. But updating the parameters themselves might develop a new skill in the model,” Damani says.

In the future, the researchers want to use these insights toward the development of models that continually learn.

The long-term goal is an LLM that, given a query, can automatically determine if it needs to use test-time training to update parameters or if it can solve the task using in-context learning, and then implement the best test-time training strategy without the need for human intervention.

This work is supported, in part, by the MIT-IBM Watson AI Lab and the National Science Foundation.


MIT chemists boost the efficiency of a key enzyme in photosynthesis

The enzyme, known as rubisco, helps plants and photosynthetic bacteria incorporate carbon dioxide into sugars.


During photosynthesis, an enzyme called rubisco catalyzes a key reaction — the incorporation of carbon dioxide into organic compounds to create sugars. However, rubisco, which is believed to be the most abundant enzyme on Earth, is very inefficient compared to the other enzymes involved in photosynthesis.

MIT chemists have now shown that they can greatly enhance a version of rubisco found in bacteria from a low-oxygen environment. Using a process known as directed evolution, they identified mutations that could boost rubisco’s catalytic efficiency by up to 25 percent.

The researchers now plan to apply their technique to forms of rubisco that could be used in plants to help boost their rates of photosynthesis, which could potentially improve crop yields.

“This is, I think, a compelling demonstration of successful improvement of a rubisco’s enzymatic properties, holding out a lot of hope for engineering other forms of rubisco,” says Matthew Shoulders, the Class of 1942 Professor of Chemistry at MIT.

Shoulders and Robert Wilson, a research scientist in the Department of Chemistry, are the senior authors of the new study, which appears this week in the Proceedings of the National Academy of Sciences. MIT graduate student Julie McDonald is the paper’s lead author.

Evolution of efficiency

When plants or photosynthetic bacteria absorb energy from the sun, they first convert it into energy-storing molecules such as ATP. In the next phase of photosynthesis, cells use that energy to transform a molecule known as ribulose bisphosphate into glucose, which requires several additional reactions. Rubisco catalyzes the first of those reactions, known as carboxylation. During that reaction, carbon from CO2 is added to ribulose bisphosphate.

Compared to the other enzymes involved in photosynthesis, rubisco is very slow, catalyzing only one to 10 reactions per second. Additionally, rubisco can interact with oxygen, leading to a competing reaction that incorporates oxygen instead of carbon — a process that wastes some of the energy absorbed from sunlight.
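That competition can be made concrete with textbook enzyme kinetics (not measurements from this study): rubisco’s specificity factor compares its catalytic efficiency, kcat/Km, for CO2 against that for O2, and the ratio of carboxylation to oxygenation scales with the specificity factor times the CO2/O2 concentration ratio. A short sketch with illustrative numbers:

```python
# Rubisco's CO2-vs-O2 selectivity in standard Michaelis-Menten terms.
# All kinetic constants and gas concentrations below are illustrative.

def specificity_factor(kcat_c, km_c, kcat_o, km_o):
    """Ratio of catalytic efficiencies: carboxylation over oxygenation."""
    return (kcat_c / km_c) / (kcat_o / km_o)

def carboxylation_fraction(S, co2, o2):
    """Fraction of reactions that fix CO2 rather than O2."""
    ratio = S * co2 / o2  # carboxylations per oxygenation
    return ratio / (1 + ratio)

S = specificity_factor(kcat_c=5.0, km_c=30.0, kcat_o=1.0, km_o=500.0)
# Rough dissolved concentrations (micromolar) in air-equilibrated water:
print(carboxylation_fraction(S, co2=13.0, o2=250.0))  # ~0.81
```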

“For protein engineers, that’s a really attractive set of problems because those traits seem like things that you could hopefully make better by making changes to the enzyme’s amino acid sequence,” McDonald says.

Previous research has led to improvement in rubisco’s stability and solubility, which resulted in small gains in enzyme efficiency. Most of those studies used directed evolution — a technique in which a naturally occurring protein is randomly mutated and then screened for the emergence of new, desirable features.

This process is usually done using error-prone PCR, a technique that first generates mutations in vitro (outside of the cell), typically introducing only one or two mutations in the target gene. In past studies on rubisco, this library of mutations was then introduced into bacteria whose growth rate depends on rubisco activity. Limitations in error-prone PCR and in the efficiency of introducing new genes restrict the total number of mutations that can be generated and screened using this approach. Manual mutagenesis and selection steps also add more time to the process over multiple rounds of evolution.

The MIT team instead used a newer mutagenesis technique that the Shoulders Lab previously developed, called MutaT7. This technique allows the researchers to perform both mutagenesis and screening in living cells, which dramatically speeds up the process. Their technique also enables them to mutate the target gene at a higher rate.

“Our continuous directed evolution technique allows you to look at a lot more mutations in the enzyme than has been done in the past,” McDonald says.
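The mutate-screen-select cycle itself can be caricatured in a few lines of simulation. The toy loop below illustrates directed evolution in general, not MutaT7 or the lab’s actual protocol:

```python
# Toy directed evolution: mutate a sequence, screen a library, keep the best.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq, rate=0.02):
    """Randomly substitute residues at the given per-position rate."""
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else aa
                   for aa in seq)

def evolve(seq, fitness, rounds=6, library_size=1000):
    for _ in range(rounds):
        library = [mutate(seq) for _ in range(library_size)]
        seq = max(library, key=fitness)  # selection step: keep fittest variant
    return seq

# Toy fitness function: similarity to an arbitrary "improved" sequence.
target = "MKTAYIAKQR"
best = evolve("M" + "A" * 9,
              fitness=lambda s: sum(a == b for a, b in zip(s, target)))
print(best)
```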

Better rubisco

For this study, the researchers began with a version of rubisco, isolated from a family of semi-anaerobic bacteria known as Gallionellaceae, that is one of the fastest rubiscos found in nature. During the directed evolution experiments, which were conducted in E. coli, the researchers kept the microbes in an environment with atmospheric levels of oxygen, creating evolutionary pressure to adapt to oxygen.

After six rounds of directed evolution, the researchers identified three different mutations that improved the rubisco’s resistance to oxygen. Each of these mutations is located near the enzyme’s active site (where it performs carboxylation or oxygenation). The researchers believe that these mutations improve the enzyme’s ability to preferentially interact with carbon dioxide over oxygen, which leads to an overall increase in carboxylation efficiency.

“The underlying question here is: Can you alter and improve the kinetic properties of rubisco to operate better in environments where you want it to operate better?” Shoulders says. “What changed through the directed evolution process was that rubisco began to like to react with oxygen less. That allows this rubisco to function well in an oxygen-rich environment, where normally it would constantly get distracted and react with oxygen, which you don’t want it to do.”

In ongoing work, the researchers are applying this approach to other forms of rubisco, including rubisco from plants. Plants are believed to lose about 30 percent of the energy from the sunlight they absorb through a process called photorespiration, which occurs when rubisco acts on oxygen instead of carbon dioxide.

“This really opens the door to a lot of exciting new research, and it’s a step beyond the types of engineering that have dominated rubisco engineering in the past,” Wilson says. “There are definite benefits to agricultural productivity that could be leveraged through a better rubisco.”

The research was funded, in part, by the National Science Foundation, the National Institutes of Health, an Abdul Latif Jameel Water and Food Systems Lab Grand Challenge grant, and a Martin Family Society Fellowship for Sustainability.


Professor Emeritus Barry Vercoe, a pioneering force in computer music, dies at 87

Widely known for his Synthetic Performer, Csound language, and work on the MPEG-4 audio standard, Vercoe positioned MIT as a hub for music technology through leadership roles with the Media Lab and Music and Theater Arts Section.


MIT Professor Emeritus Barry Lloyd Vercoe, a pioneering force in computer music, a founding faculty member of the MIT Media Lab, and a leader in the development of MIT’s Music and Theater Arts Section, passed away on June 15. He was 87.

Vercoe’s life was a rich symphony of artistry, science, and innovation that led to profound enhancements of musical experience for expert musicians as well as for the general public — and especially young people.

Born in Wellington, New Zealand, on July 24, 1937, Vercoe earned bachelor’s degrees in music (in 1959) and mathematics (in 1962) from the University of Auckland, followed by a doctor of musical arts in music composition from the University of Michigan in 1968.

After completing postdoctoral research in digital audio processing at Princeton University and a visiting lectureship at Yale University, Vercoe joined MIT’s Department of Humanities (Music) in 1971, beginning a tenure in the department that lasted through 1984. During this period, he played a key role in advancing what would become MIT’s Music and Theater Arts (MTA) Section, helping to shape its forward-thinking curriculum and interdisciplinary philosophy. Vercoe championed the integration of musical creativity with scientific inquiry, laying the groundwork for MTA’s enduring emphasis on music technology and experimental composition.

In 1973, Vercoe founded MIT’s Experimental Music Studio (EMS) — the Institute’s first dedicated computer music facility, and one of the first in the world. Operated under the auspices of the music program, EMS became a crucible for innovation in algorithmic composition, digital synthesis, and computer-assisted performance. His leadership not only positioned MIT as a hub for music technology, but also influenced how the Institute approached the intersection of the arts with engineering. This legacy is honored today by a commemorative plaque in the Kendall Square MBTA station.

Violist, faculty founder of the MIT Chamber Music Society, and Institute Professor Marcus Thompson says: “Barry was first and foremost a fine musician, and composer for traditional instruments and ensembles. As a young professor, he taught our MIT undergraduates to write and sing Renaissance counterpoint as he envisioned how the act of traditional music-making offered a guide to potential artistic interaction between humans and computers. In 1976, he enlisted me to premiere what became his iconic, and my most-performed, work, ‘Synapse for Viola and Computer.’”

During a Guggenheim Fellowship in 1982–83, Vercoe developed the Synthetic Performer, a groundbreaking real-time interactive accompaniment system, while working closely with flautist Larry Beauregard at the Institute for Research and Coordination in Acoustics/Music (IRCAM) in Paris.

In 1984, Vercoe became a founding faculty member of the MIT Media Lab, where he launched the Music, Mind, and Machine group. His research spanned machine listening, music cognition, and real-time digital audio synthesis. His Csound language, created in 1985, is still widely used for music programming, and his contributions helped define the MPEG-4 Structured Audio standard.

He also served as associate academic head of the Media Lab’s graduate program in Media Arts and Sciences (MAS). Vercoe mentored many future leaders in digital music and sound computation, including two of his MAS graduate students, Anna Huang SM ’08 and Paris Smaragdis PhD ’01, who have recently joined MIT’s music faculty, as well as Miller Puckette, an emeritus faculty member at the University of California at San Diego, and Richard Boulanger, a professor of electronic production and design at the Berklee College of Music.

“Barry Vercoe will be remembered by designers, developers, researchers, and composers for his greatest ‘composition,’ Csound, his free and open-source software synthesis language,” states Boulanger. “I know that, through Csound, Barry’s musical spirit will live on, not only in my teaching, my research, and my music, but in the apps, plugins, and musical compositions of generations to come.”

Tod Machover, faculty director of the MIT Media Lab and Muriel R. Cooper Professor of Music and Media, reflects, “Barry Vercoe was a giant in the field of computer music whose innovations in software synthesis, interactive performance, and educational tools for young people influenced and inspired many, including myself. He was a superb mentor, always making sure that artistic sensibility drove music tech innovation, and that sophisticated expression was at the core of Media Lab — and MIT — culture.”

Vercoe’s work earned numerous accolades. In addition to the Guggenheim Fellowship, he was also honored with the 1992 Computerworld Smithsonian Award for innovation and the 2004 SEAMUS Lifetime Achievement Award.

Beyond MIT, Vercoe consulted with Analog Devices and collaborated with international institutions like IRCAM under the direction of Pierre Boulez. His commitment to democratizing music technology was evident in his contributions to the One Laptop per Child initiative, which brought accessible digital sound tools to young people in underserved communities worldwide.

He is survived by his former wives, Kathryn Veda Vaughn and Elizabeth Vercoe; their children, Andrea Vercoe and Scott Vercoe; and generations of students and collaborators who continue to build on his groundbreaking work. A memorial service for family will be held in New Zealand later this summer, and a special event in his honor will take place at MIT in the fall. The Media Lab will share details about the MIT gathering as they become available.

Vercoe, named professor emeritus at the MIT Media Lab upon his retirement in 2010, leaves a legacy that embodies the lab’s — and MIT’s — vision of creative, ethical, interdisciplinary research at the convergence of art, science, and technology. His music, machines, and generously inventive spirit will continue to shape the way we listen, learn, and communicate.


New postdoctoral fellowship program to accelerate innovation in health care

Launched with a gift from the Biswas Family Foundation, the Biswas Postdoctoral Fellowship Program will support postdocs in health and life sciences.


The MIT Health and Life Sciences Collaborative (MIT HEALS) is launching the Biswas Postdoctoral Fellowship Program to advance the work of outstanding early-career researchers in health and life sciences. Supported by a gift from the Biswas Family Foundation, the program aims to help apply cutting-edge research to improve health care and the lives of millions.

The program will support exceptional postdocs dedicated to innovation in human health care through a full range of pathways, such as leveraging AI in health-related research, developing low-cost diagnostics, and exploring the convergence of life sciences with areas such as economics, business, policy, and the humanities. With initial funding of $12 million, five four-year fellowships will be awarded in each of the next four years, starting in early 2026.

“An essential goal of MIT HEALS is to find new ways and opportunities to deliver health care solutions at scale, and the Biswas Family Foundation shares our commitment to scalable innovation and broad impact. MIT is also in the talent business, and the foundation’s gift allows us to bring exceptional scholars to campus to explore some of the most pressing issues in human health and build meaningful connections across academia and industry. We look forward to welcoming the first cohort of Biswas Fellows to MIT,” says MIT President Sally Kornbluth.

“We are deeply honored to launch this world-class postdoctoral fellows program,” adds Anantha P. Chandrakasan, MIT’s chief innovation and strategy officer and head of MIT HEALS. “We fully expect to attract top candidates from around the globe to lead innovative cross-cutting projects in AI and health, cancer therapies, diagnostics, and beyond. These fellows will be selected through a rigorous process overseen by a distinguished committee, and will have the opportunity to collaborate with our faculty on the most promising and impactful ideas.”

Angela Koehler, faculty lead of MIT HEALS, professor in MIT’s Department of Biological Engineering, and associate director of the Koch Institute for Integrative Cancer Research, emphasizes that the objectives of MIT HEALS align well with a stated goal of the Biswas Family Foundation: to leverage “scientific and technological advancements to revolutionize health care and make a lasting impact on global public health.”

“Health care is a team sport,” Koehler says. “MIT HEALS seeks to create connections involving investigators with diverse expertise across the Institute to tackle the most transformative problems impacting human health. Members of the MIT community are well poised to participate in teams and make an impact.”

MIT HEALS also seeks to maximize its effectiveness by expanding collaboration with medical schools and hospitals, starting with defining important problems that can be approached through research, and continuing all the way to clinical studies, Koehler says.

The Biswas Family Foundation has already demonstrated a similar strategy.

“The Biswas family has a history of enabling connections and partnerships between institutions that each bring a piece to the puzzle,” Koehler says. “This could be a dataset, an algorithm, an agent, a technology platform, or patients.”

Hope Biswas, co-founder of the Biswas Family Foundation with her husband, MIT alumnus Sanjit Biswas SM ’05, also highlighted the synergies between the foundation and MIT.

“The Biswas Family Foundation is proud to support the MIT HEALS initiative, which reimagines how scientific discovery can translate into real-world health impact. Its focus on promoting interdisciplinary collaboration to find new solutions to challenges in health care aligns closely with our mission to advance science and technology to improve health outcomes at scale,” Biswas says.

“As part of this commitment,” Biswas adds, “we are especially proud to support outstanding postdoctoral scholars focused on high-impact cross-disciplinary work in fields such as computational biology, nanoscale therapeutics, women’s health, and fundamental, curiosity-driven life sciences research. We are excited to contribute to an effort that brings together cutting-edge science and a deep commitment to translating knowledge into action.”

AI and machine-learning systems present a new universe of opportunities to investigate disease, biological mechanisms, therapeutics, and health care delivery using huge datasets.

“AI and computational systems biology can improve the accuracy of diagnostic approaches, enable the development of precision medicines, improve choices related to individualized treatment strategy, and improve operational efficiency within health care systems,” says Koehler. “Sanjit and Hope’s support of broad initiatives in AI and computational systems biology will help MIT researchers explore a variety of paths to impact human health on a large scale.”

Frontiers in health-related research are increasingly found where diverse fields converge, and Koehler provides the example of how advances in high-throughput experimentation to develop large datasets “may couple well with the development of new computation or AI tools.” She adds that the four-year funding term provided by the postdoctoral fellowship is “long enough to enable fellows to think big and take on projects at interfaces, emerging as bilingual researchers at the end of the program.”

Chandrakasan sees potential in the program for the Biswas Fellows to make revolutionary progress in health research.

“I’m incredibly grateful to the Biswas Family Foundation for their generous support in enabling transformative research at MIT,” Chandrakasan says.


Exploring data and its influence on political behavior

In MIT's course 17.831 (Data and Politics), students are introduced to the power of analysis, visualization, and research-supported insight into political outcomes.


Data and politics are becoming increasingly intertwined. Political campaigns and voter mobilization efforts are now heavily data-driven, and voters, pollsters, and elected officials rely on data to make choices that have local, regional, and national impacts.

A Department of Political Science course offers students tools to help make sense of these choices and their outcomes.

In class 17.831 (Data and Politics), students are introduced to the principles and practices necessary to understand electoral and other types of political behavior. Taught by Daniel Hidalgo, an associate professor of political science, the class has students use real-world datasets to explore topics like election polling and prediction, voter turnout, voter targeting, and shifts in public opinion over time.

The course aims to have students describe why and how the use of data and statistical methods has changed electoral politics, understand the basic principles of social science statistics, and analyze data using modern statistical computing tools. The capstone is an original project involving the collection, analysis, and interpretation of the kind of survey data used in modern campaigns.
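To give a concrete flavor of the statistical computing involved, here is a minimal Python sketch of one of the most basic polling calculations: a sample proportion and its approximate 95 percent margin of error. The numbers are hypothetical and not drawn from the course materials.

import math

def poll_estimate(support, sample_size, z=1.96):
    """Sample proportion and its ~95 percent margin of error,
    assuming a simple random sample."""
    p = support / sample_size
    moe = z * math.sqrt(p * (1 - p) / sample_size)
    return p, moe

# Hypothetical poll: 540 of 1,000 respondents support a ballot measure
p, moe = poll_estimate(540, 1000)
print(f"{p:.1%} +/- {moe:.1%}")  # prints: 54.0% +/- 3.1%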

“I wanted to create an applied, practice-based course that would appeal to undergraduates and provide a foundation for parsing, understanding, and reporting on large datasets in politics,” says Hidalgo, who redesigned the course for the spring 2025 semester.

Hidalgo, who also works in the Political Methodology Lab at MIT, investigates the political economy of elections, campaigns, and representation in developing democracies, especially in Latin America, as well as quantitative methods in the social sciences.

Politics and modernity

The influence of, and access to, artificial intelligence and large language models make a course like Data and Politics even more important, Hidalgo says. “You have to understand the people at the other end of the data,” he argues.

The course also centers the human element in politics, exploring conflict and bias, their structures, and their impacts, while also working to improve information literacy and coherent storytelling.

“Data analysis and collection will never be perfect,” Hidalgo says. “But analyzing and understanding who holds which ideas, and why, and using the information to tell a coherent story is valuable in politics and elsewhere.”

The “always on” nature of news and related content, coupled with the variety of communications channels available to voters, has increased the complexity of the data collection process in polling and campaigns. “In the past, people would answer the phone when you called their homes,” Hidalgo notes, describing analog methods previously used to collect voter data. Now, political scientists, data analysts, and others must contend with the availability of streaming content, mobile devices, and other channels comprising a vast, fractured media ecosystem.

The course opens a window into what happens behind the scenes of local and national political campaigns, which appealed to second-year political science major Jackson Hamilton. “I took this class hoping to expand my ability to use coding for political science applications, and in order to better understand how political models and predictions work,” he says.

“We tailor-made our own sets of questions and experimental designs that we thought would be interesting,” Hamilton adds. “I found that political issues that get a lot of media coverage are not necessarily the same issues which divide lawmakers, at least locally.”

Transparency and accountability in politics and other areas

Teaching students to use tools like polling and data analysis effectively can improve their ability to identify and combat disinformation and misinformation. “As a political scientist, I’m substantively engaged,” Hidalgo says, “and I’d like to help others be engaged, too.”

“There’s lots of data available, and this course provides a foundation and the resources necessary to understand and visualize it,” Hidalgo continues. “The ability to design, implement, and understand surveys has value inside and outside the classroom.”

In politics, Hidalgo believes equipping students to navigate these spaces effectively can potentially improve and increase civic engagement. Data, he says, can help defend ideas. “There’s so much information, it’s important to develop the skills and abilities necessary to understand and visualize it,” he says. “This has value for everyone.”

Second-year physics major Sean Wilson, who also took the class this spring, notes the value of data visualization and analysis both as a potential physicist and a voter. “Data analysis in both politics and in physics is essential work given that voting tendencies, public opinion, and government leadership change so often in the United States,” he says, “and that modeling can be used to support physical hypotheses and improve our understanding of how things work.”

For Wilson, the course can help anyone interested in understanding large groups’ behaviors. “Political scientists are constantly working to better understand how and why certain events occur in U.S. politics, and data analysis is an effective tool for doing so,” he says. “Members of a representative democracy can make better decisions with this kind of information.”

Hamilton, meanwhile, learned more about the behind-the-scenes machinery at work in electoral politics. “I had the opportunity to create a couple of budget trade-off questions, to get a sense of what people actually thought the government should spend money on when they had to make choices,” he says.

“Computer science and data science aren’t just useful for STEM applications; data science approaches can also be extremely useful in many social sciences,” Hamilton argues.

“[Hidalgo helped me realize] that I needed to understand and use data science approaches to gain a deeper understanding of my areas of interest,” Hamilton says. “He focuses on how different approaches in coding can be applied to different types of problems in political science.” 


Study shows how a common fertilizer ingredient benefits plants

The findings could enable new ways to increase plants’ resilience to UV stress and enhance seedling growth.


Lanthanides are a class of rare earth elements that in many countries are added to fertilizer as micronutrients to stimulate plant growth. But little is known about how they are absorbed by plants or influence photosynthesis, potentially leaving their benefits untapped.

Now, researchers from MIT have shed light on how lanthanides move through and operate within plants. These insights could help farmers optimize their use to grow some of the world’s most popular crops.

Published today in the Journal of the American Chemical Society, the study shows that a single nanoscale dose of lanthanides applied to seeds can make some of the world’s most common crops more resilient to UV stress. The researchers also uncovered the chemical processes by which lanthanides interact with the chlorophyll pigments that drive photosynthesis, showing that different lanthanide elements strengthen chlorophyll by replacing the magnesium at its center.

“This is a first step to better understand how these elements work in plants, and to provide an example of how they could be better delivered to plants, compared to simply applying them in the soil,” says Associate Professor Benedetto Marelli, who conducted the research with postdoc Giorgio Rizzo. “This is the first example of a thorough study showing the effects of lanthanides on chlorophyll, and their beneficial effects to protect plants from UV stress.”

Inside plant connections

Certain lanthanides are used as contrast agents in MRI and for applications including light-emitting diodes, solar cells, and lasers. Over the last 50 years, lanthanides have become increasingly used in agriculture to enhance crop yields, with China alone applying lanthanide-based fertilizers to nearly 4 million hectares of land each year.

“Lanthanides have been considered for a long time to be biologically irrelevant, but that’s changed in agriculture, especially in China,” says Rizzo, the paper’s first author. “But we largely don’t know how lanthanides work to benefit plants — nor do we understand their uptake mechanisms from plant tissues.”

Recent studies have shown that low concentrations of lanthanides can promote plant growth, root elongation, hormone synthesis, and stress tolerance, but higher doses can harm plants. Striking the right balance has been hard because little is known about how lanthanides are absorbed by plants or how they interact with the soil around roots.

For the study, the researchers leveraged seed coating and treatment technologies they previously developed to investigate the way the plant pigment chlorophyll interacts with lanthanides, both inside and outside of plants. Up until now, researchers haven’t been sure whether chlorophyll interacts with lanthanide ions at all.

Chlorophyll drives photosynthesis, but the pigments lose their ability to efficiently absorb light when the magnesium ion at their core is removed. The researchers discovered that lanthanides can fill that void, helping chlorophyll pigments partially recover some of their optical properties in a process known as re-greening.

“We found that lanthanides can boost several parameters of plant health,” Marelli says. “They mostly accumulate in the roots, but a small amount also makes its way to the leaves, and some of the new chlorophyll molecules made in leaves have lanthanides incorporated in their structure.”

This study also offers the first experimental evidence that lanthanides can increase plant resilience to UV stress, something the researchers say was completely unexpected.

“Chlorophylls are very sensitive pigments,” Rizzo says. “They can convert light to energy in plants, but when they are isolated from the cell structure, they rapidly hydrolyze and degrade. However, in the form with lanthanides at their center, they are pretty stable, even after extracting them from plant cells.”

Using several spectroscopic techniques, the researchers found that the benefits held across a range of staple crops, including chickpea, barley, corn, and soybeans.

The findings could be used to boost crop yield and increase the resilience of some of the world’s most popular crops to extreme weather.

“As we move into an environment where extreme heat and extreme climate events are more common, and particularly where we can have prolonged periods of sun in the field, we want to provide new ways to protect our plants,” Marelli says. “There are existing agrochemicals that can be applied to leaves for protecting plants from stressors such as UV, but they can be toxic, increase microplastics, and can require multiple applications. This could be a complementary way to protect plants from UV stress.”

Identifying new applications

The researchers also found that larger lanthanide elements like lanthanum were more effective at strengthening chlorophyll pigments than smaller ones. Lanthanum is considered a low-value byproduct of rare earths mining, and can become a burden to the rare earth element (REE) supply chain due to the need to separate it from more desirable rare earths. Increasing the demand for lanthanum could diversify the economics of REEs and improve the stability of their supply chain, the scientists suggest.

“This study shows what we could do with these lower-value metals,” Marelli says. “We know lanthanides are extremely useful in electronics, magnets, and energy. In the U.S., there’s a big push to recycle them. That’s why for the plant studies, we focused on lanthanum, being the most abundant, cheapest lanthanide ion.”

Moving forward, the team plans to explore how lanthanides work with other biological molecules, including proteins in the human body.

In agriculture, the team hopes to scale up its research with field and greenhouse studies to continue testing UV resilience across different crop types and under experimental farm conditions.

“Lanthanides are already widely used in agriculture,” Rizzo says. “We hope this study provides evidence that allows more conscious use of them and also a new way to apply them through seed treatments.”

The research was supported by the MIT Climate Grand Challenge and the Office of Naval Research.


Robotic probe quickly measures key properties of new materials

Developed to analyze new semiconductors, the system could streamline the development of more powerful solar panels.


Scientists are striving to discover new semiconductor materials that could boost the efficiency of solar cells and other electronics. But the pace of innovation is bottlenecked by the speed at which researchers can manually measure important material properties.

A fully autonomous robotic system developed by MIT researchers could speed things up.

Their system utilizes a robotic probe to measure an important electrical property known as photoconductance, a measure of how electrically responsive a material is to light.

The researchers inject materials-science-domain knowledge from human experts into the machine-learning model that guides the robot’s decision making. This enables the robot to identify the best places to contact a material with the probe to gain the most information about its photoconductance, while a specialized planning procedure finds the fastest way to move between contact points.

During a 24-hour test, the fully autonomous robotic probe took more than 125 unique measurements per hour, with more precision and reliability than other artificial intelligence-based methods.

By dramatically increasing the speed at which scientists can characterize important properties of new semiconductor materials, this method could spur the development of solar panels that produce more electricity.

“I find this paper to be incredibly exciting because it provides a pathway for autonomous, contact-based characterization methods. Not every important property of a material can be measured in a contactless way. If you need to make contact with your sample, you want it to be fast and you want to maximize the amount of information that you gain,” says Tonio Buonassisi, professor of mechanical engineering and senior author of a paper on the autonomous system.

His co-authors include lead author Alexander (Aleks) Siemenn, a graduate student; postdocs Basita Das and Kangyu Ji; and graduate student Fang Sheng. The work appears today in Science Advances.

Making contact

Since 2018, researchers in Buonassisi’s laboratory have been working toward a fully autonomous materials discovery laboratory. They’ve recently focused on discovering new perovskites, which are a class of semiconductor materials used in photovoltaics like solar panels.

In prior work, they developed techniques to rapidly synthesize and print unique combinations of perovskite material. They also designed imaging-based methods to determine some important material properties.

But photoconductance is most accurately characterized by placing a probe onto the material, shining a light, and measuring the electrical response.
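In rough terms, the quantity at stake is the change in conductance when light hits the sample. A simplified textbook definition (not necessarily the exact figure of merit used in the paper) is

$$\Delta G = G_{\text{light}} - G_{\text{dark}} = \frac{I_{\text{light}} - I_{\text{dark}}}{V},$$

where $V$ is the applied bias voltage and $I_{\text{light}}$ and $I_{\text{dark}}$ are the currents measured with the light on and off.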

“To allow our experimental laboratory to operate as quickly and accurately as possible, we had to come up with a solution that would produce the best measurements while minimizing the time it takes to run the whole procedure,” says Siemenn.

Doing so required the integration of machine learning, robotics, and materials science into one autonomous system.

To begin, the robotic system uses its onboard camera to take an image of a slide with perovskite material printed on it.

Then it uses computer vision to cut that image into segments, which are fed into a neural network model that has been specially designed to incorporate domain expertise from chemists and materials scientists.

“These robots can improve the repeatability and precision of our operations, but it is important to still have a human in the loop. If we don’t have a good way to implement the rich knowledge from these chemical experts into our robots, we are not going to be able to discover new materials,” Siemenn adds.

The model uses this domain knowledge to determine the optimal points for the probe to contact based on the shape of the sample and its material composition. These contact points are fed into a path planner that finds the most efficient way for the probe to reach all points.

The adaptability of this machine-learning approach is especially important because the printed samples have unique shapes, from circular drops to jellybean-like structures.

“It is almost like measuring snowflakes — it is difficult to get two that are identical,” Buonassisi says.

Once the path planner finds the shortest path, it sends signals to the robot’s motors, which manipulate the probe and take measurements at each contact point in rapid succession.

Key to the speed of this approach is the self-supervised nature of the neural network model. The model determines optimal contact points directly on a sample image — without the need for labeled training data.

The researchers also accelerated the system by enhancing the path planning procedure. They found that adding a small amount of noise, or randomness, to the algorithm helped it find the shortest path.
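The paper describes the team’s actual planner. Purely to illustrate the general idea that a little randomness can help a greedy route-builder escape poor local choices, here is a minimal Python sketch: each distance is slightly perturbed, several randomized routes are generated, and the shortest is kept. The contact points and noise level are made up.

import math
import random

def route_length(points, order):
    """Total Euclidean length of visiting the points in the given order."""
    return sum(math.dist(points[a], points[b]) for a, b in zip(order, order[1:]))

def noisy_greedy_route(points, noise=0.05):
    """Greedy nearest-neighbor route, with each distance randomly
    perturbed so that repeated runs explore different routes."""
    unvisited = set(range(len(points)))
    order = [unvisited.pop()]
    while unvisited:
        cur = order[-1]
        nxt = min(unvisited,
                  key=lambda j: math.dist(points[cur], points[j])
                  * (1 + random.uniform(-noise, noise)))
        unvisited.remove(nxt)
        order.append(nxt)
    return order

def plan_path(points, restarts=50):
    """Keep the shortest route found across several randomized runs."""
    routes = [noisy_greedy_route(points) for _ in range(restarts)]
    return min(routes, key=lambda o: route_length(points, o))

# Hypothetical probe contact points on a sample, in millimeters
contacts = [(0.0, 0.0), (1.2, 0.4), (0.3, 1.1), (0.9, 0.9), (0.1, 0.6)]
best = plan_path(contacts)
print(best, round(route_length(contacts, best), 3))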

“As we progress in this age of autonomous labs, you really do need all three of these expertise — hardware building, software, and an understanding of materials science — coming together into the same team to be able to innovate quickly. And that is part of the secret sauce here,” Buonassisi says.

Rich data, rapid results

Once they had built the system from the ground up, the researchers tested each component. Their results showed that the neural network model found better contact points with less computation time than seven other AI-based methods. In addition, the path planning algorithm consistently found shorter path plans than other methods.

When they put all the pieces together to conduct a 24-hour fully autonomous experiment, the robotic system conducted more than 3,000 unique photoconductance measurements at a rate exceeding 125 per hour.

In addition, the level of detail provided by this precise measurement approach enabled the researchers to identify hotspots with higher photoconductance as well as areas of material degradation.

“Being able to gather such rich data that can be captured at such fast rates, without the need for human guidance, starts to open up doors to be able to discover and develop new high-performance semiconductors, especially for sustainability applications like solar panels,” Siemenn says.

The researchers want to continue building on this robotic system as they strive to create a fully autonomous lab for materials discovery.

This work is supported, in part, by First Solar, Eni through the MIT Energy Initiative, MathWorks, the University of Toronto’s Acceleration Consortium, the U.S. Department of Energy, and the U.S. National Science Foundation.


Study: Babies’ poor vision may help organize visual brain pathways

MIT researchers found that low-quality visual input early in life may contribute to the development of key pathways in the brain’s visual system.


Incoming information from the retina is channeled into two pathways in the brain’s visual system: one that’s responsible for processing color and fine spatial detail, and another that’s involved in spatial localization and detecting high temporal frequencies. A new study from MIT provides an account for how these two pathways may be shaped by developmental factors.

Newborns typically have poor visual acuity and poor color vision because their retinal cone cells are not well-developed at birth. This means that early in life, they are seeing blurry, color-reduced imagery. The MIT team proposes that such blurry, color-limited vision may result in some brain cells specializing in low spatial frequencies and low color tuning, corresponding to the so-called magnocellular system. Later, with improved vision, cells may tune to finer details and richer color, consistent with the other pathway, known as the parvocellular system.

To test their hypothesis, the researchers trained computational models of vision on a trajectory of input similar to what human babies receive early in life — low-quality images early on, followed by full-color, sharper images later. They found that these models developed processing units with receptive fields exhibiting some similarity to the division of magnocellular and parvocellular pathways in the human visual system. Vision models trained on only high-quality images did not develop such distinct characteristics.

“The findings potentially suggest a mechanistic account of the emergence of the parvo/magno distinction, which is one of the key organizing principles of the visual pathway in the mammalian brain,” says Pawan Sinha, an MIT professor of brain and cognitive sciences and the senior author of the study.

MIT postdocs Marin Vogelsang and Lukas Vogelsang are the lead authors of the study, which appears today in the journal Communications Biology. Sidney Diamond, an MIT research affiliate, and Gordon Pipa, a professor of neuroinformatics at the University of Osnabrueck, are also authors of the paper.

Sensory input

The idea that low-quality visual input might be beneficial for development grew out of studies of children who were born blind but later had their sight restored. An effort from Sinha’s laboratory, Project Prakash, has screened and treated thousands of children in India, where reversible forms of vision loss such as cataracts are relatively common. After their sight is restored, many of these children volunteer to participate in studies in which Sinha and his colleagues track their visual development.

In one of these studies, the researchers found that children who had cataracts removed exhibited a marked drop in object-recognition performance when they were presented with black-and-white images, compared to colored ones. Those findings led the researchers to hypothesize that the reduced color input characteristic of early typical development, far from being a hindrance, allows the brain to learn to recognize objects even in images that have impoverished or shifted colors.

“Denying access to rich color at the outset seems to be a powerful strategy to build in resilience to color changes and make the system more robust against color loss in images,” Sinha says.

In that study, the researchers also found that when computational models of vision were initially trained on grayscale images, followed by color images, their ability to recognize objects was more robust than that of models trained only on color images. Similarly, another study from the lab found that models performed better when they were trained first on blurry images, followed by sharper images.

To build on those findings, the MIT team wanted to explore what might be the consequences of both of those features — color and visual acuity — being limited at the outset of development. They hypothesized that these limitations might contribute to the development of the magnocellular and parvocellular pathways.

In addition to being highly attuned to color, cells in the parvocellular pathway have small receptive fields, meaning that they receive input from more compact clusters of retinal ganglion cells. This helps them to process fine detail. Cells in the magnocellular pathway pool information across larger areas, allowing them to process more global spatial information.

To test their hypothesis that developmental progressions could contribute to the magno and parvo cell selectivities, the researchers trained models on two different sets of images. One model was presented with a standard dataset of images that are used to train models to categorize objects. The other dataset was designed to roughly mimic the input that the human visual system receives from birth. This “biomimetic” data consists of low-resolution, grayscale images in the first half of the training, followed by high-resolution, colorful images in the second half.
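As a rough sketch of how such a training schedule could be implemented, assuming the Pillow imaging library (the downsampling factor and the halfway switch point are illustrative assumptions, not the paper’s exact recipe):

from PIL import Image

def biomimetic_transform(img, progress):
    """For the first half of training (progress < 0.5), mimic immature
    vision by returning a blurry, grayscale version of the image;
    afterward, return the full-quality image unchanged."""
    if progress < 0.5:
        w, h = img.size
        low = img.convert("L")              # drop color information
        low = low.resize((w // 8, h // 8))  # drop fine spatial detail
        low = low.resize((w, h))            # upsample back to the input size
        return low.convert("RGB")           # keep three channels for the model
    return img

# Inside a hypothetical training loop:
# progress = epoch / num_epochs
# x = biomimetic_transform(img, progress)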

After the models were trained, the researchers analyzed the models’ processing units — nodes within the network that bear some resemblance to the clusters of cells that process visual information in the brain. They found that the models trained on the biomimetic data developed a distinct subset of units that are jointly responsive to low-color and low-spatial-frequency inputs, similar to the magnocellular pathway. Additionally, these biomimetic models exhibited groups of more heterogeneous parvocellular-like units tuned predominantly to higher spatial frequencies or richer color signals. Such a distinction did not emerge in the models trained on full-color, high-resolution images from the start.

“This provides some support for the idea that the ‘correlation’ we see in the biological system could be a consequence of the types of inputs that are available at the same time in normal development,” Lukas Vogelsang says.

Object recognition

The researchers also performed additional tests to reveal what strategies the differently trained models were using for object recognition tasks. In one, they asked the models to categorize images of objects where the shape and texture did not match — for example, an animal with the shape of a cat but the texture of an elephant.

This is a technique several researchers in the field have employed to determine which image attributes a model is using to categorize objects: the overall shape or the fine-grained textures. The MIT team found that models trained on biomimetic input were markedly more likely to use an object’s shape to make those decisions, just as humans usually do. Moreover, when the researchers systematically removed the magnocellular-like units from the models, the models quickly lost their tendency to use shape to make categorizations.

In another set of experiments, the researchers trained the models on videos instead of images, which introduces a temporal dimension. In addition to low spatial resolution and color sensitivity, the magnocellular pathway responds to high temporal frequencies, allowing it to quickly detect changes in the position of an object. When models were trained on biomimetic video input, the units most tuned to high temporal frequencies were indeed the ones that also exhibited magnocellular-like properties in the spatial domain.

Overall, the results support the idea that low-quality sensory input early in life may contribute to the organization of sensory processing pathways of the brain, the researchers say. The findings do not rule out innate specification of the magno and parvo pathways, but provide a proof of principle that visual experience over the course of development could also play a role.

“The general theme that seems to be emerging is that the developmental progression that we go through is very carefully structured in order to give us certain kinds of perceptual proficiencies, and it may also have consequences in terms of the very organization of the brain,” Sinha says.

The research was funded by the National Institutes of Health, the Simons Center for the Social Brain, the Japan Society for the Promotion of Science, and the Yamada Science Foundation.


A new platform for developing advanced metals at scale

Foundation Alloy, founded by a team from MIT, uses solid-state metallurgy technology to create a new class of high-performance metals.


Companies building next-generation products are often limited by the physical constraints of traditional materials. In aerospace, defense, energy, and industrial tooling, pushing those constraints introduces possible failure points into the system. Unfortunately, companies don’t have better options, given that producing new materials at scale involves multiyear timelines and huge expenses.

Foundation Alloy wants to break the mold. The company, founded by a team from MIT, is capable of producing a new class of ultra-high-performance metal alloys using a novel production process that doesn’t rely on melting raw materials. The company’s solid-state metallurgy technology, which simplifies development and manufacturing of next-generation alloys, was developed over many years of research by former MIT professor Chris Schuh and collaborators.

“This is an entirely new approach to making metals,” says CEO Jake Guglin MBA ’19, who co-founded Foundation Alloy with Schuh, Jasper Lienhard ’15, PhD ’22, and Tim Rupert PhD ’11. “It gives us a broad set of rules on the materials engineering side that allows us to design a lot of different compositions with previously unattainable properties. We use that to make products that work better for advanced industrial applications.”

Foundation Alloy says its metal alloys can be made twice as strong as traditional metals, with 10 times faster product development, allowing companies to test, iterate, and deploy new metals into products in months instead of years.

The company is already designing metals and shipping demonstration parts to companies manufacturing components for things like planes, bikes, and cars. It’s also making test parts for partners in industries with longer development cycles, such as defense and aerospace.

Moving forward, the company believes its approach enables companies to build higher-performing, more reliable systems, from rockets and cars to nuclear fusion reactors and artificial intelligence chips.

“For advanced systems like rocket and jet engines, if you can run them hotter, you can get more efficient use of fuel and a more powerful system,” Guglin says. “The limiting factor is whether or not you have structural integrity at those higher temperatures, and that is fundamentally a materials problem. Right now, we’re also doing a lot of work in advanced manufacturing and tooling, which is the unsexy but super-critical backbone of the industrial world, where being able to push properties up without multiplying costs can unlock efficiencies in operations, performance, and capacity, all in a way that’s only possible with different materials.”

From MIT to the world

Schuh joined MIT’s faculty in 2002 to study the processing, structure, and properties of metal and other materials. He was named head of the Department of Materials Science and Engineering in 2011 before becoming dean of engineering at Northwestern University in 2023, after more than 20 years at MIT.

“Chris wanted to look at metals from different perspectives and make things more economically efficient and higher performance than what’s possible with traditional processes,” Guglin says. “It wasn’t just for academic papers — it was about making new methods that would be valuable for the industrial world.”

Rupert and Lienhard earned their PhDs in Schuh’s lab, and Rupert, as a professor at the University of California at Irvine, invented technologies complementary to the solid-state processes developed by Schuh and his collaborators.

Guglin came to MIT’s Sloan School of Management in 2017 eager to work with high-impact technologies.

“I wanted to go somewhere where I could find the types of fundamental technological breakthroughs that create asymmetric value — the types of things where if they didn’t happen here, they weren’t going to happen anywhere else,” Guglin recalls.

In one of his classes, a PhD student in Schuh’s lab practiced his thesis defense by describing his research on a new way to create metal alloys.

“I didn’t understand any of it — I have a philosophy background,” Guglin says. “But I heard ‘stronger metals’ and I saw the potential of this incredible platform Chris’ lab was working on, and it tied into exactly why I wanted to come to MIT.”

Guglin connected with Schuh, and the pair stayed in touch over the next several years as Guglin graduated and went to work for aerospace companies SpaceX and Blue Origin, where he saw firsthand the problems being caused by the metal parts supply chain.

In 2022, the pair finally decided to launch a company, adding Rupert and Lienhard and licensing technology from MIT and UC Irvine.

The founders’ first challenge was scaling up the technology.

“There’s a lot of process engineering to go from doing something once at 5 grams to doing it 100 times a week at 100 kilograms per batch,” Guglin says.

Today, Foundation Alloy starts with its customers’ material requirements and decides on a precise mixture of the powdered raw materials from which every metal part begins. From there, it uses a specialized industrial mixer — Guglin calls it an industrial KitchenAid blender — to create a metal powder that is homogeneous down to the atomic level.

“In our process, from raw material all the way through to the final part, we never melt the metal,” Guglin says. “That is uncommon if not unknown in traditional metal manufacturing.”

From there, the company’s material can be solidified using traditional methods like metal injection molding, pressing, or 3D printing. The final step is sintering in a furnace.

“We also do a lot of work around how the metal reacts in the sintering furnace,” Guglin says. “Our materials are specifically designed to sinter at relatively low temperatures, relatively quickly, and all the way to full density.”

The advanced sintering process uses an order of magnitude less heat, saving on costs while allowing the company to forgo secondary processes for quality control. It also gives Foundation Alloy more control over the microstructure of the final parts.

“That’s where we get a lot of our performance boost from,” Guglin says. “And by not needing those secondary processing steps, we’re saving days if not weeks in addition to the costs and energy savings.”

A foundation for industry

Foundation Alloy is currently piloting its metals across the industrial base and has also received grants to develop parts for critical components of nuclear fusion reactors.

“The name Foundation Alloy in a lot of ways came from wanting to be the foundation for the next generation of industry,” Guglin says.

Unlike in traditional metals manufacturing, where new alloys require huge investments to scale, Guglin says the company’s process for developing new alloys is nearly the same as its production processes, allowing it to scale new materials production far more quickly.

“At the core of our approach is looking at problems like material scientists with a new technology,” Guglin says. “We’re not beholden to the idea that this type of steel must solve this type of problem. We try to understand why that steel is failing and then use our technology to solve the problem in a way that produces not a 10 percent improvement, but a two- or five-times improvement in terms of performance.”


Study finds better services dramatically help children in foster care

A Chilean experiment with legal aid and social services cuts time in foster care, with lasting effects for kids and lower costs for programs.


Being placed in foster care is a necessary intervention for some children. But many advocates worry that kids can languish in foster care too long, with harmful effects for children who are left temporarily without a permanent family.

A new study co-authored by an MIT economist shows that an innovative Chilean program providing legal aid to children in foster care shortens their stays, returning them to families faster. In the process, it improves long-term social outcomes for kids and even reduces government spending on the foster care system.

“It was amazingly successful because the program got kids out of foster care about 30 percent faster,” says Joseph Doyle, an economist at the MIT Sloan School of Management, who helped lead the research. “Because foster care is expensive, that paid for the program by itself about four times over. If you improve the case management of kids in foster care, you can improve a child’s well-being and save money.”

The paper, “Effects of Enhanced Legal Aid in Child Welfare: Evidence from a Randomized Trial of Mi Abogado,” is published in the American Economic Review.

The authors are Ryan Cooper, a professor and director of government innovation at the University of Chicago; Doyle, who is the Erwin H. Schell Professor of Management at MIT Sloan; and Andrés P. Hojman, a professor at the Pontifical Catholic University of Chile.

Rigorous design

To conduct the study, the scholars examined the Chilean government’s new program “Mi Abogado” — meaning, “My Lawyer” — which provided enhanced legal support to children in foster care, as well as access to psychologists and social workers. Legal advocates in the program were given a reduced caseload, for one thing, to help them focus more closely on each individual case.

Chile introduced Mi Abogado in 2017 with a feature that made it ripe for careful study: As part of the rollout, most participants were selected at random from the pool of children in the foster care system. That random assignment makes it easier to identify the program’s causal impact on later outcomes.

“Very few foster-care redesigns are evaluated in such a rigorous way, and we need more of this innovative approach to policy improvement,” Doyle notes.

The experiment included 1,781 children who were in Chile’s foster care program in 2019, with 581 selected for the Mi Abogado services; it tracked their trajectories over more than two years. Almost all the participants were in group foster-care homes.
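Because assignment to Mi Abogado was random, the program’s average effect can be estimated by directly comparing mean outcomes between the assigned and non-assigned groups. Here is a minimal Python sketch of that difference-in-means calculation, using made-up numbers rather than the study’s data:

import math

def diff_in_means(treated, control):
    """Difference in mean outcomes between randomly assigned groups,
    with a large-sample standard error for the difference."""
    mt = sum(treated) / len(treated)
    mc = sum(control) / len(control)
    vt = sum((x - mt) ** 2 for x in treated) / (len(treated) - 1)
    vc = sum((x - mc) ** 2 for x in control) / (len(control) - 1)
    se = math.sqrt(vt / len(treated) + vc / len(control))
    return mt - mc, se

# Hypothetical outcome: months spent in foster care after assignment
treated = [14, 18, 11, 20, 16, 12]
control = [22, 25, 19, 28, 24, 21]
effect, se = diff_in_means(treated, control)
print(f"estimated effect: {effect:.1f} months (SE {se:.1f})")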

Beyond reduced time in foster care, the Chilean data showed that children in the Mi Abogado program had a subsequent 30 percent reduction in contact with the criminal justice system and a 5 percent increase in school attendance, compared to children in foster care who did not participate in the program.

“They were getting involved with crime less and attending school more,” Doyle says.

As powerful as the results appear, Doyle acknowledges that he would like to be able to analyze further which elements of the Mi Abogado program had the biggest impact — legal help, counseling and therapy, or other factors.

“We would like to see more about what exactly they are doing for children to speed their exit from care,” Doyle says. “Is it mostly about therapy? Is it working with judges and cutting through red tape? We think the lawyer is a very important part. But the results suggest it is not just the lawyer that improves outcomes.”

More programs in other places?

The current paper is one of many studies Doyle has conducted during his career on foster care and related issues. In another forthcoming paper, Doyle and some co-authors find that about 5 percent of U.S. children spend some time in foster care — a rate that appears to be fairly typical internationally, too.

“People don’t appreciate how common child protective services and foster care are,” Doyle says. Moreover, he adds, “Children involved in these systems are particularly vulnerable.”

With a variety of U.S. jurisdictions running their own foster-care systems, Doyle notes that many policymakers have the opportunity to learn from the Mi Abogado program and consider whether its principles might be worth testing. And while that requires some political will, he is optimistic that policymakers might be open to new ideas.

“It’s not really a partisan issue,” Doyle says. “Most people want to help protect kids, and, if an intervention is needed for kids, have an interest in making the intervention run well.”

After all, he notes, the impact of the Mi Abogado program appears to be both substantial and lasting, making it an interesting example to consider.

“Here we have a case where the child outcomes are improved and the government saved money,” Doyle observes. “I’d like to see more experimentation with programs like this in other places.”

Support for the research was provided in part by the MIT Sloan Latin America Office. Chile’s Studies Department of the Ministry of Education made data available from the education system.


The high-tech wizardry of integrated photonics

PhD candidate Sabrina Corsetti builds photonic devices that manipulate light to enable previously unimaginable applications, like pocket-sized 3D printers.


Inspired by the “Harry Potter” stories and the Disney Channel show “Wizards of Waverly Place,” 7-year-old Sabrina Corsetti emphatically declared to her parents one afternoon that she was, in fact, a wizard.

“My dad turned to me and said that, if I really wanted to be a wizard, then I should become a physicist. Physicists are the real wizards of the world,” she recalls.

That conversation stuck with Corsetti throughout her childhood, all the way up to her decision to double-major in physics and math in college, which set her on a path to MIT, where she is now a graduate student in the Department of Electrical Engineering and Computer Science.

While her work may not involve incantations or magic wands, Corsetti’s research centers on an area that often produces astonishing results: integrated photonics. A relatively young field, integrated photonics involves building computer chips that route light instead of electricity, enabling compact and scalable solutions for applications ranging from communications to sensing.

Corsetti and her collaborators in the Photonics and Electronics Research Group, led by Professor Jelena Notaros, develop chip-sized devices that enable innovative applications, pushing the boundaries of what is possible in optics.

For instance, Corsetti and the team developed a chip-based 3D printer, small enough to sit in the palm of one’s hand, that emits a reconfigurable beam of light into resin to create solid shapes. Such a device could someday enable a user to rapidly fabricate customized, low-cost objects on the go.

She also contributed to creating a miniature “tractor beam” that uses a beam of light to capture and manipulate biological particles using a chip. This could help biologists study DNA or investigate the mechanisms of disease without contaminating tissue samples.

More recently, Corsetti has been working on a project in collaboration with MIT Lincoln Laboratory, focused on trapped-ion quantum computing, which involves the manipulation of ions to store and process quantum information.

“Our team has a strong focus on designing devices and systems that interact with the environment. The opportunity to join a new research group, led by a supportive and engaged advisor, that works on projects with a lot of real-world impacts, is primarily what drew me to MIT,” Corsetti says.

Embracing challenges

Years before she set foot in a research lab, Corsetti was a science- and math-focused kid growing up with her parents and younger brother in the suburbs of Chicago, where her family operates a structural steelwork company.

Throughout her childhood, her teachers fostered her love of learning, from her early years in the Frankfort 157-C school district through her time at the Lincoln-Way East High School.

She enjoyed working on science experiments outside the classroom and relished the chance to tackle complex conundrums during independent study projects curated by her teachers (like calculating the math behind the brachistochrone curve, the path of fastest descent between two points, which was famously solved by Isaac Newton).

Corsetti decided to double-major in physics and math at the University of Michigan after graduating from high school a year early.

“When I went to the University of Michigan, I couldn’t wait to get started. I enrolled in the toughest math and physics track right off the bat,” she recalls.

But Corsetti soon found that she had bitten off a bit more than she could chew. A lot of her tough undergraduate courses assumed students had prior knowledge from AP physics and math classes, which Corsetti hadn’t taken because she graduated early.

She met with professors, attended office hours, and tried to pick up the lessons she had missed, but felt so discouraged she contemplated switching majors. Before she made the switch, Corsetti decided to try working in a physics lab to see if she liked a day in the life of a researcher.

After joining Professor Wolfgang Lorenzon’s lab at Michigan, Corsetti spent hours working with grad students and postdocs on a hands-on project to build cells that would hold liquid hydrogen for a particle physics experiment.

As they collaborated for hours at a time to roll material into tubes, she peppered the older students with questions about their experiences in the field.

“Being in the lab made me fall in love with physics. I really enjoyed that environment, working with my hands, and working with people as part of a bigger team,” she says.

Her affinity for hands-on lab work was amplified a few years later when she met Professor Tom Schwarz, her research advisor for the rest of her time at Michigan.

Following a chance conversation with Schwarz, she applied to a research abroad program at CERN in Switzerland, where she was mentored by Siyuan Sun. There, she had the opportunity to join thousands of physicists and engineers on the ATLAS project, writing code and optimizing circuits for new particle-detector technologies.

“That was one of the most transformative experiences of my life. After I came back to Michigan, I was ready to spend my career focusing on research,” she says.

Hooked on photonics

Corsetti began applying to graduate schools but decided to shift focus from the more theoretical particle physics to electrical engineering, with an interest in conducting hands-on chip-design and testing research.

She applied to MIT with a focus on standard electronic-chip design, so it came as a surprise when Notaros reached out to her to schedule a Zoom call. At the time, Corsetti was completely unfamiliar with integrated photonics. However, after one conversation with the new professor, she was hooked.

“Jelena has an infectious enthusiasm for integrated photonics,” she recalls. “After those initial conversations, I took a leap of faith.”

Corsetti joined Notaros’ team as it was just getting started. Closely mentored by a senior student, Milica Notaros, she and her cohort grew immersed in integrated photonics.

Over the years, she’s particularly enjoyed the collaborative and close-knit nature of the lab and how the work involves so many different aspects of the experimental process, from design to simulation to analysis to hardware testing.

“An exciting challenge that we’re always running up against is new chip-fabrication requirements. There is a lot of back-and-forth between new application areas that demand new fabrication technologies, followed by improved fabrication technologies motivating additional application areas. That cycle is constantly pushing the field forward,” she says.

Corsetti plans to stay at the cutting edge of the field after graduation as an integrated-photonics researcher in industry or at a national lab. She would like to focus on trapped-ion quantum computing, which scientists are rapidly scaling up toward commercially viable systems, or other high-performance computing applications.

“You really need accelerated computing for any modern research area. It would be exciting and rewarding to contribute to high-performance computing that can enable a lot of other interesting research areas,” she says.

Paying it forward

In addition to making an impact with research, Corsetti is focused on making a personal impact in the lives of others. Through her involvement in MIT Graduate Hillel, she joined the Jewish Big Brothers Big Sisters of Boston, where she volunteers for the friend-to-friend program.

Participating in the program, which pairs adults who have disabilities with friends in the community for fun activities like watching movies or painting, has been an especially uplifting and gratifying experience for Corsetti.

She’s also enjoyed the opportunity to support, mentor, and bond with her fellow MIT EECS students, drawing on the advice she’s received throughout her own academic journey.

“Don’t trust feelings of imposter syndrome,” she advises others. “Keep moving forward, ask for feedback and help, and be confident that you will reach a point where you can make meaningful contributions to a team.”

Outside the lab, she enjoys playing classical music on the clarinet (her favorite piece is Leonard Bernstein’s famous overture to “Candide”), reading, and caring for a family of fish in her aquarium.


MIT student wins first-ever Stephen Hawking Junior Medal for Science Communication

Gitanjali Rao, a rising junior majoring in biological engineering, received the prestigious award created by the late theoretical physicist, cosmologist, and author.


Gitanjali Rao, a rising junior at MIT majoring in biological engineering, has been named the first-ever recipient of the Stephen Hawking Junior Medal for Science Communication. Presented by the Starmus Festival, the honor is a new category of the prestigious medal created by the late theoretical physicist, cosmologist, and author Stephen Hawking together with the festival.

“I spend a lot of time in labs,” says Rao, highlighting her Undergraduate Research Opportunities Program project in the Langer Lab. Along with her curiosity to explore, she also has a passion for helping others understand what happens inside the lab. “We very rarely discuss why science communication is important,” she says. “Stephen Hawking was incredible at that.”

Rao is the inventor of Epione, a device for early diagnosis of prescription opioid addiction, and Kindly, an anti-cyber-bullying service powered by AI and natural language processing. Kindly is now a United Nations Children's Fund “Digital Public Good” service and is accessible worldwide. These efforts, among others, brought her to the attention of the Starmus team.

The award ceremony was held last April at the Kennedy Center in Washington, where Rao gave a speech and met acclaimed scientists, artists, and musicians. “It was one for the books,” she says. “I met Brian May from Queen — he's a physicist.” Rao is also a musician in her own right — she plays bass guitar and piano, and she's been learning to DJ at MIT. “Starmus” is a portmanteau of “stars” and “music.”

Originally from Denver, Colorado, Rao attended a STEM-focused school before MIT. Looking ahead, she's open to graduate school, and dreams of launching a biotech startup when the right idea comes.

The medal comes with an internship opportunity that Rao hopes to use for fieldwork or experience in the pharmaceutical industry. She’s already secured a summer internship at Moderna, and is considering spending Independent Activities Period abroad. “Hopefully, I'll have a better idea in the next few months.”
 


How repetition helps art speak to us

Jay Keyser’s new book, “Play It Again, Sam,” makes the case that repeated motifs enhance our experience of artistic works.


Often when we listen to music, we just instinctually enjoy it. Sometimes, though, it’s worth dissecting a song or other composition to figure out how it’s built.

Take the 1953 jazz standard “Satin Doll,” written by Duke Ellington and Billy Strayhorn, whose subtle structure rewards a close listening. As it happens, MIT Professor Emeritus Samuel Jay Keyser, a distinguished linguist and an avid trombonist on the side, has given the song careful scrutiny.

To Keyser, “Satin Doll” is a glittering example of what he calls the “same/except” construction in art. A basic rhyme, like “rent” and “tent,” is another example of this construction, given the shared rhyming sound and the different starting consonants.

In “Satin Doll,” Keyser observes, both the music and words feature a “same/except” structure. For instance, the rhythm of the first two bars of “Satin Doll” is the same as that of the second two bars, but the pitch goes up a step in bars three and four. This intricate pattern prevails throughout the entire body of “Satin Doll,” forming what Keyser calls “a musical rhyme scheme.”

When lyricist Johnny Mercer wrote words for “Satin Doll,” he matched the musical rhyme scheme. One lyric for the first four bars is, “Cigarette holder / which wigs me / Over her shoulder / she digs me.” Other verses follow the same pattern.

“Both the lyrics and the melody have the same rhyme scheme in their separate mediums, words and music, namely, A-B-A-B,” says Keyser. “That’s how you write lyrics. If you understand the musical rhyme scheme, and write lyrics to match that, you are introducing a whole new level of repetition, one that enhances the experience.”
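
The scheme is easy to verify mechanically. Below is a toy sketch in Python (an illustration of this article’s point, not anything from Keyser’s book) that applies a deliberately crude rhyme test, matching the last few characters of each line, to the Mercer quatrain quoted above:

    # Toy check of the A-B-A-B scheme in the quoted Mercer lyric.
    # Two lines "rhyme" here if their last few characters match once
    # spaces are stripped; a naive stand-in for real phonetics.
    lines = ["Cigarette holder", "which wigs me",
             "Over her shoulder", "she digs me"]

    def crude_rhyme(a: str, b: str, n: int = 4) -> bool:
        return a.lower().replace(" ", "")[-n:] == b.lower().replace(" ", "")[-n:]

    print(crude_rhyme(lines[0], lines[2]))  # True: "holder" / "shoulder" (the A lines)
    print(crude_rhyme(lines[1], lines[3]))  # True: "wigs me" / "digs me" (the B lines)
    print(crude_rhyme(lines[0], lines[1]))  # False: an A line does not rhyme with a B line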

Now, Keyser has a new book out about repetition in art and its cognitive impact on us, scrutinizing “Satin Doll” along with many other works of music, poetry, painting, and photography. The volume, “Play It Again, Sam: Repetition in the Arts,” is published by the MIT Press. The title is partly a play on Keyser’s name.

Inspired by the Margulis experiment

The genesis of “Play It Again, Sam” dates back several years, when Keyser encountered an experiment conducted by musicologist Elizabeth Margulis, described in her 2014 book, “On Repeat.” Margulis found that when she altered modern atonal compositions to add repetition to them, audiences ranging from ordinary listeners to music theorists preferred these edited versions to the original works.

“The Margulis experiment really caused the ideas to materialize,” Keyser says. He then examined repetition across art forms for which research exists on the associated cognitive activity, especially music, poetry, and the visual arts. For instance, the brain has distinct locations dedicated to the recognition of faces, places, and bodies. Keyser suggests this is why, prior to the advent of modernism, painting was overwhelmingly mimetic.

Ideally, he suggests, it will be possible to more comprehensively study how our brains process art — to see if encountering repetition triggers an endorphin release, say. For now, Keyser postulates that repetition involves what he calls the 4 Ps: priming, parallelism, prediction, and pleasure. Essentially, hearing or seeing a motif sets the stage for it to be repeated, providing audiences with satisfaction when they discover the repetition.

With remarkable range, Keyser vigorously analyzes how artists deploy repetition and have thought about it, from “Beowulf” to Leonard Bernstein, from Gustave Caillebotte to Italo Calvino. Some artworks do deploy identical repetition of elements, such as the Homeric epics; others use the “same/except” technique.

Keyser is deeply interested in visual art displaying the “same/except” concept, such as Andy Warhol’s famous painting “Campbell’s Soup Cans.” It features four rows of eight soup cans, which are all the same — except for the kind of soup on each can.

“Discovering this ‘same/except’ repetition in a work of art brings pleasure,” Keyser says.

But why is this? Multiple experimental studies, Keyser notes, suggest that repeated exposure of a subject to an image — such as an infant’s exposure to its mother’s face — helps create a bond of affection. This is the “mere exposure” phenomenon, posited by social psychologist Robert Zajonc, who as Keyser notes in the book, studied in detail “the repetition of an arbitrary stimulus and the mild affection that people eventually have for it.”

This tendency also helps explain why manufacturers run ads featuring nothing but a product’s name: seen often enough, the name itself wins the viewer’s affection. However the mechanism connecting repetition with pleasure works, and whatever its original function, Keyser argues that many artists have successfully tapped into it, grasping that audiences like repetition in poetry, painting, and music.

A shadow dog in Albuquerque

In the book, Keyser’s emphasis on repetition generates some distinctive interpretive positions. In one chapter, he digs into Lee Friedlander’s well-known photo, “Albuquerque, New Mexico,” a street scene with a jumble of signs, wires, and buildings, often interpreted in symbolic terms: the American West frontier being submerged under postwar concrete and commerce.

Keyser, however, takes a markedly different view of the Friedlander photo. A dog sits near the middle of the frame; to the right is the shadow of a street sign. Keyser believes the shadow resembles the dog, creating a playful repetition within the photo.

“This particular photograph is really two photographs that rhyme,” Keyser says. “They’re the same, except one is the dog and one is the shadow. And that’s why that photograph is pleasurable, because you see that, even if you may not be fully aware of it. Sensing repetition in a work of art brings pleasure.”

“Play It Again, Sam” has received praise from arts practitioners, among others. George Darrah, principal drummer and arranger of the Boston Pops Orchestra, has called the book “extraordinary” in its “demonstration of the ways that poetry, music, painting, and photography engender pleasure in their audiences by exploiting the ability of the brain to detect repetition.” He adds that “Keyser has an uncanny ability to simplify complex ideas so that difficult material is easily understandable.”

In certain ways “Play It Again, Sam” contains the classic intellectual outlook of an MIT linguist. For decades, MIT-linked linguistics research has identified the universal structures of human language, revealing important similarities despite the seemingly wild variation of global languages. And here too, Keyser finds patterns that help organize an apparently boundless world of art. “Play It Again, Sam” is a hunt for structure.

Asked about this, Keyser acknowledges the influence of his longtime field on his current intellectual explorations, while noting that his insights about art are part of a greater investigation into our works and minds.

“I’m bringing a linguistic habit of mind to art,” Keyser says. “But I’m also pointing an analytical lens in the direction of natural predilections of the brain. The idea is to investigate how our aesthetic sense depends on the way the mind works. I’m trying to show how art can exploit the brain’s capacity to produce pleasure from non-art related functions.”


MIT engineers develop electrochemical sensors for cheap, disposable diagnostics

Electrodes coated with DNA could enable inexpensive tests with a long shelf-life, which could detect many diseases and be deployed in the doctor’s office or at home.


Using an inexpensive electrode coated with DNA, MIT researchers have designed disposable diagnostics that could be adapted to detect a variety of diseases, including cancer or infectious diseases such as influenza and HIV.

These electrochemical sensors make use of a DNA-chopping enzyme found in the CRISPR gene-editing system. When a target such as a cancerous gene is detected by the enzyme, it begins shearing DNA from the electrode nonspecifically, like a lawnmower cutting grass, altering the electrical signal produced.

One of the main limitations of this type of sensing technology is that the DNA that coats the electrode breaks down quickly, so the sensors can’t be stored for very long and their storage conditions must be tightly controlled, limiting where they can be used. In a new study, MIT researchers stabilized the DNA with a polymer coating, allowing the sensors to be stored for up to two months, even at high temperatures. After storage, the sensors were able to detect a prostate cancer gene that is often used to diagnose the disease.

The DNA-based sensors, which cost only about 50 cents to make, could offer a cheaper way to diagnose many diseases in low-resource regions, says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering at MIT and the senior author of the study.

“Our focus is on diagnostics that many people have limited access to, and our goal is to create a point-of-use sensor. People wouldn’t even need to be in a clinic to use it. You could do it at home,” Furst says.

MIT graduate student Xingcheng Zhou is the lead author of the paper, published June 30 in the journal ACS Sensors. Other authors of the paper are MIT undergraduate Jessica Slaughter, Smah Riki ’24, and graduate student Chao Chi Kuo.

An inexpensive sensor

Electrochemical sensors work by measuring changes in the flow of an electric current when a target molecule interacts with an enzyme. This is the same technology that glucose meters use to detect concentrations of glucose in a blood sample.

The electrochemical sensors developed in Furst’s lab consist of DNA adhered to an inexpensive gold leaf electrode, which is laminated onto a sheet of plastic. The DNA is attached to the electrode using a sulfur-containing molecule known as a thiol.

In a 2021 study, Furst’s lab showed that they could use these sensors to detect genetic material from HIV and human papillomavirus (HPV). The sensors detect their targets using a guide RNA strand, which can be designed to bind to nearly any DNA or RNA sequence. The guide RNA is linked to an enzyme called Cas12, which cleaves DNA nonspecifically when it is turned on and is in the same family of proteins as the Cas9 enzyme used for CRISPR genome editing.

If the target is present, it binds to the guide RNA and activates Cas12, which then cuts the DNA adhered to the electrode. That alters the current produced by the electrode, which can be measured using a potentiostat (the same technology used in handheld glucose meters).

“If Cas12 is on, it’s like a lawnmower that cuts off all the DNA on your electrode, and that turns off your signal,” Furst says.
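
In code terms, the readout reduces to a threshold test on the measured current. Here is a minimal sketch (with hypothetical current values and threshold; not the study’s actual measurement protocol or calibration):

    # Minimal sketch of the sensor readout: an intact DNA layer yields a
    # high baseline current; activated Cas12 "mows" the DNA off the
    # electrode and the current drops. All numbers here are made up.

    def target_detected(baseline_nA: float, measured_nA: float,
                        drop_threshold: float = 0.5) -> bool:
        # Flag a positive when the current falls below a set
        # fraction of the baseline reading.
        return measured_nA < drop_threshold * baseline_nA

    print(target_detected(100.0, 95.0))  # False: DNA layer intact, no target
    print(target_detected(100.0, 20.0))  # True: Cas12 cleaved the DNA layer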

In previous versions of the device, the DNA had to be added to the electrode just before it was used, because DNA doesn’t remain stable for very long. In the new study, the researchers found that they could increase the stability of the DNA by coating it with a polymer called polyvinyl alcohol (PVA).

This polymer, which costs less than 1 cent per coating, acts like a tarp that protects the DNA below it. Once deposited onto the electrode, the polymer dries to form a protective thin film.

“Once it’s dried, it seems to make a very strong barrier against the main things that can harm DNA, such as reactive oxygen species that can either damage the DNA itself or break the thiol bond with the gold and strip your DNA off the electrode,” Furst says.

Successful detection

The researchers showed that this coating could protect DNA on the sensors for at least two months, and it could also withstand temperatures up to about 150 degrees Fahrenheit. After two months, they rinsed off the polymer and demonstrated that the sensors could still detect PCA3, a prostate cancer gene that can be found in urine.

This type of test could be used with a variety of samples, including urine, saliva, or nasal swabs. The researchers hope to use this approach to develop cheaper diagnostics for infectious diseases, such as HPV or HIV, that could be used in a doctor’s office or at home. This approach could also be used to develop tests for emerging infectious diseases, the researchers say.

A group of researchers from Furst’s lab was recently accepted into delta v, MIT’s student venture accelerator, where they hope to launch a startup to further develop this technology. Now that the researchers can create tests with a much longer shelf-life, they hope to begin shipping them to locations where they could be tested with patient samples.

“Our goal is to continue to test with patient samples against different diseases in real world environments,” Furst says. “Our limitation before was that we had to make the sensors on site, but now that we can protect them, we can ship them. We don’t have to use refrigeration. That allows us to access a lot more rugged or non-ideal environments for testing.”

The research was funded, in part, by the MIT Research Support Committee and a MathWorks Fellowship.


New imaging technique reconstructs the shapes of hidden objects

By leveraging reflections from wireless signals like Wi-Fi, the system could allow robots to find and manipulate items that are blocked from view.


A new imaging technique developed by MIT researchers could enable quality-control robots in a warehouse to peer through a cardboard shipping box and see that the handle of a mug buried under packing peanuts is broken.

Their approach leverages millimeter wave (mmWave) signals, the same type of signals used in Wi-Fi, to create accurate 3D reconstructions of objects that are blocked from view.

The waves can travel through common obstacles like plastic containers or interior walls, and reflect off hidden objects. The system, called mmNorm, collects those reflections and feeds them into an algorithm that estimates the shape of the object’s surface.

This new approach achieved 96 percent reconstruction accuracy on a range of everyday objects with complex, curvy shapes, like silverware and a power drill. State-of-the-art baseline methods achieved only 78 percent accuracy.

In addition, mmNorm does not require additional bandwidth to achieve such high accuracy. This efficiency could allow the method to be utilized in a wide range of settings, from factories to assisted living facilities.

For instance, mmNorm could enable robots working in a factory or home to distinguish between tools hidden in a drawer and identify their handles, so they could more efficiently grasp and manipulate the objects without causing damage.

“We’ve been interested in this problem for quite a while, but we’ve been hitting a wall because past methods, while they were mathematically elegant, weren’t getting us where we needed to go. We needed to come up with a very different way of using these signals than what has been used for more than half a century to unlock new types of applications,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of a paper on mmNorm.

Adib is joined on the paper by research assistants Laura Dodds, the lead author, and Tara Boroushaki, and former postdoc Kaichen Zhou. The research was recently presented at the Annual International Conference on Mobile Systems, Applications and Services.

Reflecting on reflections

Traditional radar techniques send mmWave signals and receive reflections from the environment to detect hidden or distant objects, a technique called back projection.

This method works well for large objects, like an airplane obscured by clouds, but the image resolution is too coarse for small items like kitchen gadgets that a robot might need to identify.

In studying this problem, the MIT researchers realized that existing back projection techniques ignore an important property known as specularity. When a radar system transmits mmWaves, almost every surface the waves strike acts like a mirror, generating specular reflections.

If a surface is pointed toward the antenna, the signal will reflect off the object to the antenna, but if the surface is pointed in a different direction, the reflection will travel away from the radar and won’t be received.

“Relying on specularity, our idea is to try to estimate not just the location of a reflection in the environment, but also the direction of the surface at that point,” Dodds says.

They developed mmNorm to estimate what is called a surface normal, the direction a surface faces at a particular point in space, and to use those estimates to reconstruct the curvature of the surface at that point.

Combining surface normal estimations at each point in space, mmNorm uses a special mathematical formulation to reconstruct the 3D object.

The researchers created an mmNorm prototype by attaching a radar to a robotic arm, which continually takes measurements as it moves around a hidden item. The system compares the strength of the signals it receives at different locations to estimate the curvature of the object’s surface.

For instance, the antenna will receive the strongest reflections from a surface pointed directly at it and weaker signals from surfaces that don’t directly face the antenna.

Because multiple antennas on the radar receive some amount of reflection, each antenna “votes” on the direction of the surface normal based on the strength of the signal it received.

“Some antennas might have a very strong vote, some might have a very weak vote, and we can combine all votes together to produce one surface normal that is agreed upon by all antenna locations,” Dodds says.
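
One way to picture that voting step is as a strength-weighted average of direction vectors. The sketch below is illustrative only (the team’s actual mathematical formulation is not described in this article); it combines each antenna’s candidate normal, weighted by the reflection strength that antenna measured:

    # Illustrative strength-weighted voting on a surface normal.
    # Each antenna proposes a unit direction vector; stronger
    # reflections carry more weight. Vectors and weights are made up.
    import numpy as np

    def consensus_normal(candidate_dirs: np.ndarray,
                         signal_strengths: np.ndarray) -> np.ndarray:
        # Weighted sum of candidate directions, renormalized to unit length.
        weighted = (signal_strengths[:, None] * candidate_dirs).sum(axis=0)
        return weighted / np.linalg.norm(weighted)

    dirs = np.array([[0.0, 0.0, 1.0],   # surface faces this antenna head-on
                     [0.0, 0.6, 0.8],   # oblique view, weaker return
                     [0.6, 0.0, 0.8]])  # oblique view, weaker return
    strengths = np.array([1.0, 0.2, 0.2])
    print(consensus_normal(dirs, strengths))  # consensus tilts toward the strongest vote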

In addition, because mmNorm estimates the surface normal from all points in space, it generates many possible surfaces. To zero in on the right one, the researchers borrowed techniques from computer graphics, creating a 3D function that chooses the surface most representative of the signals received. They use this to generate a final 3D reconstruction.

Finer details

The team tested mmNorm’s ability to reconstruct more than 60 objects with complex shapes, like the handle and curve of a mug. It generated reconstructions with about 40 percent less error than state-of-the-art approaches, while also estimating the position of an object more accurately.

Their new technique can also distinguish between multiple objects, like a fork, knife, and spoon hidden in the same box. It also performed well for objects made from a range of materials, including wood, metal, plastic, rubber, and glass, as well as combinations of materials, but it does not work for objects hidden behind metal or very thick walls.

“Our qualitative results really speak for themselves. And the amount of improvement you see makes it easier to develop applications that use these high-resolution 3D reconstructions for new tasks,” Boroushaki says.

For instance, a robot can distinguish between multiple tools in a box, determine the precise shape and location of a hammer’s handle, and then plan to pick it up and use it for a task. One could also use mmNorm with an augmented reality headset, enabling a factory worker to see lifelike images of fully occluded objects.

It could also be incorporated into existing security and defense applications, generating more accurate reconstructions of concealed objects in airport security scanners or during military reconnaissance.

The researchers want to explore these and other potential applications in future work. They also want to improve the resolution of their technique, boost its performance for less reflective objects, and enable the mmWaves to effectively image through thicker occlusions.

“This work really represents a paradigm shift in the way we are thinking about these signals and this 3D reconstruction process. We’re excited to see how the insights that we’ve gained here can have a broad impact,” Dodds says.

This work is supported, in part, by the National Science Foundation, the MIT Media Lab, and Microsoft.


President Emeritus Reif reflects on successes as a technical leader

At a fireside chat, L. Rafael Reif and Anantha P. Chandrakasan discussed the importance of developing engineering leadership skills to solve the world’s most challenging problems.


As an electrical engineering student at Stanford University in the late 1970s, L. Rafael Reif was not only working on his PhD but also learning a new language.

“I didn’t speak English. And I saw that it was easy to ignore somebody who doesn’t speak English well,” Reif recalled. To him, that meant speaking with conviction.

“If you have tremendous technical skills, but you cannot communicate, if you cannot persuade others to embrace that, it’s not going to go anywhere. Without the combination, you cannot persuade the powers-that-be to embrace whatever ideas you have.”

Now MIT president emeritus, Reif recently joined Anantha P. Chandrakasan, chief innovation and strategy officer and dean of the School of Engineering (SoE), for a fireside chat. Their focus: the importance of developing engineering leadership skills — such as persuasive communication — to solve the world’s most challenging problems.

SoE’s Technical Leadership and Communication Programs (TLC) sponsored the chat. TLC teaches engineering leadership, teamwork, and technical communication skills to students, from undergrads to postdocs, through its four programs: Undergraduate Practice Opportunities Program (UPOP), Gordon-MIT Engineering Leadership Program (GEL), Communication Lab (Comm Lab), and Riccio-MIT Graduate Engineering Leadership Program (GradEL).

About 175 students, faculty, and guests attended the fireside chat. Relaxed, engaging, and humorous, Reif shared anecdotes and insights about technical leadership from his decades in leadership roles at MIT.

Reif had a transformational impact on MIT. Beginning as an assistant professor of electrical engineering in 1980, he rose to head of the Department of Electrical Engineering and Computer Science (EECS), then served as provost from 2005 to 2012 and MIT president from 2012 to 2022.

He was instrumental in creating the MIT Schwarzman College of Computing in 2018, as well as establishing and growing MITx online open learning and MIT Microsystems Technology Laboratories.

With an ability to peer over the horizon and anticipate what’s coming, Reif used an array of leadership skills to develop and implement clear visions for those programs.

“One of the things that I learned from you is that as a leader, you have to envision the future and make bets,” said Chandrakasan. “And you don’t just wait around for that. You have to drive it.”

Turning new ideas into reality often meant overcoming resistance. When Reif first proposed the College of Computing to some fellow MIT leaders, “they looked at me and they said, no way. This is too hard. It’s not going to happen. It’s going to take too much money. It’s too complicated. OK, then starts the argument.”

Reif seems to have relished “the argument,” or art of persuasion, during his time at MIT. Though hearing different perspectives never hurt.

“All of us have blind spots. I always try to hear all points of view. Obviously, you can’t integrate all of it. You might say, ‘Anantha, I heard you, but I disagree with you because of this.’ So, you make the call knowing all the options. That is something non-technical that I used in my career.”

On the technical side, Reif’s background as an electrical engineer shaped his approach to leadership.

“What’s beautiful about a technical education is that you understand that you can solve anything if you start with first principles. There are first principles in just about anything that you do. If you start with those, you can solve any problem.”

Also, applying systems-level thinking is critical — understanding that organizations are really systems with interconnected parts.

“That was really useful to me. Some of you in the audience have studied this. In a system, when you start tinkering with something over here, something over there will be affected. And you have to understand that. At a place like MIT, that’s all the time!”

Reif was asked: If he were assembling a dream team to tackle the world’s biggest challenges, what skills or capabilities would he want them to have?

“I think we need people who can see things from different directions. I think we need people who are experts in different disciplines. And I think we need people who are experts in different cultures. Because to solve the big problems of the planet, we need to understand how different cultures address different things.”

Reif’s upbringing in Venezuela strongly influenced his leadership approach, particularly when it comes to empathy, a key trait he values.

“My parents were immigrants. They didn’t have an education, and they had to do whatever they could to support the family. And I remember as a little kid seeing how people humiliated them because they were doing menial jobs. And I remember how painful it was to me. It is part of my fabric to respect every individual, to notice them. I have a tremendous respect for every individual, and for the ability of every individual that didn’t have the same opportunity that all of us here have to be somebody.”

Reif’s advice to students who will be the next generation of engineering leaders is to keep learning because the challenges ahead are multidisciplinary. He also reminded them that they are the future.

“What are our assets? The people in this room. When it comes to the ecosystem of innovation in America, what we work on is to create new roadmaps, expand the roadmaps, create new industries. Without that, we have nothing. Companies do a great job of taking what you come up with and making wonderful things with it. But the ideas, whether it’s AI, whether it’s deep learning, it comes from places like this.” 


Accelerating scientific discovery with AI

FutureHouse, co-founded by Sam Rodriques PhD ’19, has developed AI agents to automate key steps on the path toward scientific progress.


Several researchers have taken a broad view of scientific progress over the last 50 years and come to the same troubling conclusion: Scientific productivity is declining. It’s taking more time, more funding, and larger teams to make discoveries that once came faster and cheaper. Although a variety of explanations have been offered for the slowdown, one is that, as research becomes more complex and specialized, scientists must spend more time reviewing publications, designing sophisticated experiments, and analyzing data.

Now, the philanthropically funded research lab FutureHouse is seeking to accelerate scientific research with an AI platform designed to automate many of the critical steps on the path toward scientific progress. The platform is made up of a series of AI agents specialized for tasks including information retrieval, information synthesis, chemical synthesis design, and data analysis.

FutureHouse founders Sam Rodriques PhD ’19 and Andrew White believe that by giving every scientist access to their AI agents, they can break through the biggest bottlenecks in science and help solve some of humanity’s most pressing problems.

“Natural language is the real language of science,” Rodriques says. “Other people are building foundation models for biology, where machine learning models speak the language of DNA or proteins, and that’s powerful. But discoveries aren’t represented in DNA or proteins. The only way we know how to represent discoveries, hypothesize, and reason is with natural language.”

Finding big problems

For his PhD research at MIT, Rodriques sought to understand the inner workings of the brain in the lab of Professor Ed Boyden.

“The entire idea behind FutureHouse was inspired by this impression I got during my PhD at MIT that even if we had all the information we needed to know about how the brain works, we wouldn’t know it because nobody has time to read all the literature,” Rodriques explains. “Even if they could read it all, they wouldn’t be able to assemble it into a comprehensive theory. That was a foundational piece of the FutureHouse puzzle.”

Rodriques wrote about the need for new kinds of large research collaborations as the last chapter of his PhD thesis in 2019, and though he spent some time running a lab at the Francis Crick Institute in London after graduation, he found himself gravitating toward broad problems in science that no single lab could take on.

“I was interested in how to automate or scale up science and what kinds of new organizational structures or technologies would unlock higher scientific productivity,” Rodriques says.

When ChatGPT was released in November 2022, Rodriques saw a path toward more powerful models that could generate scientific insights on their own. Around that time, he also met Andrew White, a computational chemist at the University of Rochester who had been granted early access to GPT-4. White had built the first large language agent for science, and the researchers joined forces to start FutureHouse.

The founders started out wanting to create distinct AI tools for tasks like literature searches, data analysis, and hypothesis generation. They began with data collection, eventually releasing PaperQA in September 2024, which Rodriques calls the best AI agent in the world for retrieving and summarizing information in scientific literature. Around the same time, they released Has Anyone, a tool that lets scientists determine if anyone has conducted specific experiments or explored specific hypotheses.

“We were just sitting around asking, ‘What are the kinds of questions that we as scientists ask all the time?’” Rodriques recalls.

When FutureHouse officially launched its platform on May 1 of this year, it rebranded some of its tools. PaperQA is now Crow, and Has Anyone is now called Owl. Falcon is an agent capable of compiling and reviewing more sources than Crow. Another new agent, Phoenix, can use specialized tools to help researchers plan chemistry experiments. And Finch is an agent designed to automate data-driven discovery in biology.

On May 20, the company demonstrated a multi-agent scientific discovery workflow to automate key steps of the scientific process and identify a new therapeutic candidate for dry age-related macular degeneration (dAMD), a leading cause of irreversible blindness worldwide. In June, FutureHouse released ether0, a 24B open-weights reasoning model for chemistry.

“You really have to think of these agents as part of a larger system,” Rodriques says. “Soon, the literature search agents will be integrated with the data analysis agent, the hypothesis generation agent, an experiment planning agent, and they will all be engineered to work together seamlessly.”

Agents for everyone

Today anyone can access FutureHouse’s agents at platform.futurehouse.org. The company’s platform launch generated excitement in the industry, and stories have started to come in about scientists using the agents to accelerate research.

One of FutureHouse’s scientists used the agents to identify a gene that could be associated with polycystic ovary syndrome and come up with a new treatment hypothesis for the disease. Another researcher at the Lawrence Berkeley National Laboratory used Crow to create an AI assistant capable of searching the PubMed research database for information related to Alzheimer’s disease.

Scientists at another research institution have used the agents to conduct systematic reviews of genes relevant to Parkinson’s disease, finding FutureHouse’s agents performed better than general agents.

Rodriques says scientists who think of the agents less like Google Scholar and more like a smart assistant scientist get the most out of the platform.

“People who are looking for speculation tend to get more mileage out of ChatGPT o3 deep research, while people who are looking for really faithful literature reviews tend to get more out of our agents,” Rodriques explains.

Rodriques also thinks FutureHouse will soon get to a point where its agents can use the raw data from research papers to test the reproducibility of their results and verify their conclusions.

In the longer run, to keep scientific progress marching forward, Rodriques says FutureHouse is working on embedding its agents with tacit knowledge to be able to perform more sophisticated analyses while also giving the agents the ability to use computational tools to explore hypotheses.

“There have been so many advances around foundation models for science and around language models for proteins and DNA, that we now need to give our agents access to those models and all of the other tools people commonly use to do science,” Rodriques says. “Building the infrastructure to allow agents to use more specialized tools for science is going to be critical.”


Nth Cycle is bringing critical metals refining to the U.S.

Co-founded by Professor Desirée Plata, the company is already producing nickel and cobalt from battery scrap in Ohio.


Much as Middle Eastern producers dominated global oil production in the 1970s, China today dominates the global refining of critical metals that serve as the foundation of the United States economy. In the 1970s, America’s oil dependence led to shortages that slowed growth and brought huge spikes in prices. But in recent decades, U.S. fracking technology created a new way to extract oil, transforming the nation from one of the world’s largest oil importers to one of the largest exporters.

Today the U.S. needs another technological breakthrough to secure domestic supplies of metals like lithium, cobalt, copper, and rare earth elements, which are needed for everything from batteries to jet engines and electric motors. Nth Cycle thinks it has a solution.

The company was co-founded by MIT Associate Professor Desirée Plata, CEO Megan O’Connor, and Chief Scientist Chad Vecitis to recover critical metals from industrial waste and ores using a patented, highly efficient technology known as electro-extraction.

“America is an incredibly resource-rich nation — it’s just a matter of extracting and converting those resources for use. That’s the role of refining,” says O’Connor, who worked on electro-extraction as a PhD student with Plata, back when both were at Duke University. “By filling that gap in the supply chain, we can make the United States the largest producer of critical metals in the world.”

Since last year, Nth Cycle has been producing cobalt and nickel using its first commercial system in Fairfield, Ohio. The company’s modular refining systems, which are powered by electricity instead of fossil fuels, can be deployed in a fraction of the time of traditional metal refining plants. Now, Nth Cycle aims to deploy its modular systems around the U.S. and Europe to establish new supply chains for the materials that power our economy.

“About 85 percent of the world’s critical minerals are refined in China, so it’s an economic and national security issue for us,” O’Connor says. “Even if we mine the materials here — we do have one operational nickel mine in Michigan — we then ship it overseas to be refined. Those materials are required components of multiple industries. Everything from our phones to our cars to our defense systems depend on them. I like to say critical minerals are the new oil.”

From waste, an opportunity

In 2014, O’Connor and Plata attended a talk by Vecitis, then a professor at Harvard University, in which he discussed his work using electrochemical filters to destroy contaminants in pharmaceutical wastewater. As part of the research, he noticed the material was reacting with metal to create crystalline copper in the filters. Following the talk, Plata asked Vecitis if he’d ever thought about using the approach for metal separation. He hadn’t but was excited to try.

At the time, Plata and O’Connor were studying mineral-dense wastewater created as a byproduct of hydraulic fracturing for oil and gas.

“The original thought was: Could we use this technology to extract those metals?” O’Connor recalls.

The focus shifted to using the technology to recover metals from electronics waste, including sources like old phones, electric vehicles, and smartwatches.

Today, manufacturers and electronic-waste facilities grind up end-of-life materials and send them to huge chemical refineries overseas, which heat the metal into a molten liquid and put it through a series of acids and bases to distill the waste back into a pure form of the desired metal.

“Each of those acids and bases have to be transported as hazardous goods, and the process for making them has a large greenhouse gas and energy footprint,” Plata explains. “That makes the economics difficult to square in anything but huge, centralized facilities — and even then it’s a challenge.”

The United States and Europe have an abundance of end-of-life scrap material, but it’s dispersed, and environmental regulations have left the West few scalable refining options.

Instead of building a refinery, Nth Cycle’s team has built a modular refining system — dubbed “The Oyster” — which can reduce costs, waste, and time-to-market by being co-located onsite with recyclers, miners, and manufacturers. The Oyster uses electricity, chemical precipitation, and filtration to create the same metal refining chemicals as traditional methods. Today, the system can process more than 3,000 metric tons of scrap per year and be customized to produce different metals.

“Electro-extraction is one of the cleanest ways to recover metal,” Plata says.

Nth Cycle received early support from the U.S. Department of Energy, and when Plata came to MIT in 2018, Nth Cycle became part of the MIT Industrial Liaison Program’s STEX25 startup accelerator.

“What’s so important about being at a place like MIT is the entrepreneurial ecosystem and the ‘tough tech’ ethos of Cambridge,” Plata explains. “That’s been hugely important to the success of Nth Cycle and one of the reasons we moved the company to the greater Boston area. Being able to access talent and patient capital was key.”

Onshoring metal refining

Plata says one of the proudest moments of her career came last year at the groundbreaking ceremony for Nth Cycle’s first mixed hydroxide (nickel and cobalt) production facility in Ohio. Many of Nth Cycle’s new employees at the facility had previously worked at auto and chemical facilities in the town but are now working for what Nth Cycle calls the first commercial nickel refining facility for scrap in the country.

“O’Connor’s vision of elevating people while elevating the economy is an inspiring standard of practice,” Plata says.

Nth Cycle will own and operate other Oyster systems in a business model O’Connor describes as “refining as a service,” where customers own the final product. The company is looking to partner with scrap yards and industrial scrap collection facilities as well as manufacturers that generate waste.

Nth Cycle is mostly working to recover metals from batteries today, but it has also used its process to recover cobalt and nickel from spent catalyst material in the oil and gas industry. Moving forward, Nth Cycle hopes to apply its process to the biggest waste sources of them all: mining.

“The world needs more critical minerals like cobalt, nickel, lithium, and copper,” O’Connor says. “The only two places you can get those materials are from recycling and mining, and both of those sources need to be chemically refined. That’s where Nth Cycle comes in. A lot of people have a negative perception of mining, but if you have a technology that can reduce waste and reduce emissions, that’s how you get more mining in regions like the U.S. That’s the impact we want this technology to have in the Western world.”


Summer 2025 reading from MIT

Enjoy these recent titles from Institute faculty and staff.


Summer is the perfect time to curl up with a good book — and MIT authors have had much to offer in the past year. The following titles represent some of the books published in the past 12 months by MIT faculty and staff. In addition to links for each book from its publisher, the MIT Libraries has compiled a helpful list of the titles held in its collections.

Looking for more literary works from the MIT community? Enjoy our book lists from 2024, 2023, 2022, and 2021.

Happy reading!

Science

“So Very Small: How Humans Discovered the Microcosmos, Defeated Germs — and May Still Lose the War Against Infectious Disease” (Penguin Random House, 2025)
By Thomas Levenson, professor of science writing

For centuries, people in the West, believing themselves to hold God-given dominion over nature, thought too much of humanity and too little of microbes. Nineteenth-century scientists finally made the connection. Life-saving methods to control infections and contain outbreaks soon followed. Next came the antibiotic era in the 1930s. Yet, less than a century later, the promise of that revolution is receding due to years of overuse. Is our self-confidence getting the better of us again?

“The Miraculous from the Material: Understanding the Wonders of Nature” (Penguin Random House, 2024)
By Alan Lightman, professor of the practice of humanities

Nature is capable of extraordinary phenomena. Standing in awe of those phenomena, we experience a feeling of connection to the cosmos. For Lightman, just as remarkable is that all of what we see around us — soap bubbles, scarlet ibises, shooting stars — are made out of the same material stuff and obey the same rules and laws. Pairing 36 full-color photos evoking some of nature’s most awe-inspiring phenomena with personal essays, “The Miraculous from the Material” explores the fascinating science underlying the natural world.

Technology and society

“The Analytics Edge in Healthcare” (Dynamic Ideas, 2025)
By Dimitris Bertsimas, vice provost for MIT Open Learning, Boeing Leaders for Global Operations Professor of Management, associate dean for business analytics, and professor of operations research; Agni Orfanoudaki; and Holly Wiberg

Analytics is transforming health care operations, empowering medical professionals and administrators to leverage data and models to make better decisions. This book provides a practical introduction to this exciting field. The first part establishes the technical foundations of health care analytics, spanning machine learning and optimization. The second part presents integrated case studies that cover a wide range of clinical specialties and problem types using descriptive, predictive, and prescriptive analytics.

“Longevity Hubs: Regional Innovation for Global Aging” (MIT Press, 2024)
Edited by Joseph F. Coughlin, senior research scientist and MIT AgeLab director, and Luke Yoquinto, MIT AgeLab research associate 

Populations around the world are aging, and older adults’ economic influence stands to grow markedly in future decades. This volume brings together entrepreneurs, researchers, designers, public servants, and others to address the multifaceted concerns of aging societies and to explore the possibility that certain regions will distinguish themselves as longevity hubs: home to disproportionate economic and innovative activity for older populations.

“Data, Systems, and Society: Harnessing AI for Societal Good” (Cambridge University Press, 2025)
By Munther Dahleh, the William A. Coolidge Professor of Electrical Engineering and Computer Science and director of the Institute for Data, Systems, and Society (IDSS)

Harnessing the power of data and artificial intelligence (AI) methods to tackle complex societal challenges requires transdisciplinary collaborations across academia, industry, and government. In this book, Dahleh, founder of IDSS, offers a blueprint for researchers, professionals, and institutions to create approaches to problems of high societal value using innovative, holistic, data-driven methods.

“SuperShifts: Transforming How We Live, Learn, and Work in the Age of Intelligence” (Wiley, 2025)
By Ja-Naé Duane, academic research fellow at the MIT Center for Information Systems Research, and Steve Fisher

This book describes how we’re at the end of one 200-year arc and embarking on another. With this new age of intelligence, Duane and Fisher highlight the catalysts for change currently affecting individuals, businesses, and society as a whole. They also provide a model for transformation that utilizes a holistic view of making radical change through three lenses: you as a leader, your organization, and society.

“Tech Agnostic: How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation” (MIT Press, 2024)
By Greg Epstein, humanist chaplain

Today’s technology has overtaken religion as the chief influence on 21st-century life and community. In “Tech Agnostic,” Epstein explores what it means to be a critical thinker with respect to this new faith. Encouraging readers to reassert their common humanity beyond the seductive sheen of “tech,” this book argues for tech agnosticism — not worship — as a way of life.

“The New Lunar Society: An Enlightenment Guide to the Next Industrial Revolution” (MIT Press, 2025)
By David Mindell, the Dibner Professor of the History of Engineering and Manufacturing and professor of aeronautics and astronautics 

Climate change, global disruption, and labor scarcity are forcing us to rethink the underlying principles of industrial society. In this book, Mindell envisions this new industrialism from the fundamentals, drawing on the 18th century when first principles were formed at the founding of the Industrial Revolution. While outlining the new industrialism, he tells the story of the Lunar Society, a group of engineers, scientists, and industrialists who came together to apply the principles of the Enlightenment to industrial processes.

“Output: An Anthology of Computer-Generated Text, 1953–2023” (MIT Press, 2024)
Edited by Nick Montfort, professor of digital media, and Lillian-Yvonne Bertram

The discussion of computer-generated text has recently reached a fever pitch but largely omits the long history of work in this area — text generation, as it happens, was not invented yesterday in Silicon Valley. This anthology aims to correct that omission by gathering seven decades of English-language texts produced by generation systems and software, long before ChatGPT and Claude.

Education, work, and innovation

“Retiring: Creating a Life That Works for You” (Routledge, 2025)
By Lotte Bailyn, the T Wilson Professor of Management, Emerita and professor emerita of work and organization studies; Teresa M. Amabile; Marcy Crary; Douglas T. Hall; and Kathy E. Kram

Whether they’re among the 73 million baby boomers reaching their full retirement benefit age or zoomers just entering the workforce, at some point most working Americans will retire. The optimal approach to retirement is unique to each person, but this book offers wisdom and anecdotes from more than 120 people and detailed interviews with 14 “stars” regarding their retirement transitions.

“Accelerating Innovation: Competitive Advantage through Ecosystem Engagement” (MIT Press, 2025)
By Phil Budden, senior lecturer of technological innovation, entrepreneurship, and strategic management; and Fiona Murray, associate dean for innovation, the William Porter Professor of Entrepreneurship, and professor of technological innovation, entrepreneurship, and strategic management

Leaders in large organizations face continuous pressure to innovate, and few possess the internal resources needed to keep up with rapid advances in science and technology. But looking beyond their own organizations, most face a bewildering landscape of external resources. In “Accelerating Innovation,” leaders will find a practical guide to this external landscape. Budden and Murray provide directions for navigating innovation ecosystems — those hotspots worldwide where researchers, entrepreneurs, and investors congregate.

“Writing, Thinking, and the Brain: How Neuroscience Can Improve Writing Instruction” (Teachers College Press, 2024)
By Jovi R. S. Nazareno, learning science and education outreach specialist at MIT Open Learning; Tracey Tokuhama-Espinosa; and Christopher Rappleye

Writing is the highest form of thinking, as evidenced by neuroimaging that shows how more neural networks are activated simultaneously during writing than during any other cognitive activity. This book will help teachers understand how the brain learns to write by unveiling 15 stages of thinking that underpin the writing process, along with targeted ways to stimulate them to maximize each individual’s writing potential.

“Entrepreneurship: Choice and Strategy” (Norton Economics, 2024)
By Erin L. Scott, senior lecturer of technological innovation, entrepreneurship, and strategic management; Scott Stern, the David Sarnoff Professor of Management of Technology and professor of technological innovation, entrepreneurship, and strategic management; and Joshua Gans

Building on more than two decades of academic research with thousands of companies and MIT students, Scott, Stern, and Gans have developed a systematic approach for startup leadership. They detail four key choices entrepreneurs must make, and “four strategic approaches to find and frame opportunities.”

“Failure by Design: The California Energy Crisis and the Limits of Market Planning” (University of Chicago, 2024)
By Georg Rilinger, the Fred Kayne Career Development Assistant Professor of Entrepreneurship and assistant professor of technological innovation, entrepreneurship, and strategic management

The California electricity crisis in 2000 caused billions in losses and led to bankruptcy for one of the state’s largest utilities. More than 20 years later, the question remains: Why did the newly created electricity markets fail? In “Failure by Design,” Rilinger explores practical obstacles to market design to offer a new explanation for the crisis — one that moves beyond previous interpretations that have primarily blamed incompetent politicians or corrupt energy sellers.

Culture, humanities, and social sciences

“Chasing the Pearl-Manuscript: Speculation, Shapes, Delight” (University of Chicago Press, 2025)
By Arthur Bahr, professor of literature

In this book, Bahr explores the four poems and 12 illustrations of the “Pearl-Manuscript,” the only surviving medieval copy of two of the best-known Middle English poems: “Pearl” and “Sir Gawain and the Green Knight.” He explores how the physical manuscript enhances our perception of the poetry, drawing on recent technological advances that show it to be a more complex piece of material, visual, and textual art than previously understood. By connecting the manuscript’s construction to the text’s intricate language, Bahr suggests new ways to understand the power of poetry.

“Taxation and Resentment: Race, Party, and Class in American Tax Attitudes” (Princeton University Press, 2025)
By Andrea Campbell, the Arthur and Ruth Sloan Professor of Political Science

Most Americans want the rich to pay more to fund government, yet favor regressive over progressive taxes. Why this policy-preference gap? In this book, Campbell describes how convoluted tax code confuses the public about who pays and who benefits, so tax preferences do not turn on principles, interests, or even party. Instead, race and racism play large roles, and tax skepticism among Americans of all stripes helps the rich and anti-tax forces undermine progressivity.

“Uprooted: How post-WWII Population Transfers Remade Europe” (Cambridge University Press, 2024)
By Volha Charnysh, the Ford Career Development Associate Professor of Political Science

Each year, millions of people are uprooted from their homes by wars, repression, natural disasters, and climate change. In “Uprooted,” Charnysh presents a fresh perspective on the consequences of mass displacement, arguing that accommodating the displaced population can strengthen receiving states and benefit local economies. With rich insights and compelling evidence, the book challenges common assumptions about the costs of forced displacement and cultural diversity and proposes a novel mechanism linking wars to state-building.

“Crime, Insecurity, and Community Policing: Experiments on Building Trust” (Cambridge University Press, 2024)
By Fotini Christia, the Ford International Professor of the Social Sciences; Graeme Blair; and Jeremy M. Weinstein

How can societies reduce crime without exacerbating adversarial relationships between the police and citizens? Through field experiments in a variety of political contexts, this book presents the outcome of a major research initiative into the efficacy of community policing. Scholars uncover whether, and under what conditions, this influential strategy for tackling crime and insecurity is effective. With its highly innovative approach to cumulative learning, this writing represents a new frontier in the study of police reform.

“Letterlocking: The Hidden History of the Letter” (MIT Press, 2025)
By Jana Dambrogio, the Thomas F. Peterson Conservator at MIT Libraries, and Daniel Starza Smith 

Before the invention of the gummed envelope in the 1830s, how did people secure their private letters? The answer is letterlocking — the ingenious process of securing a letter using a combination of folds, tucks, slits, or adhesives such as sealing wax, so that it becomes its own envelope. In this book, Dambrogio and Starza Smith, experts who have pioneered the field over the last 10 years, tell the fascinating story of letterlocking within epistolary history, drawing on real historical examples from all over the world.

“Long-Term Care around the World” (University of Chicago Press, 2025)
Edited by Jonathan Gruber, the Ford Professor of Economics and head of the Department of Economics, and Kathleen McGarry

As formal long-term care becomes unaffordable for seniors in many countries, public systems and unpaid caregivers increasingly bear the burden of supporting the world’s aging population. “Long-Term Care around the World” is a comparative analysis of long-term care in 10 wealthy countries that considers the social costs of both formal and informal care — which is critical, given that informal unpaid care is estimated to account for one-third of all long-term care spending.

“Empty Vessel: The Global Economy in One Barge” (Penguin Random House, 2025)
By Ian Kumekawa, lecturer of history

What do a barracks for British troops in the Falklands War, a floating jail off the Bronx, and temporary housing for VW factory workers in Germany have in common? The Balder Scapa: a single barge that served all three roles. Through this one vessel, Kumekawa illustrates many currents: globalization, the transience of economic activity, and the hazy world of transactions many call “the offshore,” the lightly regulated sphere of economic activity that encourages short-term actions.

“The Price of Our Values: The Economic Limits of Moral Life” (University of Chicago Press, 2025)
By David Thesmar, the Franco Modigliani Professor of Financial Economics and professor of finance, and Augustin Landier

Two economists examine the interplay between our desire to be good, the personal costs of being good, and the point at which people abandon goodness due to its costs. Aided by the results of two surveys, they find that the answers to modern moral dilemmas are economic, and often highly predictable. Our values may guide us, but we are also forced to consider economic costs to settle decisions.

“Spheres of Injustice: The Ethical Promise of Minority Presence” (MIT Press, 2025)
By Bruno Perreau, the Cynthia L. Reed Professor of French Studies 

How can the rights of minorities be protected in democracies? The question has been front and center in the U.S. since the Supreme Court struck down affirmative action. In Europe too, minority politics are being challenged. The very notion of “minority” is being questioned, while the notion of a “protected class” risks encouraging competition among minorities. In “Spheres of Injustice,” Perreau demonstrates how we can make the fight against discrimination beneficial for all.

“Attention, Shoppers! American Retail Capitalism and the Origins of the Amazon Economy” (Princeton University Press, 2025)
By Kathleen Thelen, the Ford Professor of Political Science

This book traces the evolution of U.S. retailing from the late 19th century to today, uncovering the roots of a bitter equilibrium where large low-cost retailers dominate and vast numbers of low-income families now rely on them to make ends meet. Thelen reveals how large discount retailers have successfully exploited a uniquely permissive regulatory landscape to create a shopper’s paradise built on cheap labor.

“Routledge Handbook of Space Policy” (Routledge, 2024)
Chapter by Danielle R. Wood, associate professor in the Program in Media Arts and Sciences and associate professor in aeronautics and astronautics

In her chapter, “The Expanding Sphere of Human Responsibility for Sustainability on Earth and in Space,” Wood proposes a multifaceted definition of sustainability and explores how the definition can be exercised as humans expand activity in space. Building on the tradition of consensus building on concepts of sustainable development through United Nations initiatives, Wood asserts that sustainability for human activity in space requires consideration of three types of responsibility: economic, social, and environmental.

“Victorian Parlour Games: A Modern Host’s Guide to Classic Fun for Everyone” (Chronicle Books, 2024)
By Ned Wolfe, marketing and communications assistant at MIT Libraries

“Victorian Parlour Games” is a beautifully designed and compact hardcover volume full of the classic, often silly, games played in the late 19th century. The Victorians loved fun and played hundreds of party games; this endlessly delightful book collects some of the very best for your reference and pleasure.

Arts, architecture, planning, and design

“Against Reason: Tony Smith, Sculpture, and Other Modernisms” (MIT Press, 2024)
Chapter by Judith Barry, professor in the Art, Culture, and Technology Program, with Kelli Anderson

This collection of essays reveals the depth and complexity of the sculpture of American modernist Tony Smith, placing his multifaceted practice in dialogue with contemporary voices. Barry’s chapter, “New Piece: Elective Geometries,” describes the transformation of Smith’s sculpture into the form of a flipbook and centerpiece “pop-up.”

“Steina” (MIT Press, 2025)
Edited by Natalie Bell, curator at the MIT List Visual Arts Center

Accompanying the related exhibition at MIT List Visual Arts Center and Buffalo AKG Art Museum, “Steina” brings renewed recognition to Steina (b. 1940, Iceland), tracing her oeuvre from early collaborative works with her partner Woody Vasulka to her independent explorations of optics and a liberated, non-anthropocentric subjectivity.

“Jewish Theatrical Resources: A Guide for Theaters Producing Jewish Work” (Alliance for Jewish Theater, 2025)
Chapter by Marissa Friedman, marketing and communications manager in the Art, Culture, and Technology Program; Jenna Clark Embry; Robin Goldberg; Gabrielle Hoyt; Stephanie Kane; Alix Rosenfeld; and Marissa Shadburn

Produced by the Alliance for Jewish Theatre, this guide was created to help non-Jewish theaters produce Jewish plays with authenticity, cultural awareness, and care. Friedman contributes a chapter on dramaturgy, exploring how the primary role of a dramaturg is to support a playwright and production team in articulating their artistic vision, and setting forth an ideal model for the dramaturgy of a Jewish play, with both a theatrical dramaturg and a Jewish dramaturg.

“Play It Again, Sam: Repetition in the Arts” (MIT Press, 2025)
By Samuel Jay Keyser, the Peter de Florez Professor Emeritus of Linguistics

Leonard Bernstein, in his famous Norton Lectures, extolled repetition, saying that it gave poetry its musical qualities and that music theorists who refused to take it seriously did so at their peril. “Play It Again, Sam” takes Bernstein seriously. In this book, Keyser explores why we enjoy works of poetry, music, and painting, and how repetition plays a central part in the pleasure.

“The Moving Image: A User’s Manual” (MIT Press, 2025)
By Peter B. Kaufman, associate director of development at MIT Open Learning

Video is today’s most popular information medium. Two-thirds of the world’s internet traffic is video. Americans get their news and information more often from screens and speakers than through any other means. “The Moving Image” is the first authoritative account of how we have arrived here, together with the first definitive manual to help writers, educators, and publishers use video more effectively.

“Beyond Ruins: Reimagining Modernism” (ArchiTangle, 2024)
Edited by Raafat Majzoub SM ’17, visiting lecturer at the Art, Culture, and Technology Program; and Nicolas Fayad

This book explores the renovation of modern architecture in the Global South as a tool for self-determination and community-building. Focusing on the Oscar Niemeyer Guest House in Tripoli, Lebanon, Majzoub and Fayad examine heritage as a political and material process. Through case studies, visual essays, and conversations with architects, artists, and theorists, the book addresses challenges of preservation, gaps in archiving, and the need for new forms of architectural practice.

“The Equitably Resilient City: Solidarities and Struggles in the Face of Climate Crisis” (MIT Press, 2024)
By Lawrence J. Vale, the Ford Professor of Urban Design and Planning and associate dean of the MIT School of Architecture and Planning; and Zachary B. Lamb

Too often the places most vulnerable to climate change are those that are home to people with the fewest economic and political resources. And while some leaders are starting to take action to reduce climate risks, many early adaptation schemes have actually made preexisting inequalities worse. In this book, Vale and Lamb ask how cities can adapt to climate change and other threats while also doing right by disadvantaged residents.

Novel and biography

“The Novice of Thanatos: An Epic Dark Fantasy of Horror, Death, and Necromancy” (Satirrell Publishing, 2025)
By Scott Austin Tirrell, director of administration and finance at the Art, Culture, and Technology Program

A fantasy novel that follows 11-year-old Mishal, a gifted yet troubled boy inducted into the secretive Order of Thanatos. Set in the grim and mystic realm of Lucardia, the story is framed as a first-person memoir chronicling Mishal’s initiation as a novice psychopomp — one who guides the dead across the Threshold into the afterlife. As Mishal navigates the Order’s rigid hierarchy, academic rigor, and spiritual mysteries, he begins to uncover unsettling truths about death, the soul, and the hidden agendas of those in power. Haunted by a spirit he cannot abandon and burdened by a forbidden artifact, Mishal must decide whom to trust and what to believe as his abilities grow — and as the line between duty and damnation begins to blur.

For young readers

“I Love You Bigger Than Everything That’s Big” (Stillwater River Publications, 2024)
By Lindsay Bartholomew, exhibit content and experience developer at MIT Museum, and illustrated by Sequoia Bostick

How much can you love someone? Higher than you can reach? Longer than a river? Bigger than the sky? The real answer — bigger than everything that’s big!

“A Century for Caroline” (Denene Millner Books / Simon and Schuster, 2025)
By Kaija Langley, director of development at MIT Libraries, and illustrated by TeMika Grooms

A great-grandma imparts the wisdom gained over her 100 years to an eager little girl in this tender picture book tribute to family and living a long, purposeful, beautiful life.

“All the Rocks We Love” (Penguin Random House, 2024)
By Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences, and Lisa Varchol Perron, and illustrated by David Scheirer

It’s no secret that children love rocks: They appear in jacket pockets, on windowsills, in the car, in their hiding places, and most often, in little grips. This book is an appreciation of rocks’ versatility and appeal, paired with the presentation of real types of rocks and their play-worthy attributes. 


Evelyn Wang: A new energy source at MIT

MIT’s first vice president for energy and climate is working to accelerate research and development toward transformational solutions.


Evelyn Wang ’00 knows a few things about engineering solutions to hard problems. After all, she invented a way to pull water out of thin air.

Now, Wang is applying that problem-solving experience — and an enduring sense of optimism — toward the critical issue of climate change, to strengthen the American energy economy and ensure resilience for all.

Wang, a mechanical engineering professor by trade, began work this spring as MIT’s first vice president for energy and climate, overseeing the Institute’s expanding work on climate change. That means broadening the Institute’s already-wide research portfolio, scaling up existing innovations, seeking new breakthroughs, and channeling campus community input to drive work forward.

“MIT has the potential to do so much, when we know that climate, energy, and resilience are paramount to events happening around us every day,” says Wang, who is also the Ford Professor of Engineering at MIT. “There’s no better place than MIT to come up with the transformational solutions that can help shape our world.”

That also means developing partnerships with corporate allies, startups, government, communities, and other organizations. Tackling climate change, Wang says, “requires a lot of partnerships. It’s not an MIT-only endeavor. We’re going to have to collaborate with other institutions and think about where industry can help us deploy and scale so the impact can be greater.”

She adds: “The more partnerships we have, the more understanding we have of the best pathways to make progress in difficult areas.”

From MIT to ARPA-E

An MIT faculty member since 2007, Wang leads the Device Research Lab. Along with collaborators, she identifies new materials and optimizations based on heat and mass transport processes, unlocking the creation of leading-edge innovations. Her development of the device that extracts water from even very dry air led Foreign Policy magazine to name her its 2017 Global ReThinker, and she won the 2018 Eighth Prince Sultan bin Abdulaziz International Prize for Water.

Her research also extends to areas such as energy and desalination. In 2016, Wang and several colleagues announced a device based on nanophotonic crystals with the potential to double the amount of power produced by a given area of solar panels, which led one of her graduate researchers on the project to co-found the startup Antora Energy. More recently, Wang and colleagues developed an aerogel that improves window insulation, now being commercialized by her former graduate students through the startup AeroShield.

Wang also recently spent two years as director of the U.S. Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E), which supports early-stage R&D on energy generation, storage, and use. Returning to MIT, she began her work as vice president for energy and climate in April, engaging with researchers, holding community workshops, and planning to build partnerships.

“I’ve been energized coming back to the Institute, given the talented students, the faculty, the staff. It’s invigorating to be back in this community,” Wang says. “People are passionate, excited, and mission-driven, and that’s the energy we need to make a big impact in the world.”

Wang is also working to help align the Institute’s many existing climate efforts. This includes the Climate Project at MIT, an Institute-wide presidential initiative announced in 2024, which aims to accelerate and scale up climate solutions while generating new tools and policy proposals. All told, about 300 MIT faculty conduct research related to climate issues in one form or another.

“The fact that there are so many faculty working on climate is astounding,” Wang says. “Everyone’s doing exciting work, but how can we leverage our unique strengths to create something bigger than the sum of its parts? That’s what I’m working toward. We’ve spun out so many technologies. How do we do more of that? How do we do that faster, and in a way so the world will feel the impact?”

A deep connection to campus — and strong sense of optimism

Understanding MIT is one of Wang’s strengths, given that she has spent over two decades at the Institute.

Wang earned her undergraduate degree from MIT in mechanical engineering, and her MS and PhD in mechanical engineering from Stanford University. She has held several chaired faculty positions at MIT. In 2008, Wang was named the Esther and Harold E. Edgerton Assistant Professor; in 2015, she was named the Gail E. Kendall Professor; and in 2021, she became the Ford Professor of Engineering. Wang served as head of the Department of Mechanical Engineering from 2018 through 2022.

As it happens, Wang’s parents, Kang and Edith, met as graduate students at the Institute. Her father, an electrical engineer, became a professor at the University of California at Los Angeles. Wang also met her husband at MIT, and both of her brothers graduated from the Institute.

Along with her deep institutional knowledge, administrative experience, and track record as an innovator, Wang is bringing several other things to her new role as vice president for energy and climate: a sense of urgency about the issue, coupled with a continual sense of optimism that innovators can meet society’s needs.

“I think optimism can make a difference, and is great to have in the midst of collective challenge,” Wang says. “We’re such a mission-driven university, and people come here to solve real-world problems.”

That hopeful approach is why Wang describes the work not only as a challenge but also as a generational opportunity. “We have the chance to design the world we want,” she says, “one that’s cleaner, more sustainable, and more resilient. This future is ours to shape and build together.”

Wang thinks MIT contains many examples of world-shaping progress. She cites MIT’s announcement this month of the creation of the Schmidt Laboratory for Materials in Nuclear Technologies, at the MIT Plasma Science and Fusion Center, to conduct research on next-generation materials that could help enable the construction of fusion power plants. Another example Wang references is MIT research earlier this year on developing clean ammonia, a way to make the world’s most widely produced chemical with drastically reduced greenhouse gas emissions.

“Those solutions could be breakthroughs,” Wang says. “Those are the kinds of things that give us optimism. There’s still a lot of research to be done, but it suggests the potential of what our world can be.”

Optimism: There’s that word again.

“Optimism is the only way to go,” Wang says. “Yes, the world is challenged. But this is where MIT’s strengths — in research, innovation, and education — can bring optimism to the table.”


Accelerating hardware development to improve national security and innovation

The alumni-founded startup Nominal has built a platform for building and testing complex systems like fighter jets, nuclear reactors, rockets, and robots.


Modern fighter jets contain hundreds or even thousands of sensors. Some of those sensors collect data every second, others every nanosecond. For the engineering teams building and testing those jets, all those data points are hugely valuable — if they can make sense of them.

Nominal makes an advanced software platform for engineers building complex systems ranging from fighter jets to nuclear reactors, satellites, rockets, and robots. Nominal’s flagship product, Nominal Core, helps teams organize, visualize, and securely share data from tests and operations. The company’s other product, Nominal Connect, helps engineers build custom applications for automating and syncing their hardware systems.

“It’s a very technically challenging problem to take the types of data that our customers are generating and get them into a single place where people can collaborate and get insights,” says Nominal co-founder Jason Hoch ’13. “It’s hard because you’re dealing with a lot of different data sources, and you want to be able to correlate those sources and apply mathematical formulas. We do that automatically.”

Hoch started Nominal with Cameron McCord ’13, SM ’14 and Bryce Strauss after the founders had to work with generic data tools or build their own solutions at places like Lockheed Martin and Anduril. Today, Nominal is working with organizations in aerospace, defense, robotics, manufacturing, and energy to accelerate the development of products critical for applications in U.S. national security and beyond.

“We built Nominal to take the best innovations in software and data technology and tailor them to the workflows that engineers go through when building and testing hardware systems,” McCord says. “We want to be the data and software backbone across all of these types of organizations.”

Accelerating hardware development

Hoch and McCord met during their first week at MIT and joined the same fraternity as undergraduates. Hoch double majored in mathematics and computer science and engineering, and McCord participated in the Navy Reserve Officers’ Training Corps (NROTC) while majoring in physics and nuclear science and engineering.

“MIT let me flex my technical skills, but I was also interested in the broader implications of technology and national security,” McCord says. “It was an interesting balance where I was learning the hardcore engineering skills, but always having a wider aperture to understand how the technology I was learning about was going to impact the world.”

Following MIT, McCord spent eight years in the Navy before working at the defense technology company Anduril, where he was charged with building the software systems to test different products. Hoch also worked at the intelligence and defense-oriented software company Palantir.

McCord met Strauss, who had worked as an engineer at Lockheed Martin, while the two were at Harvard Business School. The eventual co-founders realized they had each struggled with software during complex hardware development projects, and set out to build the tools they wished they’d had.

At the heart of Nominal’s platform is a unified database that can connect and organize hundreds of data sources in real time. Nominal’s system allows engineers to search through or visualize that information, helping them spot trends, catch critical events, and investigate anomalies — what Nominal’s team describes as learning the rules governing complex systems.

“We’re trying to get answers to engineers so they understand what’s happening and can keep projects moving forward,” says Strauss. “Testing and validating these systems are fundamental bottlenecks for hardware progress. Our platform helps engineers answer questions like, ‘When we made a 30-degree turn at 16,000 feet, what happened to the engine’s temperature, and how does that compare to what happened yesterday?’”
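The pattern Strauss describes, aligning multi-rate telemetry on a shared timeline and then querying it, can be sketched in a few lines. The example below is a minimal illustration assuming pandas and invented column names; it is not Nominal’s actual API or data model.

```python
import pandas as pd

# Hypothetical telemetry: a low-rate attitude stream and an
# engine-temperature stream sampled on a different clock.
attitude = pd.DataFrame({
    "t": pd.to_datetime(["2025-01-01 12:00:00", "2025-01-01 12:00:01",
                         "2025-01-01 12:00:02"]),
    "heading_deg": [0.0, 15.0, 30.0],
    "altitude_ft": [16000, 16000, 16010],
})
engine = pd.DataFrame({
    "t": pd.to_datetime(["2025-01-01 12:00:00.4", "2025-01-01 12:00:01.4",
                         "2025-01-01 12:00:02.4"]),
    "egt_degc": [610.0, 618.0, 640.0],
})

# Align the two sources by nearest timestamp, so each attitude sample
# carries the engine temperature recorded closest to it.
merged = pd.merge_asof(attitude.sort_values("t"), engine.sort_values("t"),
                       on="t", direction="nearest")

# "When we made a 30-degree turn at 16,000 feet, what happened to the
# engine's temperature?" becomes a simple filter over the merged frame.
turn = merged[(merged["heading_deg"] >= 30) & (merged["altitude_ft"] >= 16000)]
print(turn[["t", "egt_degc"]])
```

Doing this once for two streams is easy; doing it continuously across hundreds of sources, with shared visualizations on top, is the harder problem the platform targets.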

By automating tasks like data stitching and visualization, Nominal’s platform helps accelerate post-test analysis and development processes for complex systems. And because the platform is cloud-hosted, engineers can easily share visualizations and other dynamic assets with members of their team as opposed to making static reports, allowing more people in an organization to interact directly with the data.

From satellites to drones, robots to rockets

Nominal recently announced a $75 million Series B funding round, led by Sequoia Capital, to accelerate its growth.

“We’ll use the funds to accelerate product roadmaps for our existing products, launch new products across the hardware test stack, and more than double our team,” says McCord.

Today, aerospace customers are using Nominal’s platform to monitor their assets in orbit. Manufacturers are using Nominal to make sure their components work as expected before they’re integrated into larger systems. Nuclear fusion companies are using Nominal to understand when their parts might fail due to heat.

“The products we’ve built are transferable,” Hoch says. “It doesn’t matter if you’re building a nuclear fusion reactor or a satellite, those teams can benefit from the Nominal tool chain.”

Ultimately, the founders believe the platform helps create better products by enabling a data-driven, iterative design process more commonly seen in the software development industry.

“The concept of continuous integration and development in software revolutionized the industry 20 years ago. Before that, it was common to build software in large, slow batches: developing for months, then testing and releasing all at once,” Strauss explains. “We’re bringing continuous testing to hardware. It’s about constantly creating that feedback loop to improve performance. It’s a new paradigm for how hardware is built. We’ve seen companies like SpaceX do this well to move faster and outpace the competition. Now, that approach is available to everyone.”


MIx helps innovators tackle challenges in national security

Mission Innovation X creates education and research opportunities while facilitating connections between defense agencies and MIT innovators.


Startups and government defense agencies have historically seemed like polar opposites. Startups thrive on speed and risk, while defense agencies are more cautious. Over the past few years, however, things have changed. Many startups are eager to work with these organizations, which are always looking for innovative solutions to their hardest problems.

To help bridge that gap while advancing research along the way, MIT Lecturer Gene Keselman launched MIT’s Mission Innovation X (MIx) along with Sertac Karaman, a professor in the MIT Department of Aeronautics and Astronautics, and Fiona Murray, the William Porter Professor of Entrepreneurship at the MIT Sloan School of Management. MIx develops educational programming, supports research at MIT, and facilitates connections among government organizations, startups, and researchers.

“Startups know how to commercialize their tech, but they don’t necessarily know how to work with the government, and especially how to understand the needs of defense customers,” explains MIx Senior Program Manager Keenan Blatt. “There are a lot of different challenges when it comes to engaging with defense, not only from a procurement cycle and timeline perspective, but also from a culture perspective.”

MIx’s work helps innovators secure crucial early funding while giving defense agencies access to cutting-edge technologies, boosting America’s security capabilities in the process. Through the work, MIx has also become a thought leader in the emerging “dual-use” space, in which researchers and founders make strategic choices to advance technologies that have both civilian and defense applications.

Gene Keselman, the executive director of MIx as well as managing director of MIT’s venture studio Proto Ventures and a colonel in the U.S. Air Force Reserve, believes MIT is uniquely positioned to deliver on MIx’s mission.

“It’s not a coincidence MIx is happening at MIT,” says Keselman, adding that supporting national security “is part of MIT’s ethos.”

A history of service

MIx’s work has deep roots at the Institute.

“MIT has worked with the Department of Defense since at least the 1940s, but really going back to its founding years,” says Karaman, who is also the director of MIT’s Laboratory for Information and Decision Systems (LIDS), a research group with its own long history of working with the government.

“The difference today,” adds Murray, who teaches courses on building deep tech ventures and regional innovation ecosystems and is the vice chair of NATO’s Innovation Fund, “is that defense departments and others looking to support the defense, security, and resilience agenda are looking to several innovation ecosystem stakeholders — universities, startup ventures, and venture capitalists — for solutions, not only to the large prime contractors. We have learned this lesson from Ukraine, but the same ecosystem logic is at the core of our MIx offer.”

MIx was born out of the MIT Innovation Initiative in response to interest Keselman saw from researchers and defense officials in expanding MIT’s work with the defense and global security communities. About seven years ago, he hired Katie Person, who left MIT last year to become a battalion commander, to handle all that interest as a program manager with the initiative. MIx activities, like mentoring and educating founders, began shortly after, and MIx officially launched at MIT in 2021.

“It was a good example of the ways in which MIT responds to its students’ interests and external demand,” Keselman says.

One source of early interest was from startup founders who wanted to know how to work with the defense industry and commercialize technology that could have dual commercial and defense applications. That led the team to launch the Dual Use Ventures course, which helps startup founders and other innovators work with defense agencies. The course has since been offered annually during MIT’s Independent Activities Period (IAP) and tailored for NATO’s Defense Innovation Accelerator for the North Atlantic (DIANA).

Personnel from agencies including U.S. Special Operations Command were also interested in working with MIT students, which led the MIx team to develop course 15.362/6.9160 (Engineering Innovation: Global Security Systems), which is taken each spring by students across MIT and Harvard University.

“There are the government organizations that want to be more innovative and work with startups, and there are startups that want to get access to funding from government and have government as a customer,” Keselman says. “We’re kind of the middle layer, facilitating connections, educating, and partnering on research.”

MIx research activities give student and graduate researchers opportunities to work on pressing problems in the real world, and the MIT community has responded eagerly: More than 150 students applied for MIx’s openings in this summer’s Undergraduate Research Opportunities Program.

"We’re helping push the boundaries of what’s possible and explore the frontiers of technology, but do it in a way that is publishable," says MIx Head Research Scientist A.J. Perez ’13, MEng ’14, PhD ’23. “More broadly, we want to unlock as much support for students and researchers at MIT as possible to work on problems that we know matter to defense agencies.”

Early wins

Some of MIx’s most impactful research so far has come in partnership with startups. For example, MIx helped the startup Picogrid secure a small business grant from the U.S. Air Force to build an early wildfire detection system. As part of the grant, MIT students built a computer vision model for Picogrid’s devices that can detect smoke in the sky, proving the technical feasibility of the system and describing a promising new pathway in the field of machine learning.

In another recent project with the MIT alumni-founded startup Nominal, MIT students helped improve and automate post-flight data analysis for the U.S. Air Force’s Test Pilot School.

MIx’s work connecting MIT’s innovators and the wider innovation ecosystem with defense agencies has already begun to bear fruit, and many members of MIx believe early collaborations are a sign of things to come.

“We haven’t even scratched the surface of the potential for MIx,” says Karaman. “This could be the start of something much bigger.”


LLMs factor in unrelated information when recommending medical treatments

Researchers find nonclinical information in patient messages — like typos, extra white space, and colorful language — reduces the accuracy of an AI model.


A large language model (LLM) deployed to make treatment recommendations can be tripped up by nonclinical information in patient messages, like typos, extra white space, missing gender markers, or the use of uncertain, dramatic, and informal language, according to a study by MIT researchers.

They found that making stylistic or grammatical changes to messages increases the likelihood an LLM will recommend that a patient self-manage their reported health condition rather than come in for an appointment, even when that patient should seek medical care.

Their analysis also revealed that these nonclinical variations in text, which mimic how people really communicate, are more likely to change a model’s treatment recommendations for female patients, resulting in a higher percentage of women who were erroneously advised not to seek medical care, according to human doctors.

This work “is strong evidence that models must be audited before use in health care — which is a setting where they are already in use,” says Marzyeh Ghassemi, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems, and senior author of the study.

These findings indicate that LLMs take nonclinical information into account for clinical decision-making in previously unknown ways. The results bring to light the need for more rigorous studies of LLMs before they are deployed for high-stakes applications like making treatment recommendations, the researchers say.

“These models are often trained and tested on medical exam questions but then used in tasks that are pretty far from that, like evaluating the severity of a clinical case. There is still so much about LLMs that we don’t know,” adds Abinitha Gourabathina, an EECS graduate student and lead author of the study.

They are joined on the paper, which will be presented at the ACM Conference on Fairness, Accountability, and Transparency, by graduate student Eileen Pan and postdoc Walter Gerych.

Mixed messages

Large language models like OpenAI’s GPT-4 are being used to draft clinical notes and triage patient messages in health care facilities around the globe, in an effort to streamline some tasks to help overburdened clinicians.

A growing body of work has explored the clinical reasoning capabilities of LLMs, especially from a fairness point of view, but few studies have evaluated how nonclinical information affects a model’s judgment.

Interested in how gender impacts LLM reasoning, Gourabathina ran experiments where she swapped the gender cues in patient notes. She was surprised that formatting errors in the prompts, like extra white space, caused meaningful changes in the LLM responses.

To explore this problem, the researchers designed a study in which they altered the model’s input data by swapping or removing gender markers, adding colorful or uncertain language, or inserting extra space and typos into patient messages.

Each perturbation was designed to mimic text that might be written by someone in a vulnerable patient population, based on psychosocial research into how people communicate with clinicians.

For instance, extra spaces and typos simulate the writing of patients with limited English proficiency or those with less technological aptitude, and the addition of uncertain language represents patients with health anxiety.
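As a rough illustration of what such perturbations might look like in code, here is a minimal sketch with invented function names and a toy patient message; it is not the researchers’ actual pipeline.

```python
import random
import re

random.seed(0)  # keep the illustration reproducible

def add_extra_whitespace(text: str, p: float = 0.15) -> str:
    # Randomly double some spaces, mimicking formatting noise.
    return " ".join(w + (" " if random.random() < p else "")
                    for w in text.split(" "))

def add_typos(text: str, p: float = 0.1) -> str:
    # Swap two adjacent characters in a few longer words.
    def maybe_swap(w):
        if len(w) > 3 and random.random() < p:
            i = random.randrange(len(w) - 1)
            return w[:i] + w[i + 1] + w[i] + w[i + 2:]
        return w
    return " ".join(maybe_swap(w) for w in text.split(" "))

def remove_gender_markers(text: str) -> str:
    # Replace gendered pronouns with gender-neutral ones.
    for pattern, repl in [(r"\b[Ss]he\b", "they"), (r"\b[Hh]e\b", "they"),
                          (r"\b[Hh]er\b", "their"), (r"\b[Hh]is\b", "their")]:
        text = re.sub(pattern, repl, text)
    return text

def add_uncertain_language(text: str) -> str:
    # Prepend a hedge, mimicking a patient with health anxiety.
    return "I'm not sure, but " + text

note = "She has had a fever and sharp chest pain for two days."
for perturb in (add_extra_whitespace, add_typos,
                remove_gender_markers, add_uncertain_language):
    print(perturb.__name__, "->", perturb(note))
```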

“The medical datasets these models are trained on are usually cleaned and structured, and not a very realistic reflection of the patient population. We wanted to see how these very realistic changes in text could impact downstream use cases,” Gourabathina says.

They used an LLM to create perturbed copies of thousands of patient notes while ensuring the text changes were minimal and preserved all clinical data, such as medications and previous diagnoses. Then they evaluated four LLMs, including the large, commercial model GPT-4 and a smaller LLM built specifically for medical settings.

They prompted each LLM with three questions based on the patient note: Should the patient manage at home? Should the patient come in for a clinic visit? Should a medical resource, like a lab test, be allocated to the patient?

The researchers compared the LLM recommendations to real clinical responses.
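In outline, that evaluation loop might look like the sketch below, where `complete` stands in for any text-in, text-out LLM client and the prompt wording is illustrative rather than the study’s exact protocol.

```python
QUESTIONS = [
    "Should the patient manage this condition at home? Answer yes or no.",
    "Should the patient come in for a clinic visit? Answer yes or no.",
    "Should a medical resource, such as a lab test, be allocated? Answer yes or no.",
]

def triage(note: str, complete) -> dict:
    """Ask all three triage questions about one (possibly perturbed) note."""
    answers = {}
    for q in QUESTIONS:
        prompt = f"Patient message:\n{note}\n\n{q}"
        # Treat any answer starting with "yes" as an affirmative.
        answers[q] = complete(prompt).strip().lower().startswith("yes")
    return answers

# Running triage() on an original note and on its perturbed copies, then
# comparing the answers to clinicians' responses, shows how often a purely
# stylistic change flips the model toward recommending self-management.
```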

Inconsistent recommendations

They saw inconsistencies in treatment recommendations and significant disagreement among the LLMs when they were fed perturbed data. Across the board, the LLMs exhibited a 7 to 9 percent increase in self-management suggestions for all nine types of altered patient messages.

This means LLMs were more likely to recommend that patients not seek medical care when messages contained typos or gender-neutral pronouns, for instance. The use of colorful language, like slang or dramatic expressions, had the biggest impact.

They also found that models made about 7 percent more errors for female patients and were more likely to recommend that female patients self-manage at home, even when the researchers removed all gender cues from the clinical context.

Many of the worst results, like patients told to self-manage when they have a serious medical condition, likely wouldn’t be captured by tests that focus on the models’ overall clinical accuracy.

“In research, we tend to look at aggregated statistics, but there are a lot of things that are lost in translation. We need to look at the direction in which these errors are occurring — not recommending visitation when you should is much more harmful than doing the opposite,” Gourabathina says.

The inconsistencies caused by nonclinical language become even more pronounced in conversational settings where an LLM interacts with a patient, which is a common use case for patient-facing chatbots.

But in follow-up work, the researchers found that these same changes in patient messages don’t affect the accuracy of human clinicians.

“In our follow-up work under review, we further find that large language models are fragile to changes that human clinicians are not,” Ghassemi says. “This is perhaps unsurprising — LLMs were not designed to prioritize patient medical care. LLMs are flexible and performant enough on average that we might think this is a good use case. But we don’t want to optimize a health care system that only works well for patients in specific groups.”

The researchers want to expand on this work by designing natural language perturbations that capture other vulnerable populations and better mimic real messages. They also want to explore how LLMs infer gender from clinical text.