General news from MIT (Massachusetts Institute of Technology)

Here you will find the recent daily general news from MIT (Massachusetts Institute of Technology).

MIT News
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
How climate change will impact outdoor activities in the US

Using the concept of “outdoor days,” a study shows how global warming will affect people’s ability to work or enjoy recreation outdoors.


It can be hard to connect a certain amount of average global warming with one’s everyday experience, so researchers at MIT have devised a different approach to quantifying the direct impact of climate change. Instead of focusing on global averages, they came up with the concept of “outdoor days”: the number of days per year in a given location when the temperature is not too hot or cold to enjoy normal outdoor activities, such as going for a walk, playing sports, working in the garden, or dining outdoors.

In a study published earlier this year, the researchers applied this method to compare the impact of global climate change on different countries around the world, showing that much of the global south would suffer major losses in the number of outdoor days, while some northern countries could see a slight increase. Now, they have applied the same approach to comparing the outcomes for different parts of the United States, dividing the country into nine climatic regions, and finding similar results: Some states, especially Florida and other parts of the Southeast, should see a significant drop in outdoor days, while some, especially in the Northwest, should see a slight increase.

The researchers also looked at correlations between economic activity, such as tourism trends, and changing climate conditions, and examined how numbers of outdoor days could result in significant social and economic impacts. Florida’s economy, for example, is highly dependent on tourism and on people moving there for its pleasant climate; a major drop in days when it is comfortable to spend time outdoors could make the state less of a draw.

The new findings were published this month in the journal Geophysical Research Letters, in a paper by researchers Yeon-Woo Choi and Muhammad Khalifa and professor of civil and environmental engineering Elfatih Eltahir.

“This is something very new in our attempt to understand the impacts of climate change, in addition to the changing extremes,” Choi says. It allows people to see how these global changes may impact them on a very personal level, as opposed to focusing on global temperature changes or on extreme events such as powerful hurricanes or increased wildfires. “To the best of my knowledge, nobody else takes this same approach” in quantifying the local impacts of climate change, he says. “I hope that many others will parallel our approach to better understand how climate may affect our daily lives.”

The study looked at two different climate scenarios — one where maximum efforts are made to curb global emissions of greenhouse gases and one “worst case” scenario where little is done and global warming continues to accelerate. They used these two scenarios with every available global climate model, 32 in all, and the results were broadly consistent across all 32 models.

The reality may lie somewhere in between the two extremes that were modeled, Eltahir suggests. “I don’t think we’re going to act as aggressively” as the low-emissions scenarios suggest, he says, “and we may not be as careless” as the high-emissions scenario. “Maybe the reality will emerge in the middle, toward the end of the century,” he says.

The team looked at the difference in temperatures and other conditions over various ranges of decades. The data already showed some slight differences in outdoor days from the 1961-1990 period compared to 1991-2020. The researchers then compared these most recent 30 years with the last 30 years of this century, as projected by the models, and found much greater differences ahead for some regions. The strongest effects in the modeling were seen in the Southeastern states. “It seems like climate change is going to have a significant impact on the Southeast in terms of reducing the number of outdoor days,” Eltahir says, “with implications for the quality of life of the population, and also for the attractiveness of tourism and for people who want to retire there.”

He adds that “surprisingly, one of the regions that would benefit a little bit is the Northwest.” But the gain there is modest: an increase of about 14 percent in outdoor days projected for the last three decades of this century, compared to the period from 1976 to 2005. The Southwestern U.S., by comparison, faces an average loss of 23 percent of its outdoor days.

The study also digs into the relationship between climate and economic activity by looking at tourism trends from U.S. National Park Service visitation data, and how that aligned with differences in climate conditions. “Accounting for seasonal variations, we find a clear connection between the number of outdoor days and the number of tourist visits in the United States,” Choi says.

For much of the country, there will be little overall change in the total number of annual outdoor days, the study found, but the seasonal pattern of those days could change significantly. While most parts of the country now see the most outdoor days in summertime, that will shift as summers get hotter, and spring and fall will become the preferred seasons for outdoor activity.

In a way, Eltahir says, “what we are talking about that will happen in the future [for most of the country] is already happening in Florida.” There, he says, “the really enjoyable time of year is in the spring and fall, and summer is not the best time of year.”

People’s level of comfort with temperatures varies somewhat among individuals and among regions, so the researchers designed a tool, now freely available online, that allows people to set their own definitions of the lowest and highest temperatures they consider suitable for outdoor activities, and then see what the climate models predict would be the change in the number of outdoor days for their location, using their own standards of comfort. For their study, they used a widely accepted range of 10 degrees Celsius (50 degrees Fahrenheit) to 25 C (77 F), which is the “thermoneutral zone” in which the human body does not require either metabolic heat generation or evaporative cooling to maintain its core temperature — in other words, in that range there is generally no need to either shiver or sweat.
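
To make the threshold concrete, here is a minimal sketch in Python of how an outdoor-day count could be computed from a year of daily temperatures using the 10 C to 25 C range described above. It is an illustration only: the temperature series is toy data rather than climate-model output, and the function name and defaults are assumptions, not the team’s actual tool.

```python
# Illustrative sketch (not the study's code or data): counting "outdoor days"
# in a year of daily temperatures, using the 10-25 C thermoneutral range
# described above. Thresholds are parameters, mirroring how the online tool
# lets users set their own comfort range.
import random

def count_outdoor_days(daily_temps_c, t_low=10.0, t_high=25.0):
    """Number of days whose temperature falls within [t_low, t_high] degrees Celsius."""
    return sum(1 for t in daily_temps_c if t_low <= t <= t_high)

random.seed(0)
baseline_year = [18 + random.gauss(0, 8) for _ in range(365)]   # toy "historical" year
warmer_year = [t + 2.5 for t in baseline_year]                  # same year with 2.5 C of warming added

print(count_outdoor_days(baseline_year), "outdoor days vs.", count_outdoor_days(warmer_year))
```

Changing `t_low` and `t_high` mimics the way the online tool lets users apply their own comfort range before comparing historical and projected years.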

The model mainly focuses on temperature but also allows people to include humidity or precipitation in their definition of what constitutes a comfortable outdoor day. The model could be extended to incorporate other variables such as air quality, but the researchers say temperature tends to be the major determinant of comfort for most people.

Using their software tool, “If you disagree with how we define an outdoor day, you could define one for yourself, and then you’ll see what the impacts of that are on your number of outdoor days and their seasonality,” Eltahir says.

This work was inspired by the realization, he says, that “people’s understanding of climate change is based on the assumption that climate change is something that’s going to happen sometime in the future and going to happen to someone else. It’s not going to impact them directly. And I think that contributes to the fact that we are not doing enough.”

Instead, the concept of outdoor days “brings the concept of climate change home, brings it to personal everyday activities,” he says. “I hope that people will find that useful to bridge that gap, and provide a better understanding and appreciation of the problem. And hopefully that would help lead to sound policies that are based on science, regarding climate change.”

The research was based on work supported by the Community Jameel for Jameel Observatory CREWSnet and Abdul Latif Jameel Water and Food Systems Lab at MIT.


Making it easier to verify an AI model’s responses

By allowing users to clearly see data referenced by a large language model, this tool speeds manual validation to help users spot AI errors.


Despite their impressive capabilities, large language models are far from perfect. These artificial intelligence models sometimes “hallucinate” by generating incorrect or unsupported information in response to a query.

Due to this hallucination problem, an LLM’s responses are often verified by human fact-checkers, especially if a model is deployed in a high-stakes setting like health care or finance. However, validation processes typically require people to read through long documents cited by the model, a task so onerous and error-prone it may prevent some users from deploying generative AI models in the first place.

To help human validators, MIT researchers created a user-friendly system that enables people to verify an LLM’s responses much more quickly. With this tool, called SymGen, an LLM generates responses with citations that point directly to the place in a source document, such as a given cell in a database.

Users hover over highlighted portions of its text response to see data the model used to generate that specific word or phrase. At the same time, the unhighlighted portions show users which phrases need additional attention to check and verify.

“We give people the ability to selectively focus on parts of the text they need to be more worried about. In the end, SymGen can give people higher confidence in a model’s responses because they can easily take a closer look to ensure that the information is verified,” says Shannon Shen, an electrical engineering and computer science graduate student and co-lead author of a paper on SymGen.

Through a user study, Shen and his collaborators found that SymGen sped up verification time by about 20 percent, compared to manual procedures. By making it faster and easier for humans to validate model outputs, SymGen could help people identify errors in LLMs deployed in a variety of real-world situations, from generating clinical notes to summarizing financial market reports.

Shen is joined on the paper by co-lead author and fellow EECS graduate student Lucas Torroba Hennigen; EECS graduate student Aniruddha “Ani” Nrusimha; Bernhard Gapp, president of the Good Data Initiative; and senior authors David Sontag, a professor of EECS, a member of the MIT Jameel Clinic, and the leader of the Clinical Machine Learning Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Yoon Kim, an assistant professor of EECS and a member of CSAIL. The research was recently presented at the Conference on Language Modeling.

Symbolic references

To aid in validation, many LLMs are designed to generate citations, which point to external documents, along with their language-based responses so users can check them. However, these verification systems are usually designed as an afterthought, without considering the effort it takes for people to sift through numerous citations, Shen says.

“Generative AI is intended to reduce the user’s time to complete a task. If you need to spend hours reading through all these documents to verify the model is saying something reasonable, then it’s less helpful to have the generations in practice,” Shen says.

The researchers approached the validation problem from the perspective of the humans who will do the work.

A SymGen user first provides the LLM with data it can reference in its response, such as a table that contains statistics from a basketball game. Then, rather than immediately asking the model to complete a task, like generating a game summary from those data, the researchers perform an intermediate step. They prompt the model to generate its response in a symbolic form.

With this prompt, every time the model wants to cite words in its response, it must write the specific cell from the data table that contains the information it is referencing. For instance, if the model wants to cite the phrase “Portland Trailblazers” in its response, it would replace that text with the cell name in the data table that contains those words.

“Because we have this intermediate step that has the text in a symbolic format, we are able to have really fine-grained references. We can say, for every single span of text in the output, this is exactly where in the data it corresponds to,” Torroba Hennigen says.

SymGen then resolves each reference using a rule-based tool that copies the corresponding text from the data table into the model’s response.

“This way, we know it is a verbatim copy, so we know there will not be any errors in the part of the text that corresponds to the actual data variable,” Shen adds.
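
As a rough illustration of that resolution step, the sketch below shows how placeholders naming table cells can be swapped for verbatim values while recording which span of the final text came from which cell. The placeholder syntax, table contents, and function names here are hypothetical; SymGen’s actual prompt format and implementation may differ.

```python
# Illustrative sketch of the general idea, not SymGen's implementation:
# the model emits placeholders naming cells in the source table, and a
# rule-based step substitutes the verbatim values while tracking provenance.
import re

table = {  # hypothetical source data for a game summary
    "team_home": "Portland Trailblazers",
    "team_away": "Denver Nuggets",
    "score_home": 110,
    "score_away": 102,
}

symbolic_output = "The {team_home} beat the {team_away} {score_home}-{score_away}."

def resolve(symbolic_text, data):
    """Replace {cell} placeholders with verbatim table values, recording provenance."""
    resolved, provenance, cursor = "", [], 0
    for match in re.finditer(r"\{(\w+)\}", symbolic_text):
        resolved += symbolic_text[cursor:match.start()]
        value = str(data[match.group(1)])              # verbatim copy from the table
        provenance.append((len(resolved), len(resolved) + len(value), match.group(1)))
        resolved += value
        cursor = match.end()
    return resolved + symbolic_text[cursor:], provenance

resolved_text, spans = resolve(symbolic_output, table)
print(resolved_text)   # "The Portland Trailblazers beat the Denver Nuggets 110-102."
print(spans)           # e.g., (4, 25, 'team_home') -> a highlightable, table-backed span
```

Because each substituted span carries its provenance, an interface can highlight exactly which words are backed by the source table and which are the model’s own phrasing.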

Streamlining validation

The model can create symbolic responses because of how it is trained. Large language models are fed reams of data from the internet, and some data are recorded in “placeholder format” where codes replace actual values.

When SymGen prompts the model to generate a symbolic response, it uses a similar structure.

“We design the prompt in a specific way to draw on the LLM’s capabilities,” Shen adds.

During a user study, the majority of participants said SymGen made it easier to verify LLM-generated text. They could validate the model’s responses about 20 percent faster than if they used standard methods.

However, SymGen is limited by the quality of the source data. The LLM could cite an incorrect variable, and a human verifier may be none the wiser.

In addition, the user must have source data in a structured format, like a table, to feed into SymGen. Right now, the system only works with tabular data.

Moving forward, the researchers are enhancing SymGen so it can handle arbitrary text and other forms of data. With that capability, it could help validate portions of AI-generated legal document summaries, for instance. They also plan to test SymGen with physicians to study how it could identify errors in AI-generated clinical summaries.

This work is funded, in part, by Liberty Mutual and the MIT Quest for Intelligence Initiative.


How cfDNA testing has changed prenatal care

The noninvasive screening procedure can reduce pregnancy risks and lower costs at the same time, but only when targeted effectively.


The much-touted arrival of “precision medicine” promises tailored technologies that help individuals and may also reduce health care costs. New research shows how pregnancy screening can meet both of these objectives, but the findings also highlight how precision medicine must be matched well with patients to save money.

The study involves cell-free DNA (cfDNA) screenings, a type of blood test that can reveal conditions based on chromosomal variation, such as Down syndrome. For many pregnant women, though not all, cfDNA screenings can be an alternative to amniocentesis or chorionic villus sampling (CVS) — invasive procedures that come with a risk of miscarriage.

In examining how widely cfDNA tests should be used, the study reached a striking conclusion.

“What we find is the highest value for the cfDNA testing comes from people who are high risk, but not extraordinarily high risk,” says Amy Finkelstein, an MIT economist and co-author of a newly published paper detailing the study.

The paper, “Targeting Precision Medicine: Evidence from Prenatal Screening,” appears in the Journal of Political Economy. The co-authors are Peter Conner, an associate professor and senior consultant at Karolinska University Hospital in Sweden; Liran Einav, a professor of economics at Stanford University; Finkelstein, the John and Jennie S. MacDonald Professor of Economics at MIT; and Petra Persson, an assistant professor of economics at Stanford University.

“There is a lot of hope attached to precision medicine,” Persson says. “We can do a lot of new things and tailor health care treatments to patients, which holds a lot of promise. In this paper, we highlight that while this is all true, there are also significant costs in the personalization of medicine. As a society, we may want to examine how to use these technologies while keeping an eye on health care costs.”

Measuring the benefit to “middle-risk” patients

To conduct the study, the research team looked at the introduction of cfDNA screening in Sweden, during the period from 2011 to 2019, with data covering over 230,000 pregnancies. As it happens, there were also regional discrepancies in the extent to which cfDNA screenings were covered by Swedish health care, for patients not already committed to having invasive testing. Some regions covered cfDNA testing quite widely, for all patients with a “moderate” assessed risk or higher; other regions, by contrast, restricted coverage to a subset of patients within that group with elevated risk profiles. This provided variation the researchers could use when conducting their analysis.

With the most generous coverage of cfDNA testing, the procedure was used by 86 percent of patients; with more targeted coverage, that figure dropped to about 33 percent. In both cases, the amount of invasive testing, including amniocentesis, dropped significantly, to about 5 percent. (The cfDNA screenings are very informative but, unlike invasive testing, not fully conclusive, so some pregnant women will opt for a follow-up procedure.)

Both approaches, then, yielded similar reductions in the rate of invasive testing. But due to the costs of cfDNA tests, the economic implications are quite different. Introducing wide coverage of cfDNA tests would raise overall medical costs by about $250 per pregnancy, the study estimates. In contrast, introducing cfDNA with more targeted coverage yields a reduction of about $89 per patient.
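
A back-of-the-envelope sketch shows how this arithmetic can cut both ways. The unit prices and the pre-policy rate of invasive testing below are hypothetical placeholders, chosen only so the toy calculation lands near the reported per-pregnancy figures; they are not values from the study.

```python
# Back-of-the-envelope sketch of the cost arithmetic described above.
# The unit prices and pre-policy invasive-testing rate are HYPOTHETICAL
# placeholders, not the study's figures; only the uptake rates (86% vs. 33%
# cfDNA use, ~5% invasive testing under either policy) come from the article.

def net_cost_per_pregnancy(cfdna_uptake, invasive_before, invasive_after,
                           cfdna_price, invasive_price):
    added_screening = cfdna_uptake * cfdna_price                      # new spending on cfDNA tests
    averted_invasive = (invasive_before - invasive_after) * invasive_price  # invasive tests avoided
    return added_screening - averted_invasive

CFDNA_PRICE, INVASIVE_PRICE = 640, 3000          # hypothetical unit costs (USD)
INVASIVE_BEFORE, INVASIVE_AFTER = 0.15, 0.05     # assumed pre-policy rate; ~5% after

wide = net_cost_per_pregnancy(0.86, INVASIVE_BEFORE, INVASIVE_AFTER, CFDNA_PRICE, INVASIVE_PRICE)
targeted = net_cost_per_pregnancy(0.33, INVASIVE_BEFORE, INVASIVE_AFTER, CFDNA_PRICE, INVASIVE_PRICE)
print(f"wide coverage: {wide:+.0f} USD per pregnancy; targeted: {targeted:+.0f} USD")
# -> roughly +250 and -89 with these toy inputs, matching the direction of the study's estimates
```

The point of the toy numbers is the mechanism: both policies avert roughly the same amount of invasive testing, so the deciding factor is how many cfDNA tests are purchased.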

Ultimately, the larger dynamics are clear. Pregnant women who have the highest risk of bearing children with chromosome-based conditions are likely to still opt for an invasive test like amniocentesis. Those with virtually no risk may not even have cfDNA tests done. For a group in between, cfDNA tests have a substantial medical value, relieving them of the need for an invasive test. And narrowing the group of patients getting cfDNA tests lowers the overall cost.

“People who are very high-risk are often going to use the invasive test, which is definitive, regardless of whether they have a cfDNA screen or not,” Finkelstein says. “But for middle-risk people, covering cfDNA produces a big increase in cfDNA testing, and that produces a big decline in the rates of the riskier, and more expensive, invasive test.”

How precise?

In turn, the study’s findings raise a larger point. Precision medicine, in almost any form, will add expenses to medical care. Therefore, developing some precision about who receives it is significant.

“The allure of precision medicine is targeting people who need it, so we don’t do expensive and potentially unpleasant tests and treatments of people who don’t need them,” Finkelstein says. “Which sounds great, but it kicks the can down the road. You still need to figure out who is a candidate for which kind of precision medicine.”

Therefore, in medicine, instead of just throwing technology at the problem, we may want to aim carefully, where evidence warrants it. Overall, that means good precision medicine builds on good policy analysis, not just good technology.

“Sometimes when we think medical technology has an impact, we simply ask if the technology raises or lowers health care costs, or if it makes patients healthier,” Persson observes. “An important insight from our work, I think, is that the answers are not just about the technology. It’s about the pairing of technology and policy because policy is going to influence the impact of technology on health care and patient outcomes. We see this clearly in our study.”

In this case, finding comparable patient outcomes with narrower cfDNA screenings suggests one way of targeting diagnostic procedures. And across many possible medical situations, finding the subset of people for whom a technology is most likely to yield new and actionable information seems a promising objective.

“The benefit is not just an innate feature of the testing,” Finkelstein says. “With diagnostic technologies, the value of information is greatest when you’re neither obviously appropriate nor inappropriate for the next treatment. It’s really the non-monotone value of information that’s interesting.”

The study was supported, in part, by the U.S. National Science Foundation.


A new framework to efficiently screen drugs

Novel method to scale phenotypic drug screening drastically reduces the number of input samples, costs, and labor required to execute a screen.


Some of the most widely used drugs today, including penicillin, were discovered through a process called phenotypic screening. Using this method, scientists are essentially throwing drugs at a problem — for example, when attempting to stop bacterial growth or fix a cellular defect — and then observing what happens next, without necessarily first knowing how the drug works. Perhaps surprisingly, historical data show that this approach is better at yielding approved medicines than investigations that more narrowly focus on specific molecular targets.

But many scientists believe that properly setting up the problem is the true key to success. Certain microbial infections or genetic disorders caused by single mutations are much simpler to prototype than complex diseases like cancer, which require intricate biological models that are far harder to make or acquire. The result is a bottleneck in the number of drugs that can be tested, and thus in the usefulness of phenotypic screening.

Now, a team of scientists led by the Shalek Lab at MIT has developed a promising new way to apply phenotypic screening at scale. Their method allows researchers to apply multiple drugs to a biological problem at once, and then computationally work backward to figure out the individual effects of each. For instance, when the team applied this method to models of pancreatic cancer and human immune cells, they were able to uncover surprising new biological insights, while also reducing cost and sample requirements several-fold — solving a few problems in scientific research at once.

Zev Gartner, a professor in pharmaceutical chemistry at the University of California at San Francisco, says this new method has great potential. “I think if there is a strong phenotype one is interested in, this will be a very powerful approach,” Gartner says.

The research was published Oct. 8 in Nature Biotechnology. It was led by Ivy Liu, Walaa Kattan, Benjamin Mead, Conner Kummerlowe, and Alex K. Shalek, the director of the Institute for Medical Engineering and Science (IMES) and the Health Innovation Hub at MIT, as well as the J. W. Kieckhefer Professor in IMES and the Department of Chemistry. It was supported by the National Institutes of Health and the Bill and Melinda Gates Foundation.

A “crazy” way to increase scale

Technological advances over the past decade have revolutionized our understanding of the inner lives of individual cells, setting the stage for richer phenotypic screens. However, many challenges remain.

For one, biologically representative models like organoids and primary tissues are only available in limited quantities. The most informative tests, like single-cell RNA sequencing, are also expensive, time-consuming, and labor-intensive.

That’s why the team decided to test out the “bold, maybe even crazy idea” to mix everything together, says Liu, a PhD student in the MIT Computational and Systems Biology program. In other words, they chose to combine many perturbations — things like drugs, chemical molecules, or biological compounds made by cells — into one single concoction, and then try to decipher their individual effects afterward.

They began testing their workflow by making different combinations of 316 U.S. Food and Drug Administration-approved drugs. “It’s a high bar: basically, the worst-case scenario,” says Liu. “Since every drug is known to have a strong effect, the signals could have been impossible to disentangle.”

These random combinations ranged from three to 80 drugs per pool, each of which was applied to lab-grown cells. The team then tried to understand the effects of each individual drug using a linear computational model.

It was a success. When compared with traditional tests for each individual drug, the new method yielded comparable results, successfully finding the strongest drugs and their respective effects in each pool, at a fraction of the cost, samples, and effort.
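
The sketch below illustrates the deconvolution idea on synthetic data: random pools of drugs produce aggregate readouts, and a sparse linear regression attributes the signal back to individual drugs. The lasso used here stands in for the paper’s linear computational model, and all names, sizes, and effect values are illustrative assumptions rather than the team’s pipeline.

```python
# Minimal toy sketch of pooled-screen deconvolution (not the paper's pipeline):
# drugs are assigned to random pools, each pool yields one aggregate phenotype
# readout, and a sparse linear regression attributes the signal to individual drugs.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_drugs, n_pools = 316, 100

# Planted ground truth: a handful of drugs with strong effects, the rest inert.
true_effect = np.zeros(n_drugs)
hits = rng.choice(n_drugs, 8, replace=False)
true_effect[hits] = rng.normal(3.0, 0.5, 8)

# Random pools of 3 to 80 drugs each (the range used in the study).
design = np.zeros((n_pools, n_drugs))
for i in range(n_pools):
    members = rng.choice(n_drugs, rng.integers(3, 81), replace=False)
    design[i, members] = 1.0

# Each pool's readout is the summed effect of its members plus measurement noise.
readout = design @ true_effect + rng.normal(0, 0.5, n_pools)

# Deconvolution: fit per-drug coefficients; strong, sparse effects should
# dominate the largest estimated coefficients.
model = LassoCV(cv=5).fit(design, readout)
ranked = np.argsort(-np.abs(model.coef_))[:8]
print("planted hits: ", sorted(hits.tolist()))
print("top estimates:", sorted(ranked.tolist()))
```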

Putting it into practice

To test the method’s applicability to address real-world health challenges, the team then approached two problems that were previously unimaginable with past phenotypic screening techniques.

The first test focused on pancreatic ductal adenocarcinoma (PDAC), one of the deadliest types of cancer. In PDAC, many types of signals come from the surrounding cells in the tumor's environment. These signals can influence how the tumor progresses and responds to treatments. So, the team wanted to identify the most important ones.

Using their new method to pool different signals in parallel, they found several surprise candidates. “We never could have predicted some of our hits,” says Shalek. These included two previously overlooked cytokines that actually could predict survival outcomes of patients with PDAC in public cancer data sets.

The second test looked at the effects of 90 drugs on adjusting the immune system’s function. These drugs were applied to fresh human blood cells, which contain a complex mix of different types of immune cells. Using their new method and single-cell RNA-sequencing, the team could not only test a large library of drugs, but also separate the drugs’ effects out for each type of cell. This enabled the team to understand how each drug might work in a more complex tissue, and then select the best one for the job.

“We might say there’s a defect in a T cell, so we’re going to add this drug, but we never think about, well, what does that drug do to all of the other cells in the tissue?” says Shalek. “We now have a way to gather this information, so that we can begin to pick drugs to maximize on-target effects and minimize side effects.”

Together, these experiments also showed Shalek the need to build better tools and datasets for creating hypotheses about potential treatments. “The complexity and lack of predictability for the responses we saw tells me that we likely are not finding the right, or most effective, drugs in many instances,” says Shalek.

Reducing barriers and improving lives

Although the current compression technique can identify the perturbations with the greatest effects, it’s still unable to perfectly resolve the effects of each one. Therefore, the team recommends that it act as a supplement to support additional screening. “Traditional tests that examine the top hits should follow,” Liu says.

Importantly, however, the new compression framework drastically reduces the number of input samples, costs, and labor required to execute a screen. With fewer barriers in play, it marks an exciting advance for understanding complex responses in different cells and building new models for precision medicine.

Shalek says, “This is really an incredible approach that opens up the kinds of things that we can do to find the right targets, or the right drugs, to use to improve lives for patients.”


How is the world watching the 2024 US election?

At a recent Starr Forum, scholars gathered to discuss the global perception of the upcoming presidential election and the influence of American politics.


No matter the outcome, the results of the 2024 United States presidential election are certain to have global impact. How are citizens and leaders in other parts of the world viewing this election? What’s at stake for their countries and regions?

This was the focus of “The 2024 US Presidential Election: The World is Watching,” a Starr Forum held earlier this month on the MIT campus.

The Starr Forum is a public event series hosted by MIT’s Center for International Studies (CIS) and focused on leading issues of global interest. The event was moderated by Evan Lieberman, director of CIS and the Total Professor of Political Science and Contemporary Africa.

Experts in African, Asian, European, and Latin American politics assembled to share ideas with one another and the audience.

Each offered informed commentary on their respective regions, situating their observations within several contexts including the countries’ style of government, residents’ perceptions of American democratic norms, and America’s stature in the eyes of those countries’ populations.

Perceptions of U.S. politics from across the globe

Katrina Burgess, professor of political economy at Tufts University and the director of the Henry J. Leir Institute of Migration and Human Security, sought to distinguish the multiple political identities of members of the Latin American diaspora in America and their perceptions of America’s relationship with their countries.

“American democracy is no longer perceived as a standard bearer,” Burgess said. “While members of these communities see advantages in aligning themselves with one of the presidential candidates because of positions on economic relations, immigration, and border security, others have deeply-held views on fossil fuels and increased access to sustainable energy solutions.”

Prerna Singh, Brown University’s Mahatma Gandhi Professor of Political Science and International Studies, spoke about India’s status as the world’s largest democracy and described a country moving away from democratic norms.

“Indian leaders don’t confer with the press,” she said. “Indian leaders don’t debate like Americans.”

The ethnically and linguistically diverse India, Singh noted, has elected several women to its highest government posts, while the United States has yet to elect one. She described a brand of “exclusionary nationalism” that threatened to move India away from democracy and toward something like authoritarian rule. 

John Githongo, the Robert E. Wilhelm Fellow at CIS for 2024-25, shared his findings on African countries’ views of the 2024 election.

“America’s soft power infrastructure in Africa is crumbling,” said Githongo, a Kenyan native. “Chinese investment in Africa is up significantly and China is seen by many as an ideal political and economic partner.”

Youth-led protests in Kenya, Githongo noted, occurred in response to a failure of promised democratic reforms. He cautioned against a potential return to a pre-Cold War posture in Africa, noting that the Biden administration was the first in some time to attempt to reestablish economic and political ties with African countries.

Daniel Ziblatt, the Eaton Professor of Government at Harvard University and the director of the Minda de Gunzburg Center for European Studies, described shifting political winds in Europe that appear similar to increased right-wing extremism and a brand of populist agitation being observed in America.

“We see the rise of the radical, antidemocratic right in Europe and it looks like shifts we’ve observed in the U.S.,” he noted. “Trump supporters in Germany, Poland, and Hungary are increasingly vocal.”

Ziblatt acknowledged the divisions in the historical transatlantic relationship between Europe and America as symptoms of broader challenges. Russia’s invasion of Ukraine, energy supply issues, and national security apparatuses dependent on American support may continue to cause political ripples, he added.

Does America still have global influence?

Following each of their presentations, the guest speakers engaged in a conversation, taking questions from the audience. There was agreement among panelists that there’s less investment globally in the outcome of the U.S. election than may have been observed in past elections.

Singh noted that, from the perspective of the Indian media, India has bigger fish to fry.

Panelists diverged, however, when asked about the rise of political polarization and its connection with behaviors observed in American circles.

“This trend is global,” Burgess asserted. “There’s no causal relationship between American phenomena and other countries’ perceptions.”

“I think they’re learning from each other,” Ziblatt countered when asked about extremist elements in America and Europe. “There’s power in saying outrageous things.”

Githongo asserted a kind of “trickle-down” was at work in some African countries.

“Countries with right-leaning governments see those inclinations make their way to organizations like evangelical Christians,” he said. “Their influence mirrors the rise of right-wing ideology in other African countries and in America.”

Singh likened the continued splintering of American audiences to India’s caste system.

“I think where caste comes in is with the Indian diaspora,” she said. “Indian-American business and tech leaders tend to hail from high castes.” These leaders, she said, have outsized influence in their American communities and in India.


Astronomers detect ancient lonely quasars with murky origins

The quasars appear to have few cosmic neighbors, raising questions about how they first emerged more than 13 billion years ago.


A quasar is the extremely bright core of a galaxy that hosts an active supermassive black hole at its center. As the black hole draws in surrounding gas and dust, it blasts out an enormous amount of energy, making quasars some of the brightest objects in the universe. Quasars have been observed as early as a few hundred million years after the Big Bang, and it’s been a mystery as to how these objects could have grown so bright and massive in such a short amount of cosmic time.

Scientists have proposed that the earliest quasars sprang from overly dense regions of primordial matter, which would also have produced many smaller galaxies in the quasars’ environment. But in a new MIT-led study, astronomers observed some ancient quasars that appear to be surprisingly alone in the early universe.

The astronomers used NASA’s James Webb Space Telescope (JWST) to peer back in time, more than 13 billion years, to study the cosmic surroundings of five known ancient quasars. They found a surprising variety in their neighborhoods, or “quasar fields.” While some quasars reside in very crowded fields with more than 50 neighboring galaxies, as all models predict, the remaining quasars appear to drift in voids, with only a few stray galaxies in their vicinity.

These lonely quasars are challenging physicists’ understanding of how such luminous objects could have formed so early on in the universe, without a significant source of surrounding matter to fuel their black hole growth.

“Contrary to previous belief, we find on average, these quasars are not necessarily in those highest-density regions of the early universe. Some of them seem to be sitting in the middle of nowhere,” says Anna-Christina Eilers, assistant professor of physics at MIT. “It’s difficult to explain how these quasars could have grown so big if they appear to have nothing to feed from.”

There is a possibility that these quasars may not be as solitary as they appear, but are instead surrounded by galaxies that are heavily shrouded in dust and therefore hidden from view. Eilers and her colleagues hope to tune their observations to try and see through any such cosmic dust, in order to understand how quasars grew so big, so fast, in the early universe.

Eilers and her colleagues report their findings in a paper appearing today in the Astrophysical Journal. The MIT co-authors include postdocs Rohan Naidu and Minghao Yue; Robert Simcoe, the Francis Friedman Professor of Physics and director of MIT’s Kavli Institute for Astrophysics and Space Research; and collaborators from institutions including Leiden University, the University of California at Santa Barbara, ETH Zurich, and elsewhere.

Galactic neighbors

The five newly observed quasars are among the oldest quasars observed to date. More than 13 billion years old, the objects are thought to have formed between 600 and 700 million years after the Big Bang. The supermassive black holes powering the quasars are a billion times more massive than the sun, and more than a trillion times brighter. Due to their extreme luminosity, the light from each quasar has traveled for most of the age of the universe, far enough to reach JWST’s highly sensitive detectors today.

“It’s just phenomenal that we now have a telescope that can capture light from 13 billion years ago in so much detail,” Eilers says. “For the first time, JWST enabled us to look at the environment of these quasars, where they grew up, and what their neighborhood was like.”

The team analyzed images of the five ancient quasars taken by JWST between August 2022 and June 2023. The observations of each quasar comprised multiple “mosaic” images, or partial views of the quasar’s field, which the team effectively stitched together to produce a complete picture of each quasar’s surrounding neighborhood.

The telescope also took measurements of light in multiple wavelengths across each quasar’s field, which the team then processed to determine whether a given object in the field was light from a neighboring galaxy, and how far a galaxy is from the much more luminous central quasar.

“We found that the only difference between these five quasars is that their environments look so different,” Eilers says. “For instance, one quasar has almost 50 galaxies around it, while another has just two. And both quasars are within the same size, volume, brightness, and time of the universe. That was really surprising to see.”

Growth spurts

The disparity in quasar fields introduces a kink in the standard picture of black hole growth and galaxy formation. According to physicists’ best understanding of how the first objects in the universe emerged, a cosmic web of dark matter should have set the course. Dark matter is an as-yet unknown form of matter that interacts with its surroundings only through gravity.

Shortly after the Big Bang, the early universe is thought to have formed filaments of dark matter that acted as a sort of gravitational road, attracting gas and dust along their tendrils. In overly dense regions of this web, matter would have accumulated to form more massive objects. And the brightest, most massive early objects, such as quasars, would have formed in the web’s highest-density regions, which would have also churned out many more, smaller galaxies.

“The cosmic web of dark matter is a solid prediction of our cosmological model of the Universe, and it can be described in detail using numerical simulations,” says co-author Elia Pizzati, a graduate student at Leiden University. “By comparing our observations to these simulations, we can determine where in the cosmic web quasars are located.”

Scientists estimate that quasars would have had to grow continuously with very high accretion rates in order to reach the extreme mass and luminosities at the times that astronomers have observed them, fewer than 1 billion years after the Big Bang.

“The main question we’re trying to answer is, how do these billion-solar-mass black holes form at a time when the universe is still really, really young? It’s still in its infancy,” Eilers says.

The team’s findings may raise more questions than answers. The “lonely” quasars appear to live in relatively empty regions of space. If physicists’ cosmological models are correct, these barren regions signify very little dark matter, or starting material for brewing up stars and galaxies. How, then, did extremely bright and massive quasars come to be?

“Our results show that there’s still a significant piece of the puzzle missing of how these supermassive black holes grow,” Eilers says. “If there’s not enough material around for some quasars to be able to grow continuously, that means there must be some other way that they can grow, that we have yet to figure out.”

This research was supported, in part, by the European Research Council. 


Using spatial learning to transform math and science education

PrismsVR, founded by Anurupa Ganguly ’07, MNG ’09, takes students to virtual worlds to learn through experiences and movement.


Legend has it that Isaac Newton was sitting under a tree when an apple fell on his head, sparking a bout of scientific thinking that led to the theory of gravity. It’s one of the most famous stories in science, perhaps because it shows the power of simple human experiences to revolutionize our understanding of the world around us.

About five years ago, Anurupa Ganguly ’07, MNG ’09 noticed kids don’t learn that way in schools.

“Students should learn how to use language, notation, and eventually shorthand representation of thoughts from deeply human experiences,” Ganguly says.

That’s the idea behind PrismsVR. The company offers virtual reality experiences for students, using physical learning to teach core concepts in math and science.

The platform can radically change the dynamics of the classroom, encouraging self-paced, student-led learning, where the teacher is focused on asking the right questions and sparking curiosity.

Instead of learning biology with a pen and paper, students become biomedical researchers designing a tissue regeneration therapy. Instead of learning trigonometry in a textbook, students become rural architects designing a new school building.

“We’re building a whole new learning platform, methodology, and tech infrastructure that allows students to experience problems in the first person, not through abstractions or 2D screens, and then go from that experience to ascribe meaning, language, and build up to equations, procedures, and other nomenclature,” Ganguly explains.

Today PrismsVR has been used by about 300,000 students across 35 states. The company’s approach was shown to boost algebra test scores by 11 percent in one study, with larger, multistate studies currently underway through funding from the Gates Foundation.

“Education has been in desperate need of real reform for many years,” Ganguly says. “But what’s happened is we’ve just been digitizing old, antiquated teaching methods instead. We would take a lecture and make it a video, or take a worksheet and make it a web app. I think districts see us taking a more aspirational approach, with multimodal interaction and concepts at the center of learning design, and are collaborating with us to scale that instead. We want to get this to every single public school student across the U.S., and then we’re going into community colleges, higher ed, and international.”

A new paradigm for learning

Ganguly was an undergraduate and master’s student in MIT’s Department of Electrical Engineering and Computer Science. When she began as an undergrad in 2003, she estimates that women made up about 30 percent of her class in the department, but as she advanced in her studies, that number seemed to dwindle.

“It was a disappearing act for some students, and I became inspired to understand what’s happening at the K-12 levels that set some students up for success and led to fragile foundations for others,” Ganguly recalls.

As she neared the end of her graduate program in 2009, Ganguly planned to move to California to take an engineering job. But as she was walking through MIT’s Infinite Corridor one day, a sign caught her eye. It was for Teach for America, which had collaborated with MIT to recruit students into the field of teaching, particularly to serve high-need and high-poverty students.

“I was inspired by that idea that I could use my education, engineering background, and disciplined systems thinking to think through systemic change in the public sector,” says Ganguly, who became a high school physics and algebra teacher in the Boston Public Schools.

Ganguly soon left the classroom and became director of math for the district, where she oversaw curriculum and teacher upskilling. From there, Ganguly went to New York City Public Schools, where she also supported curriculum development, trying to relate abstract math concepts to students’ experiences in the real world.

“As I began to travel from school to school, working with millions of kids, I became convinced that we don’t have the tools to solve the problem I thought about at MIT — of truly leveling the playing field and building enduring identities in the mathematical sciences,” Ganguly says.

The problem as Ganguly sees it is that students’ world is 3D, complex, and multimodal. Yet most lessons are confined to paper or tablets. For other things in life, students learn through their complex experiences: through their senses, movement, and emotions. Why should math and science be any different? In 2018, the Oculus Quest VR headset was released, and Ganguly thought she had found a more effective learning medium to scale how we learn.

But starting an education company based on virtual reality at the time was audacious. The 128-gigabyte Quest was priced at $500, and there were no standards-based VR curricula or standalone VR headsets in U.S. K-12 schools.

“Investors weren’t going to touch this,” Ganguly jokes.

Luckily, Ganguly received a small amount of funding from the National Science Foundation to build her first prototype. Ganguly started with Algebra 1; performance in this class is one of the top predictors of lifetime wages but has shown a stubbornly persistent achievement gap.

Her first module, which she built during the pandemic, places students in a food hall when a sudden announcement from the mayor rings out. There’s an alarming growth of an unknown virus in the area. The students get the power to travel back in time to see how the virus is spreading, from one person’s sneeze to many people’s behaviors in a demonstration of multiplicative growth.

The people turn to dots in a simulation as the journey moves to interactive, tactile data visualization, and the students are charged with figuring out how many weeks until the hospitals run out of capacity. Once the learning design for VR was established, Ganguly continued to build experiences across the curriculum in geometry, algebra II and III, biology, chemistry, and middle school subjects. Today Prisms covers all math and science subjects in grades seven to eleven, and the company is currently building out calculus, data science, and statistics for upper and postsecondary school. By the fall of 2025, Prisms will have evergreen content up to grade level 14.

Following the experiences, students gather in small groups to reflect on the lessons and write summaries. As students go through their virtual experiences, teachers have a web dashboard to monitor each child’s progress to support and intervene where needed.

“With our solution, the role of the teacher is to be Socrates and to ask high-quality questions, not deliver knowledge,” Ganguly says.

As a solo founder, Ganguly says support from MIT’s Venture Mentoring Service, which offers members of the MIT community startup guidance in the form of “board meetings” led by successful entrepreneurs, was crucial.

“The MIT founder community is different,” Ganguly says. “We’re often technical founders, building for ourselves, and we build our company’s first product. Moving from product to your go-to-market strategy and hiring is a unique journey for product-minded founders.”

From textbooks to experiences

A few years ago, Ganguly’s team was leading a classroom coaching session in a Virginia school district when a teacher told her about a student named Silas.

“The teacher was saying, ‘Silas never does anything, he just sits in the back of class,’” Ganguly recalls. “I’ve seen this like clockwork, so we just said, ‘Let’s give Silas a fresh shot and see what we can do.’ Lo and behold, Silas was the first one to finish the module and write a full synthesis report. The teacher told me that was the first time Silas has turned in an assignment with everything filled in.”

Ganguly says it’s one of thousands of anecdotes she has.

“A lot of students feel shut out of the modern math classroom because of our stubborn approach of drill and kill,” Ganguly says. “Students want to learn through great stories. They want to help people. They want to be empathetic. They want their math education to matter.”

Ganguly sees PrismsVR as a fundamentally new way for students to learn no matter where they are.

“We intend to become the next textbook,” Ganguly says. “The next textbooks will be spatial and experiential.”


MIT linguist Irene Heim shares Schock Prize in Logic and Philosophy

The professor emerita was recognized for her work on natural language interpretation and linguistic expression.


Linguist Irene Heim, professor emerita in MIT’s Department of Linguistics and Philosophy, has been named a co-recipient of the 2024 Rolf Schock Prize in Logic and Philosophy.

Heim shares the award with Hans Kamp, a professor of formal logic and philosophy of language at the University of Stuttgart in Germany. Heim and Kamp are being recognized for their independent work on the “conception and early development of dynamic semantics for natural language.”

The Schock Prize in Logic and Philosophy, sometimes referred to as the Nobel Prize of philosophy, is awarded every three years by the Schock Foundation to distinguished international recipients proposed by the Royal Swedish Academy of Sciences. A prize ceremony and symposium will be held at the Royal Academy of Fine Arts in Stockholm Nov. 11-12. MIT will host a separate event on campus celebrating Heim’s achievement on Dec. 7.

A press release from the Royal Swedish Academy of Sciences explains more about the research for which Heim and Kamp were recognized:

“Natural languages are highly context-dependent — how a sentence is interpreted often depends on the situation, but also on what has been uttered before. In one type of case, a pronoun depends on an earlier phrase in a separate clause. In the mid-1970s, some constructions of this type posed a hard problem for formal semantic theory.

“Around 1980, Hans Kamp and Irene Heim each separately developed similar solutions to this problem. Their theories brought far-reaching changes in the field. Both introduced a new level of representation between the linguistic expression and its worldly interpretation and, in both, this level has a new type of linguistic meaning. Instead of the traditional idea that a clause describes a worldly condition, meaning at this level consists in the way it contributes to updating information. Based on these fundamentally new ideas, the theories provide adequate interpretations of the problematic constructions.”

This is the first time the prize has been awarded for work done in linguistics. The work has had a transformative effect on three major subfields of linguistics: the study of linguistic mental representations (syntax), the study of their logical properties (semantics), and the study of the conditions on the use of linguistic expressions in conversation (pragmatics). Heim has published dozens of texts on the semantics and syntax of language.

“I am struck again and again by how our field has progressed in the 50 years since I first entered it and the 40 years since my co-awardee and I contributed the work which won the award,” Heim said. “Those old contributions now look kind of simple-minded, in some spots even confused. But — like other influential ideas in this half-century of linguistics and philosophy of language — they have been influential not just because many people ran with them, but more so because many people picked them apart and explored ever more sophisticated and satisfying alternatives to them.”

Heim, a recognized leader in the fields of syntax and semantics, was born in Germany in 1954. She studied at the University of Konstanz and the Ludwig Maximilian University of Munich, where she earned an MA in philosophy while minoring in linguistics and mathematics. She later earned a PhD in linguistics at the University of Massachusetts at Amherst. She previously taught at the University of Texas at Austin and the University of California Los Angeles before joining MIT’s faculty in 1989. 

“I am proud to think of myself as Irene’s student,” says Danny Fox, linguistics section head and the Anshen-Chomsky Professor of Language and Thought. “Irene’s work has served as the foundation of so many areas of our field, and she is rightfully famous for it. But her influence goes even deeper than that. She has taught generations of researchers, primarily by example, how to think anew about entrenched ideas (including her own contributions), how much there is to gain from careful analysis of theoretical proposals, and at the same time, how not to entirely neglect our ambitious aspirations to move beyond this careful work and think about when it might be appropriate to take substantive risks.”


Combining next-token prediction and video diffusion in computer vision and robotics

A new method can train a neural network to sort corrupted data while anticipating next steps. It can make flexible plans for robots, generate high-quality video, and help AI agents navigate digital environments.


In the current AI zeitgeist, sequence models have skyrocketed in popularity for their ability to analyze data and predict what to do next. For instance, you’ve likely used next-token prediction models like ChatGPT, which anticipate each word (token) in a sequence to form answers to users’ queries. There are also full-sequence diffusion models like Sora, which convert words into dazzling, realistic visuals by successively “denoising” an entire video sequence. 

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have proposed a simple change to the diffusion training scheme that makes this sequence denoising considerably more flexible.

When applied to fields like computer vision and robotics, the next-token and full-sequence diffusion models have capability trade-offs. Next-token models can spit out sequences that vary in length. However, they make these generations while being unaware of desirable states in the far future — such as steering their sequence generation toward a certain goal 10 tokens away — and thus require additional mechanisms for long-horizon (long-term) planning. Diffusion models can perform such future-conditioned sampling, but lack the ability of next-token models to generate variable-length sequences.

Researchers from CSAIL want to combine the strengths of both models, so they created a sequence model training technique called “Diffusion Forcing.” The name comes from “Teacher Forcing,” the conventional training scheme that breaks down full sequence generation into the smaller, easier steps of next-token generation (much like a good teacher simplifying a complex concept).

Diffusion Forcing found common ground between diffusion models and teacher forcing: They both use training schemes that involve predicting masked (noisy) tokens from unmasked ones. In the case of diffusion models, they gradually add noise to data, which can be viewed as fractional masking. The MIT researchers’ Diffusion Forcing method trains neural networks to cleanse a collection of tokens, removing different amounts of noise within each one while simultaneously predicting the next few tokens. The result: a flexible, reliable sequence model that resulted in higher-quality artificial videos and more precise decision-making for robots and AI agents.

By sorting through noisy data and reliably predicting the next steps in a task, Diffusion Forcing can aid a robot in ignoring visual distractions to complete manipulation tasks. It can also generate stable and consistent video sequences and even guide an AI agent through digital mazes. This method could potentially enable household and factory robots to generalize to new tasks and improve AI-generated entertainment.

“Sequence models aim to condition on the known past and predict the unknown future, a type of binary masking. However, masking doesn’t need to be binary,” says lead author, MIT electrical engineering and computer science (EECS) PhD student, and CSAIL member Boyuan Chen. “With Diffusion Forcing, we add different levels of noise to each token, effectively serving as a type of fractional masking. At test time, our system can ‘unmask’ a collection of tokens and diffuse a sequence in the near future at a lower noise level. It knows what to trust within its data to overcome out-of-distribution inputs.”
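
The sketch below gives a highly simplified picture of that training objective: each token in a sequence is corrupted with its own independently sampled noise level, and a small causal model learns to recover the clean tokens given those levels, so that fully noised future tokens reduce to next-token prediction and partially noised ones to diffusion-style denoising. It is not the authors’ implementation; the architecture, noise schedule, and stand-in data are assumptions chosen only to keep the example self-contained.

```python
# Highly simplified sketch of a Diffusion Forcing-style training objective,
# not the authors' implementation: every token gets its own independently
# sampled noise level, and a causal model is trained to recover the clean tokens.
import torch
import torch.nn as nn

T, D, N_STEPS = 16, 32, 100                      # sequence length, token dim, diffusion steps
betas = torch.linspace(1e-4, 0.02, N_STEPS)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)   # standard DDPM-style noise schedule

class TinyCausalDenoiser(nn.Module):
    """Toy causal transformer that predicts the clean token at each position,
    conditioned on the noisy sequence and each token's noise level."""
    def __init__(self, d=D):
        super().__init__()
        self.in_proj = nn.Linear(d + 1, d)       # noisy token + its (normalized) noise level
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.out_proj = nn.Linear(d, d)

    def forward(self, noisy_tokens, noise_levels):
        h = self.in_proj(torch.cat([noisy_tokens, noise_levels.unsqueeze(-1)], dim=-1))
        causal_mask = nn.Transformer.generate_square_subsequent_mask(noisy_tokens.size(1))
        return self.out_proj(self.backbone(h, mask=causal_mask))

model = TinyCausalDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):                          # toy training loop on random sequences
    clean = torch.randn(8, T, D)                 # stand-in for video frames or action tokens
    k = torch.randint(0, N_STEPS, (8, T))        # independent noise level per token
    a = alpha_bars[k].unsqueeze(-1)
    noisy = a.sqrt() * clean + (1 - a).sqrt() * torch.randn_like(clean)

    pred = model(noisy, k.float() / N_STEPS)
    loss = ((pred - clean) ** 2).mean()          # predict the clean token (x0-parameterization)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```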

In several experiments, Diffusion Forcing excelled at ignoring misleading data to execute tasks while anticipating future actions.

When implemented into a robotic arm, for example, it helped swap two toy fruits across three circular mats, a minimal example of a family of long-horizon tasks that require memory. The researchers trained the robot by controlling it from a distance (or teleoperating it) in virtual reality, teaching it to mimic the user’s movements from its camera feed. Despite starting from random positions and seeing distractions like a shopping bag blocking the markers, it placed the objects in their target spots.

To generate videos, they trained Diffusion Forcing on “Minecraft” gameplay and colorful digital environments created within Google’s DeepMind Lab Simulator. When given a single frame of footage, the method produced more stable, higher-resolution videos than comparable baselines like a Sora-like full-sequence diffusion model and ChatGPT-like next-token models. These approaches created videos that appeared inconsistent, with the latter sometimes failing to generate working video past just 72 frames.

Diffusion Forcing not only generates fancy videos, but can also serve as a motion planner that steers toward desired outcomes or rewards. Thanks to its flexibility, Diffusion Forcing can uniquely generate plans with varying horizons, perform tree search, and incorporate the intuition that the distant future is more uncertain than the near future. In the task of solving a 2D maze, Diffusion Forcing outperformed six baselines by generating faster plans leading to the goal location, indicating that it could be an effective planner for robots in the future.

Across each demo, Diffusion Forcing acted as a full sequence model, a next-token prediction model, or both. According to Chen, this versatile approach could potentially serve as a powerful backbone for a “world model,” an AI system that can simulate the dynamics of the world by training on billions of internet videos. This would allow robots to perform novel tasks by imagining what they need to do based on their surroundings. For example, if you asked a robot to open a door it had never been trained to open, the model could produce a video that shows the machine how to do it.

The team is currently looking to scale up their method to larger datasets and the latest transformer models to improve performance. They intend to broaden their work to build a ChatGPT-like robot brain that helps robots perform tasks in new environments without human demonstration.

“With Diffusion Forcing, we are taking a step to bringing video generation and robotics closer together,” says senior author Vincent Sitzmann, MIT assistant professor and member of CSAIL, where he leads the Scene Representation group. “In the end, we hope that we can use all the knowledge stored in videos on the internet to enable robots to help in everyday life. Many more exciting research challenges remain, like how robots can learn to imitate humans by watching them even when their own bodies are so different from our own!”

Chen and Sitzmann wrote the paper alongside recent MIT visiting researcher Diego Martí Monsó, and CSAIL affiliates: Yilun Du, an EECS graduate student; Max Simchowitz, former postdoc and incoming Carnegie Mellon University assistant professor; and Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering at MIT, vice president of robotics research at the Toyota Research Institute, and CSAIL member. Their work was supported, in part, by the U.S. National Science Foundation, the Singapore Defence Science and Technology Agency, Intelligence Advanced Research Projects Activity via the U.S. Department of the Interior, and the Amazon Science Hub. They will present their research at NeurIPS in December.


Equipping doctors with AI co-pilots

Alumni-founded Ambience Healthcare automates routine tasks for clinicians before, during, and after patient visits.


Most doctors go into medicine because they want to help patients. But today’s health care system requires that doctors spend hours each day on other work — searching through electronic health records (EHRs), writing documentation, coding and billing, prior authorization, and utilization management — often surpassing the time they spend caring for patients. The situation leads to physician burnout, administrative inefficiencies, and worse overall care for patients.

Ambience Healthcare is working to change that with an AI-powered platform that automates routine tasks for clinicians before, during, and after patient visits.

"We build co-pilots to give clinicians AI superpowers," says Ambience CEO Mike Ng MBA ’16, who co-founded the company with Nikhil Buduma ’17. "Our platform is embedded directly into EHRs to free up clinicians to focus on what matters most, which is providing the best possible patient care."

Ambience’s suite of products handles pre-charting and real-time AI scribing, and assists with navigating the thousands of rules to select the right insurance billing codes. The platform can also send after-visit summaries to patients and their families in different languages to keep everyone informed and on the same page.

Ambience is already being used across roughly 40 large institutions, including UCSF Health, the Memorial Hermann Health System, St. Luke’s Health System, and John Muir Health. Clinicians leverage Ambience in dozens of languages and more than 100 specialties and subspecialties, in settings ranging from the emergency department and hospital inpatient units to the oncology ward.

The founders say clinicians using Ambience save two to three hours per day on documentation, report lower levels of burnout, and develop higher-quality relationships with their patients.

From problem to product to platform

Ng worked in finance until getting an up-close look at the health care system after he fractured his back in 2012. He was initially misdiagnosed and put on the wrong care plan, but he learned a lot about the U.S. health system in the process, including how the majority of clinicians’ days are spent documenting visits, selecting billing codes, and completing other administrative tasks. The average clinician only spends 27 percent of their time on direct patient care.

In 2014, Ng decided to enter the MIT Sloan School of Management. During his first week, he attended the “t=0” celebration of entrepreneurship hosted by the Martin Trust Center for MIT Entrepreneurship, where he met Buduma. The pair became fast friends, and they ended up taking classes together including 15.378 (Building an Entrepreneurial Venture) and 15.392 (Scaling Entrepreneurial Ventures).

“MIT was an incredible training ground to evaluate what makes a great company and learn the foundations of building a successful company,” Ng says.

Buduma had gone through his own journey to discover problems with the health care system. After immigrating to the U.S. from India as a child and battling persistent health issues, he had watched his parents struggle to navigate the U.S. medical system. While completing his bachelor’s degree at MIT, he was also steeped in the AI research community and wrote an early textbook on modern AI and deep learning.

In 2016, Ng and Buduma founded their first company in San Francisco — Remedy Health — which operated its own AI-powered health care platform. In the process of hiring clinicians, taking care of patients, and implementing technology themselves, they developed an even deeper appreciation for the challenges that health care organizations face.

During that time, they also got an inside look at advances in AI. Google’s Chief Scientist Jeff Dean, a major investor in Remedy and now in Ambience, led the research group within Google Brain that invented the transformer architecture. Ng and Buduma say they were among the first to put transformers into production to support their own clinicians at Remedy. Subsequently, several of their friends and housemates went on to start the large language model group within OpenAI. Their friends’ work formed the research foundations that ultimately led to ChatGPT.

“It was very clear that we were at this inflection point where we were going to have this class of general-purpose models that were going to get exponentially better,” Buduma says. “But I think we also noticed a big gap between those general-purpose models versus what actually would be robust enough to work in a clinic. Mike and I decided in 2020 that there should be a team that specifically focused on fine-tuning these models for health care and medicine.”

The founders started Ambience by building an AI-powered scribe that works on phones and laptops to record the details of doctor-patient visits in a HIPAA-compliant system that preserves patient privacy. They quickly saw that the models needed to be fine-tuned for each area of medicine, and they slowly expanded specialty coverage one by one in a multiyear process.

The founders also realized their scribes needed to fit within back-office operations like insurance coding and billing.

“Documentation isn’t just for the clinician, it's also for the revenue cycle team,” Buduma says. “We had to go back and rewrite all of our algorithms to be coding-aware. There are literally tens of thousands of coding rules that change every year and differ by specialty and contract type.”

From there, the founders built out models for clinicians to make referrals and to send comprehensive summaries of visits to patients.

“In most care settings before Ambience, when a patient and their family left the clinic, whatever the patient and their family wrote down was what they remembered from the visit,” Buduma says. “That’s one of the features that physicians love most, because they are trying to create the best experience for patients and their families. By the time that patient is in the parking lot, they already have a really robust, high-quality summary of exactly what you talked about and all the shared decision-making around your visit in their portal.”

Democratizing health care

By improving physician productivity, the founders believe they’re helping the health care system manage a chronic shortage of clinicians that’s expected to grow in coming years.

“In health care, access is still a huge problem,” Ng says. “Rural Americans have a 40 percent higher risk of preventable hospitalization, and half of that is attributed to a lack of access to specialty care.”

With Ambience already helping health systems manage razor-thin margins by streamlining administrative tasks, the founders have a longer-term vision to help increase access to the best clinical information across the country.

“There’s a really exciting opportunity to make expertise at some of the major academic medical centers more democratized across the U.S.,” Ng says. “Right now, there’s just not enough specialists in the U.S. to support our rural populations. We hope to help scale the knowledge of the leading specialists in the country through an AI infrastructure layer, especially as these models become more clinically intelligent.”


An exotic-materials researcher with the soul of an explorer

Associate professor of physics Riccardo Comin never stops seeking uncharted territory.


Riccardo Comin says the best part of his job as a physics professor and exotic-materials researcher is when his students come into his office to tell him they have new, interesting data.

“It’s that moment of discovery, that moment of awe, of revelation of something that’s outside of anything you know,” says Comin, the Class of 1947 Career Development Associate Professor of Physics. “That’s what makes it all worthwhile.”

Intriguing data energizes Comin because it can potentially grant access to an unexplored world. His team has discovered materials with quantum and other exotic properties that could find a range of applications, such as handling the world’s exploding quantities of data, enabling more precise medical imaging, and vastly improving energy efficiency. For Comin, who has always been somewhat of an explorer, new discoveries satisfy a kind of intellectual wanderlust.

As a small child growing up in the city of Udine in northeast Italy, Comin loved geography and maps, even drawing his own of imaginary cities and countries. He traveled literally, too, touring Europe with his parents; his father, a project manager on large projects for the Italian railroads, received free train travel.

Comin also loved numbers from an early age, and by about eighth grade would go to the public library to delve into math textbooks about calculus and analytical geometry that were far beyond what he was being taught in school. Later, in high school, Comin enjoyed being challenged by a math and physics teacher who in class would ask him questions about extremely advanced concepts.

“My classmates were looking at me like I was an alien, but I had a lot of fun,” Comin says.

Unafraid to venture alone into more rarefied areas of study, Comin nonetheless sought community, and appreciated the rapport he had with his teacher.

“He gave me the kind of interaction I was looking for, because otherwise it would have been just me and my books,” Comin says. “He helped transform an isolated activity into a social one. He made me feel like I had a buddy.”

By the end of his undergraduate studies at the University of Trieste, Comin says he decided on experimental physics, to have “the opportunity to explore and observe physical phenomena.”

He visited a nearby research facility that houses the Elettra Synchrotron to look for a research position where he could work on his undergraduate thesis, and became interested in all of the materials science research being conducted there. Drawn to community as well as the research, he chose a group that was investigating how the atoms and molecules in a liquid can rearrange themselves to become a glass.

“This one group struck me. They seemed to really enjoy what they were doing, and they had fun outside of work and enjoyed the outdoors,” Comin says. “They seemed to be a nice group of people to be part of. I think I cared more about the social environment than the specific research topic.”

By the time Comin was finishing his master’s, also in Trieste, and wanted to get a PhD, his focus had turned to electrons inside a solid rather than the behavior of atoms and molecules. Having traveled “literally almost everywhere in Europe,” Comin says he wanted to experience a different research environment outside of Europe.

He told his academic advisor he wanted to go to North America and was connected with Andrea Damascelli, the Canada Research Chair in Electronic Structure of Quantum Materials at the University of British Columbia, who was working on high-temperature superconductors. Comin says he was fascinated by the behavior of the electrons in the materials Damascelli and his group were studying.

“It’s almost like a quantum choreography, particles that dance together” rather than moving in many different directions, Comin says.

Comin’s subsequent postdoctoral work at the University of Toronto, focusing on optoelectronic materials — which can interact with photons and electrical energy — ignited his passion for connecting a material’s properties to its functionality and bridging the gap between fundamental physics and real-world applications.

Since coming to MIT in 2016, Comin has continued to delight in the behavior of electrons. He and Joe Checkelsky, associate professor of physics, had a breakthrough with a new class of materials in which electrons, very atypically, are nearly stationary.

Such materials could be used to explore zero energy loss, such as from power lines, and new approaches to quantum computing.

“It’s a very peculiar state of matter,” says Comin. “Normally, electrons are just zapping around. If you put an electron in a crystalline environment, what that electron will want to do is hop around, explore its neighbors, and basically be everywhere at the same time.”

The more sedentary electrons occur in materials whose structure of interlaced triangles and hexagons tends to trap electrons on the hexagons. Because these trapped electrons all have the same energy, they form what is called an electronic flat band, named for the pattern that appears when their energies are measured. Such flat bands had been predicted theoretically, but they had not been observed.
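The triangle-and-hexagon geometry described here resembles a kagome lattice, for which a textbook nearest-neighbor tight-binding model already yields one exactly flat band. The sketch below is a generic illustration of that trapping effect, not a model of the specific compounds studied at MIT; it diagonalizes the toy Hamiltonian at random momenta and shows that one band stays pinned at the same energy.

```python
import numpy as np

# Toy nearest-neighbor tight-binding model on a kagome lattice (three sites
# per unit cell). One of its three bands is exactly flat: every momentum k
# gives the same eigenvalue, 2t. A generic textbook illustration only.

t = 1.0
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3) / 2])
d1, d2, d3 = a1 / 2, a2 / 2, (a2 - a1) / 2   # vectors to nearest neighbors

rng = np.random.default_rng(1)
for _ in range(5):
    k = rng.uniform(-2 * np.pi, 2 * np.pi, size=2)
    c1, c2, c3 = (np.cos(k @ d) for d in (d1, d2, d3))
    H = -2 * t * np.array([[0, c1, c2],
                           [c1, 0, c3],
                           [c2, c3, 0]])
    bands = np.sort(np.linalg.eigvalsh(H))
    print(bands)   # the top band is always 2t, i.e., dispersionless
```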

Comin says he and his colleagues made educated guesses on where to find flat bands, but they were elusive. After three years of research, however, they had a breakthrough.

“We put a sample material in an experimental chamber, we aligned the sample to do the experiment and started the measurement and, literally, five to 10 minutes later, we saw this beautiful flat band on the screen,” Comin says. “It was so clear, like this thing was basically screaming, ‘How could you not find me before?’

“That started off a whole area of research that is growing and growing — and a new direction in our field.”

Comin’s later research has explored certain two-dimensional materials, just a single atom thick, that have an internal structural feature called chirality: a right-handedness or left-handedness, similar to how a spiral twists in one direction or the other. This work has yielded another new realm to explore.

By controlling the chirality, “there are interesting prospects of realizing a whole new class of devices” that could store information in a way that’s more robust and much more energy-efficient than current methods, says Comin, who is affiliated with MIT’s Materials Research Laboratory. Such devices would be especially valuable as the amount of available data and technologies like artificial intelligence grow exponentially.

While investigating these previously unknown properties of certain materials, Comin is characteristically adventurous in his pursuit.

“I embrace the randomness that nature throws at you,” he says. “It appears random, but there could be something behind it, so we try variations, switch things around, see what nature serves you. Much of what we discover is due to luck — and the rest boils down to a mix of knowledge and intuition to recognize when we’re seeing something new, something that’s worth exploring.”


Q&A: How the Europa Clipper will set cameras on a distant icy moon

MIT Research Scientist Jason Soderblom describes how the NASA mission will study the geology and composition of the surface of Jupiter’s water-rich moon and assess its astrobiological potential.


With its latest space mission successfully launched, NASA is set to return for a close-up investigation of Jupiter’s moon Europa. Yesterday at 12:06 p.m. EDT, the Europa Clipper lifted off aboard a SpaceX Falcon Heavy rocket on a mission that will take a close look at Europa’s icy surface. Five years from now, the spacecraft will visit the moon, which hosts a water ocean covered by a water-ice shell. The spacecraft’s mission is to learn more about the composition and geology of the moon’s surface and interior and to assess its astrobiological potential. Because of Jupiter’s intense radiation environment, Europa Clipper will conduct a series of flybys, with its closest approach bringing it within just 16 miles of Europa’s surface.

MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) Research Scientist Jason Soderblom is a co-investigator on two of the spacecraft’s instruments: the Europa Imaging System and the Mapping Imaging Spectrometer for Europa. Over the past nine years, he and his fellow team members have been building imaging and mapping instruments to study Europa’s surface in detail to gain a better understanding of previously seen geologic features, as well as the chemical composition of the materials that are present. Here, he describes the mission's primary plans and goals.

Q: What do we currently know about Europa’s surface?

A: We know from NASA Galileo mission data that the surface crust is relatively thin, but we don’t know how thin it is. One of the goals of the Europa Clipper mission is to measure the thickness of that ice shell. The surface is riddled with fractures that indicate tectonism is actively resurfacing the moon. Its crust is primarily composed of water ice, but there are also exposures of non-ice material along these fractures and ridges that we believe include material coming up from within Europa.

One of the things that makes investigating the materials on the surface more difficult is the environment. Jupiter is a significant source of radiation, and Europa is relatively close to Jupiter. That radiation modifies the materials on the surface; understanding that radiation damage is a key component to understanding the composition.

This is also what drives the clipper-style mission and gives the mission its name: we clip by Europa, collect data, and then spend the majority of our time outside of the radiation environment. That allows us time to download the data, analyze it, and make plans for the next flyby.

Q: Did that pose a significant challenge when it came to instrument design?

A: Yes, and this is one of the reasons that we're just now returning to do this mission. The concept of this mission came about around the time of the Galileo mission in the late 1990s, so it's been roughly 25 years since scientists first wanted to carry out this mission. A lot of that time has been spent figuring out how to deal with the radiation environment.

There's a lot of tricks that we've been developing over the years. The instruments are heavily shielded, and lots of modeling has gone into figuring exactly where to put that shielding. We've also developed very specific techniques to collect data. For example, by taking a whole bunch of short observations, we can look for the signature of this radiation noise, remove it from the little bits of data here and there, add the good data together, and end up with a low-radiation-noise observation.

Q: You're involved with the two different imaging and mapping instruments: the Europa Imaging System (EIS) and the Mapping Imaging Spectrometer for Europa (MISE). How are they different from each other?

A: The camera system [EIS] is primarily focused on understanding the physics and the geology that's driving processes on the surface, looking for: fractured zones; regions that we refer to as chaos terrain, where it looks like icebergs have been suspended in a slurry of water and have jumbled around and mixed and twisted; regions where we believe the surface is colliding and subduction is occurring, so one section of the surface is going beneath the other; and other regions that are spreading, so new surface is being created like our mid-ocean ridges on Earth.

The spectrometer’s [MISE] primary function is to constrain the composition of the surface. In particular, we're really interested in sections where we think liquid water might have come to the surface. It is also important to understand which material comes from within Europa and which is deposited from external sources; separating the two is necessary to determine the composition of the material coming from Europa and, from that, to learn about the composition of the subsurface ocean.

There is an intersection between those two, and that's my interest in the mission. We have color imaging with our imaging system that can provide some crude understanding of the composition, and there is a mapping component to our spectrometer that allows us to understand how the materials that we're detecting are physically distributed and correlate with the geology. So there's a way to examine the intersection of those two disciplines — to extrapolate the compositional information derived from the spectrometer to much higher resolutions using the camera, and to extrapolate the geological information that we learn from the camera to the compositional constraints from the spectrometer.

Q: How do those mission goals align with the research that you've been doing here at MIT?

A: One of the other major missions that I've been involved with was the Cassini mission, primarily working with the Visual and Infrared Mapping Spectrometer team to understand the geology and composition of Saturn's moon Titan. That instrument is very similar to the MISE instrument, both in function and in science objective, and so there's a very strong connection between that and the Europa Clipper mission. Another mission, for which I’m leading the camera team, is working to retrieve a sample of a comet, and my primary function on that mission is understanding the geology of the cometary surface.

Q: What are you most excited about learning from the Europa Clipper mission?

A: I'm most fascinated with some of these very unique geologic features that we see on the surface of Europa, understanding the composition of the material that is involved, and the processes that are driving those features. In particular, the chaos terrains and the fractures that we see on the surface.

Q: It's going to be a while before the spacecraft finally reaches Europa. What work needs to be done in the meantime?

A: A key component of this mission will be the laboratory work here on Earth, expanding our spectral libraries so that when we collect a spectrum of Europa's surface, we can compare it to laboratory measurements. We are also in the process of developing a number of models to allow us to, for example, understand how a material might be processed and change as it starts in the ocean and works its way up through fractures and eventually to the surface. Developing these models now is an important step; once we collect the data, we can make corrections and get improved observations as the mission progresses. Making the best and most efficient use of the spacecraft resources requires an ability to reprogram and refine observations in real time.


Model reveals why debunking election misinformation often doesn’t work

The new study also identifies factors that can make these efforts more successful.


When an election result is disputed, people who are skeptical about the outcome may be swayed by figures of authority who come down on one side or the other. Those figures can be independent monitors, political figures, or news organizations. However, these “debunking” efforts don’t always have the desired effect, and in some cases, they can lead people to cling more tightly to their original position.

Neuroscientists and political scientists at MIT and the University of California at Berkeley have now created a computational model that analyzes the factors that help to determine whether debunking efforts will persuade people to change their beliefs about the legitimacy of an election. Their findings suggest that while debunking fails much of the time, it can be successful under the right conditions.

For instance, the model showed that successful debunking is more likely if people are less certain of their original beliefs and if they believe the authority is unbiased or strongly motivated by a desire for accuracy. It also helps when an authority comes out in support of a result that goes against a bias they are perceived to hold: for example, Fox News declaring that Joseph R. Biden had won in Arizona in the 2020 U.S. presidential election.

“When people see an act of debunking, they treat it as a human action and understand it the way they understand human actions — that is, as something somebody did for their own reasons,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “We’ve used a very simple, general model of how people understand other people’s actions, and found that that’s all you need to describe this complex phenomenon.”

The findings could have implications as the United States prepares for the presidential election taking place on Nov. 5, as they help to reveal the conditions that would be most likely to result in people accepting the election outcome.

MIT graduate student Setayesh Radkani is the lead author of the paper, which appears today in a special election-themed issue of the journal PNAS Nexus. Marika Landau-Wells PhD ’18, a former MIT postdoc who is now an assistant professor of political science at the University of California at Berkeley, is also an author of the study.

Modeling motivation

In their work on election debunking, the MIT team took a novel approach, building on Saxe’s extensive work studying “theory of mind” — how people think about the thoughts and motivations of other people.

As part of her PhD thesis, Radkani has been developing a computational model of the cognitive processes that occur when people see others being punished by an authority. People interpret punitive actions differently, depending on their prior beliefs about the action and the authority. Some may see the authority as acting legitimately to punish an act that was wrong, while others may see an authority overreaching to issue an unjust punishment.

Last year, after participating in an MIT workshop on the topic of polarization in societies, Saxe and Radkani had the idea to apply the model to how people react to an authority attempting to sway their political beliefs. They enlisted Landau-Wells, who received her PhD in political science before working as a postdoc in Saxe’s lab, to join their effort, and she suggested applying the model to debunking of beliefs regarding the legitimacy of an election result.

The computational model created by Radkani is based on Bayesian inference, which allows the model to continually update its predictions of people’s beliefs as they receive new information. This approach treats debunking as an action that a person undertakes for his or her own reasons. People who observe the authority’s statement then make their own interpretation of why the person said what they did. Based on that interpretation, people may or may not change their own beliefs about the election result.

Additionally, the model does not assume that any beliefs are necessarily incorrect or that any group of people is acting irrationally.

“The only assumption that we made is that there are two groups in the society that differ in their perspectives about a topic: One of them thinks that the election was stolen and the other group doesn’t,” Radkani says. “Other than that, these groups are similar. They share their beliefs about the authority — what the different motives of the authority are and how motivated the authority is by each of those motives.”

The researchers modeled more than 200 different scenarios in which an authority attempts to debunk a belief held by one group regarding the validity of an election outcome.

Each time they ran the model, the researchers altered the certainty levels of each group’s original beliefs, and they also varied the groups’ perceptions of the motivations of the authority. In some cases, groups believed the authority was motivated by promoting accuracy, and in others they did not. The researchers also altered the groups’ perceptions of whether the authority was biased toward a particular viewpoint, and how strongly the groups believed in those perceptions.
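For intuition, here is a toy Bayesian update of the kind the model formalizes. The likelihood function and every number below are invented for illustration; they are not the parameters or equations used in the paper.

```python
# Toy sketch: a group updates its belief that the election was stolen after an
# authority declares it "legitimate," given the group's view of the authority's
# motives. Simplified illustration only; all values are made up.

def update(prior_stolen, p_accuracy, p_bias_legit, statement="legitimate"):
    # An accuracy-driven authority mostly reports the truth; a biased one says
    # "legitimate" at a fixed rate regardless of the truth.
    p_say_legit_if_fair   = p_accuracy * 0.95 + (1 - p_accuracy) * p_bias_legit
    p_say_legit_if_stolen = p_accuracy * 0.05 + (1 - p_accuracy) * p_bias_legit
    if statement == "legitimate":
        like_fair, like_stolen = p_say_legit_if_fair, p_say_legit_if_stolen
    else:
        like_fair, like_stolen = 1 - p_say_legit_if_fair, 1 - p_say_legit_if_stolen
    return prior_stolen * like_stolen / (
        prior_stolen * like_stolen + (1 - prior_stolen) * like_fair)

# A group that sees the authority as accuracy-motivated moves a lot...
print(update(prior_stolen=0.6, p_accuracy=0.9, p_bias_legit=0.5))   # ~0.14
# ...while a group that sees it as biased toward "legitimate" barely moves.
print(update(prior_stolen=0.6, p_accuracy=0.1, p_bias_legit=0.9))   # ~0.58
```

In this toy version, the same statement sharply moves the group that trusts the authority's accuracy while leaving the group that suspects bias almost unchanged, mirroring the role that perceived motives play in the scenarios described above.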

Building consensus

In each scenario, the researchers used the model to predict how each group would respond to a series of five statements made by an authority trying to convince them that the election had been legitimate. The researchers found that in most of the scenarios they looked at, beliefs remained polarized and in some cases became even further polarized. This polarization could also extend to new topics unrelated to the original context of the election, the researchers found.

However, under some circumstances, the debunking was successful, and beliefs converged on an accepted outcome. This was more likely to happen when people were initially more uncertain about their original beliefs.

“When people are very, very certain, they become hard to move. So, in essence, a lot of this authority debunking doesn’t matter,” Landau-Wells says. “However, there are a lot of people who are in this uncertain band. They have doubts, but they don’t have firm beliefs. One of the lessons from this paper is that we’re in a space where the model says you can affect people’s beliefs and move them towards true things.”

Another factor that can lead to belief convergence is if people believe that the authority is unbiased and highly motivated by accuracy. Even more persuasive is when an authority makes a claim that goes against their perceived bias — for instance, Republican governors stating that elections in their states had been fair even though the Democratic candidate won.

As the 2024 presidential election approaches, grassroots efforts have been made to train nonpartisan election observers who can vouch for whether an election was legitimate. These types of organizations may be well-positioned to help sway people who might have doubts about the election’s legitimacy, the researchers say.

“They’re trying to train people to be independent, unbiased, and committed to the truth of the outcome more than anything else. Those are the types of entities that you want. We want them to succeed in being seen as independent. We want them to succeed in being seen as truthful, because in this space of uncertainty, those are the voices that can move people toward an accurate outcome,” Landau-Wells says.

The research was funded, in part, by the Patrick J. McGovern Foundation and the Guggenheim Foundation.


MIT team takes a major step toward fully 3D-printed active electronics

By fabricating semiconductor-free logic gates, which can be used to perform computation, researchers hope to streamline the manufacture of electronics.


Active electronics — components that can control electrical signals — usually contain semiconductor devices that receive, store, and process information. These components, which must be made in a clean room, require advanced fabrication technology that is not widely available outside a few specialized manufacturing centers.

During the Covid-19 pandemic, the lack of widespread semiconductor fabrication facilities was one cause of a worldwide electronics shortage, which drove up costs for consumers and had implications in everything from economic growth to national defense. The ability to 3D print an entire, active electronic device without the need for semiconductors could bring electronics fabrication to businesses, labs, and homes across the globe.

While this idea is still far off, MIT researchers have taken an important step in that direction by demonstrating fully 3D-printed resettable fuses, which are key components of active electronics that usually require semiconductors.

The researchers’ semiconductor-free devices, which they produced using standard 3D printing hardware and an inexpensive, biodegradable material, can perform the same switching functions as the semiconductor-based transistors used for processing operations in active electronics.

Although still far from achieving the performance of semiconductor transistors, the 3D-printed devices could be used for basic control operations like regulating the speed of an electric motor.

“This technology has real legs. While we cannot compete with silicon as a semiconductor, our idea is not to necessarily replace what is existing, but to push 3D printing technology into uncharted territory. In a nutshell, this is really about democratizing technology. This could allow anyone to create smart hardware far from traditional manufacturing centers,” says Luis Fernando Velásquez-García, a principal research scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper describing the devices, which appears in Virtual and Physical Prototyping.

He is joined on the paper by lead author Jorge Cañada, an electrical engineering and computer science graduate student.

An unexpected project

Semiconductors, including silicon, are materials with electrical properties that can be tailored by adding certain impurities. A silicon device can have conductive and insulating regions, depending on how it is engineered. These properties make silicon ideal for producing transistors, which are a basic building block of modern electronics.

However, the researchers didn’t set out to 3D-print semiconductor-free devices that could behave like silicon-based transistors.

This project grew out of another in which they were fabricating magnetic coils using extrusion printing, a process in which the printer melts filament and squirts material through a nozzle, building an object layer by layer.

They saw an interesting phenomenon in the material they were using, a polymer filament doped with copper nanoparticles.

If they passed a large amount of electric current into the material, it would exhibit a huge spike in resistance but would return to its original level shortly after the current flow stopped.

This property makes it possible to build devices that operate as switches, a role typically reserved for silicon and other semiconductor transistors. Transistors, which switch on and off to process binary data, are used to form logic gates that perform computation.

“We saw that this was something that could help take 3D printing hardware to the next level. It offers a clear way to provide some degree of ‘smart’ to an electronic device,” Velásquez-García says.

The researchers tried to replicate the same phenomenon with other 3D printing filaments, testing polymers doped with carbon, carbon nanotubes, and graphene. In the end, they could not find another printable material that could function as a resettable fuse.

They hypothesize that the copper particles in the material spread out when it is heated by the electric current, which causes a spike in resistance that comes back down when the material cools and the copper particles move closer together. They also think the polymer base of the material changes from crystalline to amorphous when heated, then returns to crystalline when cooled down — a phenomenon known as the polymeric positive temperature coefficient.
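To give a feel for the resettable-fuse behavior being described, here is a toy lumped thermal simulation: current self-heats a printed trace, its resistance rises steeply past a transition temperature, the current drops, and the trace recovers once power is removed. This is not the paper's model, and every parameter is a made-up placeholder.

```python
import numpy as np

# Toy self-heating model of a resettable fuse: a positive-temperature-
# coefficient (PTC) resistance rises steeply above a transition temperature,
# fed by Joule heating and cooled toward ambient. Invented placeholder values.

def resistance(T, R0=100.0, T_trip=80.0, width=5.0, ratio=1000.0):
    """Smooth step from R0 up toward R0 * ratio around T_trip (degrees C)."""
    return R0 * (1.0 + (ratio - 1.0) / (1.0 + np.exp(-(T - T_trip) / width)))

V_supply, T_amb = 24.0, 25.0         # volts, degrees C
C_th, k_cool, dt = 0.05, 0.02, 0.01  # thermal mass (J/C), cooling (W/C), time step (s)

T = T_amb
for step in range(4000):
    V = V_supply if step < 2000 else 0.0   # apply power, then remove it
    R = resistance(T)
    I = V / R
    P = I ** 2 * R                         # Joule heating in the trace
    T += dt * (P - k_cool * (T - T_amb)) / C_th
    if step % 500 == 0:
        print(f"t={step * dt:5.1f}s  T={T:6.1f}C  R={R:8.1f}ohm  I={1000 * I:7.2f}mA")
```

In this toy run the trace heats up, its rising resistance throttles the current (the self-limiting behavior that makes such a device useful as a switch or current limiter), and once the voltage is removed it cools and the resistance returns to its starting value, i.e., it resets.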

“For now, that is our best explanation, but that is not the full answer because that doesn’t explain why it only happened in this combination of materials. We need to do more research, but there is no doubt that this phenomenon is real,” he says.

3D-printing active electronics

The team leveraged the phenomenon to print switches in a single step that could be used to form semiconductor-free logic gates.

The devices are made from thin, 3D-printed traces of the copper-doped polymer. They contain intersecting conductive regions that enable the researchers to regulate the resistance by controlling the voltage fed into the switch.

While the devices did not perform as well as silicon-based transistors, they could be used for simpler control and processing functions, such as turning a motor on and off. In experiments, the devices showed no signs of deterioration even after 4,000 switching cycles.

But there are limits to how small the researchers can make the switches, based on the physics of extrusion printing and the properties of the material. They could print devices a few hundred microns across, but transistors in state-of-the-art electronics measure only a few nanometers.

“The reality is that there are many engineering situations that don’t require the best chips. At the end of the day, all you care about is whether your device can do the task. This technology is able to satisfy a constraint like that,” he says.

However, unlike semiconductor fabrication, their technique uses a biodegradable material and the process uses less energy and produces less waste. The polymer filament could also be doped with other materials, like magnetic microparticles that could enable additional functionalities.

In the future, the researchers want to use this technology to print fully functional electronics. They are striving to fabricate a working magnetic motor using only extrusion 3D printing. They also want to fine-tune the process so they can build more complex circuits and see how far they can push the performance of these devices.

“This paper demonstrates that active electronic devices can be made using extruded polymeric conductive materials. This technology enables electronics to be built into 3D printed structures. An intriguing application is on-demand 3D printing of mechatronics on board spacecraft,” says Roger Howe, the William E. Ayer Professor of Engineering, Emeritus, at Stanford University, who was not involved with this work.

This work is funded, in part, by Empiriko Corporation.


MIT economists Daron Acemoglu and Simon Johnson share Nobel Prize

Along with James Robinson, the professors are honored for work on the relationship between economic growth and political institutions.


MIT economists Daron Acemoglu and Simon Johnson PhD ’89, whose work has illuminated the relationship between political systems and economic growth, have been named winners of the 2024 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. Political scientist James Robinson of the University of Chicago, with whom they have worked closely, also shares the award.

“Societies with a poor rule of law and institutions that exploit the population do not generate growth or change for the better,” the Swedish Royal Academy of Sciences stated in the Nobel citation. “The laureates’ research helps us understand why.”

The long-term research collaboration between Acemoglu, Johnson, and Robinson, which extends back for more than two decades, has empirically demonstrated that democracies, which hold to the rule of law and provide individual rights, have spurred greater economic activity over the last 500 years.

“I am just amazed and absolutely delighted,” Acemoglu told MIT News this morning, about receiving the Nobel Prize. Separately, Johnson told MIT News he was “surprised and delighted” by the announcement.

MIT President Sally Kornbluth congratulated both professors at an Institute press conference this morning, saying that Acemoglu and Johnson “reflect a kind of MIT ideal” in terms of the excellence and rigor of their work and their commitment to collaboration. Their research, Kornbluth added, represents “a very MIT interest in making a positive impact in the real world.”

In their work, Acemoglu, Johnson, and Robinson make a distinction between “inclusive” political governments, which extend political liberties and property rights as broadly as possible while enforcing laws and providing public infrastructure, and “extractive” political systems, where power is wielded by a small elite.

Overall, the scholars have found, inclusive governments experience the greatest growth in the long run. By contrast, countries with extractive governments either fail to generate broad-based growth or see their growth wither away after short bursts of economic expansion.

More specifically, because economic growth depends heavily on widespread technological innovation, such advances are only sustained when and where countries promote an array of individual rights, including property rights, giving more people the incentive to invent things. Elites may resist innovation, change, and growth to hold on to power, but without the rule of law and a stable set of rights, innovation and growth stall.

“Both political and economic inclusion matter, and they are synergistic,” Acemoglu said during the MIT press conference.

The scholarship of Acemoglu, Johnson, and Robinson has often been historically grounded, using the varying introduction of inclusive institutions, including the rule of law and property rights, to analyze their effects on growth.

As Acemoglu told MIT News, the scholars have used history “as a kind of lab, to understand how different institutional trajectories have different long-term effects on economic growth.”

For his part, Johnson said about the prize, “I hope it encourages people to think carefully about history. History matters.” That does not mean that the past is all-determinative, he added, but rather, it is essential to understand the crucial historical factors that shape the development of nations.

In a related line of research cited by the Swedish Royal Academy of Sciences, Acemoglu, Johnson, and Robinson have helped build models to account for political changes in many countries, analyzing the factors that shape historical transitions of government.

Acemoglu is an Institute Professor at MIT. He has also made notable contributions to labor economics by examining the relationship between skills and wages, and the effects of automation on employment and growth. Additionally, he has published influential papers on the characteristics of industrial networks and their large-scale implications for economies.

A native of Turkey, Acemoglu received his BA in 1989 from the University of York, in England. He earned his master’s degree in 1990 and his PhD in 1992, both from the London School of Economics. He joined the MIT faculty in 1993 and has remained at the Institute ever since. Currently a professor in MIT’s Department of Economics, an affiliate at the MIT Sloan School of Management, and a core member of the Institute for Data, Systems, and Society, Acemoglu has authored or co-authored over 120 peer-reviewed papers and published four books. He has also advised over 60 PhD students at MIT.

“MIT has been a wonderful environment for me,” Acemoglu told MIT News. “It's an intellectually rich place, and an intellectually honest place. I couldn't ask for a better institution.”

Johnson is the Ronald A. Kurtz Professor of Entrepreneurship at MIT Sloan. He has also written extensively about a broad range of additional topics, including development issues, the finance sector and regulation, fiscal policy, and the ways technology can either enhance or restrict broad prosperity.

A native of England, Johnson received his BA in economics and politics from Oxford University, an MA in economics from the University of Manchester, and his PhD in economics from MIT in 1989. From 2007 to 2008, Johnson was chief economist of the International Monetary Fund.

“I think of MIT as my intellectual home,” Johnson told MIT News. “I am immensely grateful to the Institute, which has a special and creative atmosphere of rigorous problem-solving.”

Acemoglu and Robinson first published papers on the topic in 2000. The trio of Acemoglu, Johnson, and Robinson published their first joint study in 2001, an influential paper in the American Economic Review detailing their empirical findings. Acemoglu and Robinson published their first co-authored book on the subject, “Economic Origins of Dictatorship and Democracy,” in 2006.

Acemoglu and Robinson are co-authors of the prominent book “Why Nations Fail,” from 2012, which also synthesized much of the trio’s research about political institutions and growth.

Acemoglu and Robinson’s subsequent book “The Narrow Corridor,” published in 2019, examined the historical development of rights and liberties in nation-states. They make the case that political liberty does not have a universal template, but stems from social struggle. As Acemoglu said in 2019, it comes from the “messy process of society mobilizing, people defending their own liberties, and actively setting constraints on how rules and behaviors are imposed on them.”

Acemoglu and Johnson are co-authors of the 2023 book “Power and Progress: Our 1,000-Year Struggle over Technology and Prosperity,” in which they examine artificial intelligence in light of other historical battles for the economic benefits of technological innovation.

Johnson is also co-author of “13 Bankers” (2010), with James Kwak, an examination of U.S. regulation of the finance sector, and “Jump-Starting America” (2019), co-authored with MIT economist Jonathan Gruber, a call for more investment in scientific research and innovation in the U.S.

Gruber, as head of the MIT Department of Economics, praised both scholars for their accomplishments.

“Daron Acemoglu is the economists’ economist,” Gruber said. “Daron is a throwback as an expert across a broad swath of fields, mastering topics from political economy to macroeconomics to labor economics — and he could have won Nobels in any of them. Yet perhaps Daron’s most lasting contribution is his essential work on how institutions determine economic growth. This work fundamentally changed the field of political economy and will be an enduring legacy that forever shapes our thinking about why nations succeed — and fail. At MIT, we recognize Daron not just as an epic scholar but as an epic colleague. Despite being an Institute Professor who is freed from departmental responsibilities, he teaches many courses every year and advises a huge share of our graduate student body.”

About Johnson, Gruber said: “Simon Johnson is an amazing economist, a terrific co-author, and a wonderful person. No one I know is better at translating the esoteric insights of our field into the type of concise explanations that bring economics to the attention of the public and policymakers. Simon doesn’t just do the fundamental research that changes how the profession thinks about essential issues — he speaks to the hearts and minds of those who need to hear that message.”

Agustin Rayo, dean of MIT’s School of Humanities, Arts, and Social Sciences, home to the Department of Economics, hailed today’s Nobel Prize as well.

“This award is deeply deserved,” Rayo said. “Daron is the sort of economist who shifts the way you see the world. He is an extraordinary example of the transformative work that is generated by MIT's Department of Economics.”

“All of us at MIT Sloan are very proud of Simon Johnson and Daron Acemoglu’s accomplishments,” said Georgia Perakis, the interim John C. Head III Dean of MIT Sloan. “Their work with Professor Robinson is important in understanding prosperity in societies and provides valuable lessons for us all during this time in the world. Their scholarship is a clear example of work that has meaningful impact. I share my heartiest congratulations with both Simon and Daron on this incredible honor.”

Previously, eight people have won the award while serving on the MIT faculty: Paul Samuelson (1970), Franco Modigliani (1985), Robert Solow (1987), Peter Diamond (2010), Bengt Holmström (2016), Abhijit Banerjee and Esther Duflo (2019), and Josh Angrist (2021). Through 2022, 13 MIT alumni have won the Nobel Prize in economics; eight former faculty have also won the award.


MIT releases financials and endowment figures for 2024

The Institute’s pooled investments returned 8.9 percent last year; endowment stands at $24.6 billion.


The Massachusetts Institute of Technology Investment Management Company (MITIMCo) announced today that MIT’s unitized pool of endowment and other MIT funds generated an investment return of 8.9 percent during the fiscal year ending June 30, 2024, as measured using valuations received within one month of fiscal year end. At the end of the fiscal year, MIT’s endowment funds totaled $24.6 billion, excluding pledges. Over the 10 years ending June 30, 2024, MIT generated an annualized return of 10.5 percent.

MIT’s endowment is intended to support current and future generations of MIT scholars with the resources needed to advance knowledge, research, and innovation. As such, endowment funds are used for Institute activities including education, research, campus renewal, faculty work, and student financial aid.

The Institute’s need-blind undergraduate admissions policy ensures that an MIT education is accessible to all qualified candidates regardless of financial resources. MIT works closely with all families who qualify for financial aid to develop an individual affordability plan tailored to their financial circumstances. In 2023-24, the average need-based MIT scholarship was $59,510. Fifty-eight percent of MIT undergraduates received need-based financial aid, and 39 percent of MIT undergraduate students received scholarship funding from MIT and other sources sufficient to cover the total cost of tuition.

Effective in fiscal 2023, MIT enhanced undergraduate financial aid, ensuring that all families with incomes below $140,000 and typical assets have tuition fully covered by scholarships. MIT further enhanced undergraduate financial aid effective in fiscal 2025, and families with incomes below $75,000 and typical assets have no expectation of parental contribution. Eighty-seven percent of seniors who graduated in academic year 2024 graduated with no debt.

MITIMCo is a unit of MIT, created to manage and oversee the investment of the Institute’s endowment, retirement, and operating funds.

MIT’s Report of the Treasurer for fiscal year 2024 was made available publicly today.


Tiny magnetic discs offer remote brain stimulation without transgenes

The devices could be a useful tool for biomedical research, and possible clinical use in the future.


Novel magnetic nanodiscs could provide a much less invasive way of stimulating parts of the brain, paving the way for stimulation therapies without implants or genetic modification, MIT researchers report.

The scientists envision that the tiny discs, which are about 250 nanometers across (about 1/500 the width of a human hair), would be injected directly into the desired location in the brain. From there, they could be activated at any time simply by applying a magnetic field outside the body. The new particles could quickly find applications in biomedical research, and eventually, after sufficient testing, might be applied to clinical uses.

The development of these nanoparticles is described in the journal Nature Nanotechnology, in a paper by Polina Anikeeva, a professor in MIT’s departments of Materials Science and Engineering and Brain and Cognitive Sciences, graduate student Ye Ji Kim, and 17 others at MIT and in Germany.

Deep brain stimulation (DBS) is a common clinical procedure that uses electrodes implanted in the target brain regions to treat symptoms of neurological and psychiatric conditions such as Parkinson’s disease and obsessive-compulsive disorder. Despite its efficacy, the surgical difficulty and clinical complications associated with DBS limit the number of cases where such an invasive procedure is warranted. The new nanodiscs could provide a much more benign way of achieving the same results.

Over the past decade, other implant-free methods of producing brain stimulation have been developed. However, these approaches were often limited by their spatial resolution or their ability to target deep regions. For the past decade, Anikeeva’s Bioelectronics group, as well as others in the field, has used magnetic nanomaterials to transduce remote magnetic signals into brain stimulation. However, these magnetic methods rely on genetic modification and can’t be used in humans.

Since all nerve cells are sensitive to electrical signals, Kim, a graduate student in Anikeeva’s group, hypothesized that a magnetoelectric nanomaterial that can efficiently convert magnetization into electrical potential could offer a path toward remote magnetic brain stimulation. Creating a nanoscale magnetoelectric material was, however, a formidable challenge.

Kim synthesized novel magnetoelectric nanodiscs and collaborated with Noah Kent, a postdoc in Anikeeva’s lab with a background in physics who is a second author of the study, to understand the properties of these particles.

The structure of the new nanodiscs consists of a two-layer magnetic core and a piezoelectric shell. The magnetic core is magnetostrictive, which means it changes shape when magnetized. This deformation induces strain in the piezoelectric shell, which in turn produces a varying electrical polarization. Through the combination of the two effects, these composite particles can deliver electrical pulses to neurons when exposed to magnetic fields.
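Schematically, this strain-mediated conversion can be summarized as follows (a generic order-of-magnitude chain offered for illustration, not the model used in the paper): the applied field $H$ produces a magnetostrictive strain $\lambda(H)$ in the core, the strain stresses the shell, and the piezoelectric shell converts that stress into polarization $P$, so the overall magnetoelectric response scales roughly as

$$
\frac{\partial P}{\partial H} \;\sim\; d \, Y \, \frac{\partial \lambda}{\partial H},
$$

where $Y$ is an effective stiffness coupling the core's strain into the shell and $d$ is the shell's piezoelectric coefficient.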

One key to the devices’ effectiveness is their disc shape. Previous attempts had used spherical magnetic nanoparticles, whose magnetoelectric effect was very weak, says Kim. The discs’ anisotropy enhances their magnetostriction more than 1,000-fold, adds Kent.

The team first added their nanodiscs to cultured neurons, which allowed them to activate these cells on demand with short pulses of magnetic field. This stimulation did not require any genetic modification.

They then injected small droplets of a solution containing the magnetoelectric nanodiscs into specific regions of the brains of mice. Simply turning on a relatively weak electromagnet nearby triggered the particles to release a tiny jolt of electricity in that brain region, and the stimulation could be switched on and off remotely with the electromagnet. That electrical stimulation “had an impact on neuron activity and on behavior,” Kim says.

The team found that the magnetoelectric nanodiscs could stimulate a deep brain region, the ventral tegmental area, that is associated with feelings of reward.

The team also stimulated another brain area, the subthalamic nucleus, which is associated with motor control. “This is the region where electrodes typically get implanted to manage Parkinson’s disease,” Kim explains. The researchers successfully demonstrated modulation of motor control through the particles: by injecting nanodiscs in only one hemisphere, they could induce rotations in healthy mice by applying a magnetic field.

The nanodiscs could trigger neuronal activity comparable to that produced by conventional implanted electrodes delivering mild electrical stimulation. The authors achieved subsecond temporal precision for neural stimulation with their method, yet observed significantly reduced foreign-body responses compared to the electrodes, potentially allowing for even safer deep brain stimulation.

The multilayered chemical composition of the new nanodiscs, together with their physical shape and size, is what made precise stimulation possible.

While the researchers successfully increased the magnetostrictive effect, the second part of the process, converting the magnetic effect into an electrical output, still needs more work, Anikeeva says. Although the magnetic response was a thousand times greater, the conversion to an electric impulse was only four times greater than with conventional spherical particles.

“This massive enhancement of a thousand times didn’t completely translate into the magnetoelectric enhancement,” says Kim. “That’s where a lot of the future work will be focused, on making sure that the thousand times amplification in magnetostriction can be converted into a thousand times amplification in the magnetoelectric coupling.”
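
A rough way to see why the two numbers diverge is to write the strain-mediated magnetoelectric coefficient as a product of two responses. This is a generic decomposition for core-shell magnetoelectric composites, offered here for intuition rather than as the specific model used in the paper:

\[
\alpha_{\mathrm{ME}} \;=\; \frac{\partial P}{\partial H} \;=\; \underbrace{\frac{\partial P}{\partial S}}_{\text{piezoelectric conversion}} \times \underbrace{\frac{\partial S}{\partial H}}_{\text{magnetostriction}},
\]

where \(H\) is the applied magnetic field, \(S\) is the strain generated in the magnetic core, and \(P\) is the electrical polarization of the shell. A thousandfold gain in the magnetostrictive factor carries through to \(\alpha_{\mathrm{ME}}\) only if the strain is transferred to the shell and converted to polarization without loss; any bottleneck in that second factor, such as imperfect strain coupling at the core-shell interface, caps the overall enhancement, consistent with the fourfold figure reported above.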

What the team found, in terms of the way the particles’ shape affects their magnetostriction, was quite unexpected. “It’s kind of a new thing that just appeared when we tried to figure out why these particles worked so well,” says Kent.

Anikeeva adds: “Yes, it’s a record-breaking particle, but it’s not as record-breaking as it could be.” That remains a topic for further work, but the team has ideas about how to make further progress.

While these nanodiscs could in principle already be applied to basic research using animal models, to translate them to clinical use in humans would require several more steps, including large-scale safety studies, “which is something academic researchers are not necessarily most well-positioned to do,” Anikeeva says. “When we find that these particles are really useful in a particular clinical context, then we imagine that there will be a pathway for them to undergo more rigorous large animal safety studies.”

The team included researchers affiliated with MIT’s departments of Materials Science and Engineering, Electrical Engineering and Computer Science, Chemistry, and Brain and Cognitive Sciences; the Research Laboratory of Electronics; the McGovern Institute for Brain Research; and the Koch Institute for Integrative Cancer Research; and from the Friedrich-Alexander University of Erlangen, Germany. The work was supported, in part, by the National Institutes of Health, the National Center for Complementary and Integrative Health, the National Institute for Neurological Disorders and Stroke, the McGovern Institute for Brain Research, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience.


A new method makes high-resolution imaging more accessible

Labs that can’t afford expensive super-resolution microscopes could use a new expansion technique to image nanoscale structures inside cells.


A classical way to image nanoscale structures in cells is with high-powered, expensive super-resolution microscopes. As an alternative, MIT researchers have developed a way to expand tissue before imaging it — a technique that allows them to achieve nanoscale resolution with a conventional light microscope.

In the newest version of this technique, the researchers have made it possible to expand tissue 20-fold in a single step. This simple, inexpensive method could pave the way for nearly any biology lab to perform nanoscale imaging.

“This democratizes imaging,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and a member of the Broad Institute of MIT and Harvard and MIT’s Koch Institute for Integrative Cancer Research. “Without this method, if you want to see things with a high resolution, you have to use very expensive microscopes. What this new technique allows you to do is see things that you couldn’t normally see with standard microscopes. It drives down the cost of imaging because you can see nanoscale things without the need for a specialized facility.”

At the resolution achieved by this technique, which is around 20 nanometers, scientists can see organelles inside cells, as well as clusters of proteins.

“Twenty-fold expansion gets you into the realm that biological molecules operate in. The building blocks of life are nanoscale things: biomolecules, genes, and gene products,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; a professor of biological engineering, media arts and sciences, and brain and cognitive sciences; a Howard Hughes Medical Institute investigator; and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

Boyden and Kiessling are the senior authors of the new study, which appears today in Nature Methods. MIT graduate student Shiwei Wang and Tay Won Shin PhD ’23 are the lead authors of the paper.

A single expansion

Boyden’s lab invented expansion microscopy in 2015. The technique requires embedding tissue into an absorbent polymer and breaking apart the proteins that normally hold tissue together. When water is added, the gel swells and pulls biomolecules apart from each other.

The original version of this technique, which expanded tissue about fourfold, allowed researchers to obtain images with a resolution of around 70 nanometers. In 2017, Boyden’s lab modified the process to include a second expansion step, achieving an overall 20-fold expansion. This enables even higher resolution, but the process is more complicated.

“We’ve developed several 20-fold expansion technologies in the past, but they require multiple expansion steps,” Boyden says. “If you could do that amount of expansion in a single step, that could simplify things quite a bit.”

With 20-fold expansion, researchers can get down to a resolution of about 20 nanometers using a conventional light microscope. This allows them to see cell structures like microtubules and mitochondria, as well as clusters of proteins.
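
As a rough rule of thumb (not a formula from the paper), the effective resolution is the microscope’s diffraction-limited resolution divided by the physical expansion factor:

\[
r_{\text{effective}} \;\approx\; \frac{r_{\text{diffraction}}}{E}.
\]

With a conventional light microscope limited to roughly 250-300 nanometers, a fourfold expansion (\(E = 4\)) gives the roughly 70-nanometer figure mentioned earlier, and a 20-fold expansion (\(E = 20\)) lands on the order of the 20-nanometer resolution described here.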

In the new study, the researchers set out to perform 20-fold expansion with only a single step. This meant that they had to find a gel that was both extremely absorbent and mechanically stable, so that it wouldn’t fall apart when expanded 20-fold.

To achieve that, they used a gel assembled from N,N-dimethylacrylamide (DMAA) and sodium acrylate. Unlike previous expansion gels that rely on adding another molecule to form crosslinks between the polymer strands, this gel forms crosslinks spontaneously and exhibits strong mechanical properties. Such gel components previously had been used in expansion microscopy protocols, but the resulting gels could expand only about tenfold. The MIT team optimized the gel and the polymerization process to make the gel more robust, and to allow for 20-fold expansion.

To further stabilize the gel and enhance its reproducibility, the researchers removed oxygen from the polymer solution prior to gelation, which prevents side reactions that interfere with crosslinking. This step requires running nitrogen gas through the polymer solution, which replaces most of the oxygen in the system.

Once the gel is formed, select bonds in the proteins that hold the tissue together are broken and water is added to make the gel expand. After the expansion is performed, target proteins in tissue can be labeled and imaged.

“This approach may require more sample preparation compared to other super-resolution techniques, but it’s much simpler when it comes to the actual imaging process, especially for 3D imaging,” Shin says. “We document the step-by-step protocol in the manuscript so that readers can go through it easily.”

Imaging tiny structures

Using this technique, the researchers were able to image many tiny structures within brain cells, including structures called synaptic nanocolumns. These are clusters of proteins that are arranged in a specific way at neuronal synapses, allowing neurons to communicate with each other via secretion of neurotransmitters such as dopamine.

In studies of cancer cells, the researchers also imaged microtubules — hollow tubes that help give cells their structure and play important roles in cell division. They were also able to see mitochondria (organelles that generate energy) and even the organization of individual nuclear pore complexes (clusters of proteins that control access to the cell nucleus).

Wang is now using this technique to image carbohydrates known as glycans, which are found on cell surfaces and help control cells’ interactions with their environment. This method could also be used to image tumor cells, allowing scientists to glimpse how proteins are organized within those cells, much more easily than has previously been possible.

The researchers envision that any biology lab should be able to use this technique at a low cost, since it relies on standard, off-the-shelf chemicals and common equipment such as confocal microscopes and glove bags, which most labs already have or can easily access.

“Our hope is that with this new technology, any conventional biology lab can use this protocol with their existing microscopes, allowing them to approach resolution that can only be achieved with very specialized and costly state-of-the-art microscopes,” Wang says.

The research was funded, in part, by the U.S. National Institutes of Health, an MIT Presidential Graduate Fellowship, U.S. National Science Foundation Graduate Research Fellowship grants, Open Philanthropy, Good Ventures, the Howard Hughes Medical Institute, Lisa Yang, Ashar Aziz, and the European Research Council.


The way sensory prediction changes under anesthesia tells us how conscious cognition works

A new study adds evidence that consciousness requires communication between sensory and cognitive regions of the brain’s cortex.


Our brains constantly work to predict what’s going on around us, ensuring, for instance, that we can attend to and consider the unexpected. A new study examines how this process works during consciousness and how it breaks down under general anesthesia. The results add evidence to the idea that conscious thought requires synchronized communication — mediated by brain rhythms in specific frequency bands — between basic sensory and higher-order cognitive regions of the brain.

Previously, members of the research team in The Picower Institute for Learning and Memory at MIT and at Vanderbilt University had described how brain rhythms enable the brain to remain prepared to attend to surprises. Cognition-oriented brain regions (generally at the front of the brain) use relatively low-frequency alpha and beta rhythms to suppress processing by sensory regions (generally toward the back of the brain) of stimuli that have become familiar and mundane in the environment (e.g., your co-worker’s music). When sensory regions detect a surprise (e.g., the office fire alarm), they use faster-frequency gamma rhythms to tell the higher regions about it, and the higher regions process that at gamma frequencies to decide what to do (e.g., exit the building).

The new results, published Oct. 7 in the Proceedings of the National Academy of Sciences, show that when animals were under propofol-induced general anesthesia, a sensory region retained the capacity to detect simple surprises, but its communication with a higher cognitive region toward the front of the brain was lost. That loss made the higher region unable to engage in its “top-down” regulation of the sensory region’s activity, leaving it oblivious to simple and more complex surprises alike.

What we've got here is failure to communicate

“What we are doing here speaks to the nature of consciousness,” says co-senior author Earl K. Miller, Picower Professor in The Picower Institute for Learning and Memory and MIT’s Department of Brain and Cognitive Sciences. “Propofol general anesthesia deactivates the top-down processes that underlie cognition. It essentially disconnects communication between the front and back halves of the brain.”

Co-senior author Andre Bastos, an assistant professor in the psychology department at Vanderbilt and a former member of Miller’s MIT lab, adds that the study results highlight the key role of frontal areas in consciousness.

“These results are particularly important given the newfound scientific interest in the mechanisms of consciousness, and how consciousness relates to the ability of the brain to form predictions,” Bastos says.

The brain’s ability to predict is dramatically altered during anesthesia. It was interesting that the areas at the front of the brain associated with cognition were more strongly diminished in their predictive abilities than sensory areas were. This suggests that prefrontal areas help to spark an “ignition” event that allows sensory information to become conscious. Sensory cortex activation by itself does not lead to conscious perception. These observations help us narrow down possible models for the mechanisms of consciousness.

Yihan Sophy Xiong, a graduate student in Bastos’ lab who led the study, says the anesthetic reduces the times in which inter-regional communication within the cortex can occur.

“In the awake brain, brain waves give short windows of opportunity for neurons to fire optimally — the ‘refresh rate’ of the brain, so to speak,” Xiong says. “This refresh rate helps organize different brain areas to communicate effectively. Anesthesia both slows down the refresh rate, which narrows these time windows for brain areas to talk to each other and makes the refresh rate less effective, so that neurons become more disorganized about when they can fire. When the refresh rate no longer works as intended, our ability to make predictions is weakened.”

Learning from oddballs

To conduct the research, the neuroscientists measured the electrical signals, or “spiking,” of hundreds of individual neurons, along with the coordinated rhythms of their aggregated activity (at alpha/beta and gamma frequencies), in two areas on the surface, or cortex, of the brain of two animals as they listened to sequences of tones. Sometimes the sequences would all be the same note (e.g., AAAAA). Sometimes there’d be a simple surprise that the researchers called a “local oddball” (e.g., AAAAB). But sometimes the surprise would be more complicated, or a “global oddball.” For example, after hearing a series of AAAABs, there’d all of a sudden be an AAAAA, which violates the global but not the local pattern.

Prior work has suggested that a sensory region (in this case the temporoparietal area, or Tpt) can spot local oddballs on its own, Miller says. Detecting the more complicated global oddball requires the participation of a higher order region (in this case the frontal eye fields, or FEF).

The animals heard the tone sequences both while awake and while under propofol anesthesia. The waking-state results held no surprises: the researchers reaffirmed that top-down alpha/beta rhythms from FEF carried predictions to Tpt, and that Tpt would increase gamma rhythms when an oddball came up, causing FEF (and the prefrontal cortex) to respond with upticks of gamma activity as well.

But by several measures and analyses, the scientists could see these dynamics break down after the animals lost consciousness.

Under propofol, for instance, spiking activity declined overall. When a local oddball came along, Tpt spiking still increased notably, but spiking in FEF no longer followed suit as it does during wakefulness.

Meanwhile, when a global oddball was presented during wakefulness, the researchers could use software to “decode” representation of that among neurons in FEF and the prefrontal cortex (another cognition-oriented region). They could also decode local oddballs in the Tpt. But under anesthesia the decoder could no longer reliably detect representation of local or global oddballs in FEF or the prefrontal cortex.
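
In essence, the decoding analysis described above is a cross-validated classifier trained on trial-by-trial population activity. The sketch below illustrates that general approach on synthetic spike counts; the array shapes, the logistic-regression classifier, and the chance-level interpretation are illustrative assumptions, not the study’s actual pipeline.

```python
# Minimal sketch of population decoding of oddball vs. standard trials.
# `spike_counts` stands in for an (n_trials, n_neurons) array of spike counts
# from one region (e.g., FEF) in a window after tone onset; the data here is
# synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50
is_oddball = rng.integers(0, 2, size=n_trials)           # 0 = standard, 1 = oddball
spike_counts = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
spike_counts[is_oddball == 1, :10] += 2.0                # fake oddball-driven bump

decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(decoder, spike_counts, is_oddball, cv=5, scoring="accuracy")

# Accuracy near 0.5 (chance) means the population carries no readable oddball
# signal, as reported for FEF under anesthesia; well above 0.5 means it does.
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")
```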

Moreover, when they compared rhythms in the regions across wakeful versus unconscious states, they found stark differences. When the animals were awake, oddballs increased gamma activity in both Tpt and FEF and alpha/beta rhythms decreased. Regular, non-oddball stimulation increased alpha/beta rhythms. But when the animals lost consciousness, the increase in gamma rhythms from a local oddball in Tpt was even greater than when the animals were awake.

“Under propofol-mediated loss of consciousness, the inhibitory function of alpha/beta became diminished and/or eliminated, leading to disinhibition of oddballs in sensory cortex,” the authors wrote.

Other analyses of inter-region connectivity and synchrony revealed that the regions lost the ability to communicate during anesthesia.

In all, the study’s evidence suggests that conscious thought requires coordination across the cortex, from front to back, the researchers wrote.

“Our results therefore suggest an important role for prefrontal cortex activation, in addition to sensory cortex activation, for conscious perception,” the researchers wrote.

In addition to Xiong, Miller, and Bastos, the paper’s other authors are Jacob Donoghue, Mikael Lundqvist, Meredith Mahnke, Alex Major, and Emery N. Brown.

The National Institutes of Health, The JPB Foundation, and The Picower Institute for Learning and Memory funded the study.


Mixing joy and resolve, event celebrates women in science and addresses persistent inequalities

The Kuggie Vallee Distinguished Lectures and Workshops presented inspiring examples of success, even as the event evoked frank discussions of the barriers that still hinder many women in science.


For two days at The Picower Institute for Learning and Memory at MIT, participants in the Kuggie Vallee Distinguished Lectures and Workshops celebrated the success of women in science and shared strategies to persist through, or better yet dissipate, the stiff headwinds women still face in the field.

“Everyone is here to celebrate and to inspire and advance the accomplishments of all women in science,” said host Li-Huei Tsai, Picower Professor in the Department of Brain and Cognitive Sciences and director of the Picower Institute, as she welcomed an audience that included scores of students, postdocs, and other research trainees. “It is a great feeling to have the opportunity to showcase examples of our successes and to help lift up the next generation.”

Tsai earned the honor of hosting the event after she was named a Vallee Visiting Professor in 2022 by the Vallee Foundation. Foundation president Peter Howley, a professor of pathological anatomy at Harvard University, said the global series of lectureships and workshops were created to honor Kuggie Vallee, a former Lesley College professor who worked to advance the careers of women.

During the program Sept. 24-25, speakers and audience members alike made it clear that helping women succeed requires both recognizing their achievements and resolving to change social structures in which they face marginalization.

Inspiring achievements

Lectures on the first day featured two brain scientists who have each led acclaimed discoveries that have been transforming their fields.

Michelle Monje, a pediatric neuro-oncologist at Stanford University whose recognitions include a MacArthur Fellowship, described her lab’s studies of brain cancers in children, which emerge at specific times in development as young brains adapt to their world by wiring up new circuits and insulating neurons with a fatty sheathing called myelin. Monje has discovered that when the precursors to myelinating cells, called oligodendrocyte precursor cells, harbor cancerous mutations, the tumors that arise — called gliomas — can hijack those cellular and molecular mechanisms. To promote their own growth, gliomas tap directly into the electrical activity of neural circuits by forging functional neuron-to-cancer connections, akin to the “synapse” junctions healthy neurons make with each other. Years of her lab’s studies, often led by female trainees, have not only revealed this insidious behavior (and linked aberrant myelination to many other diseases as well), but also revealed specific molecular factors involved. Those findings, Monje said, present completely novel potential avenues for therapeutic intervention.

“This cancer is an electrically active tissue and that is not how we have been approaching understanding it,” she said.

Erin Schuman, who directs the Max Planck Institute for Brain Research in Frankfurt, Germany, and has won honors including the Brain Prize, described her groundbreaking discoveries related to how neurons form and edit synapses along the very long branches — axons and dendrites — that give the cells their exotic shapes. Synapses form very far from the cell body where scientists had long thought all proteins, including those needed for synapse structure and activity, must be made. In the mid-1990s, Schuman showed that the protein-making process can occur at the synapse and that neurons stage the needed infrastructure — mRNA and ribosomes — near those sites. Her lab has continued to develop innovative tools to build on that insight, cataloging the stunning array of thousands of mRNAs involved, including about 800 that are primarily translated at the synapse, studying the diversity of synapses that arise from that collection, and imaging individual ribosomes such that her lab can detect when they are actively making proteins in synaptic neighborhoods.

Persistent headwinds

While the first day’s lectures showcased examples of women’s success, the second day’s workshops turned the spotlight on the social and systemic hindrances that continue to make such achievements an uphill climb. Speakers and audience members engaged in frank dialogues aimed at calling out those barriers, overcoming them, and dismantling them.

Susan Silbey, the Leon and Anne Goldberg Professor of Humanities, Sociology and Anthropology at MIT and professor of behavioral and policy sciences in the MIT Sloan School of Management, told the group that as bad as sexual harassment and assault in the workplace are, the more pervasive, damaging, and persistent headwinds for women across a variety of professions are “deeply sedimented cultural habits” that marginalize their expertise and contributions in workplaces, rendering them invisible to male counterparts, even when they are in powerful positions. High-ranking women in Silicon Valley who answered the “Elephant in the Valley” survey, for instance, reported high rates of demeaning comments and behavior, as well as exclusion from social circles. Even U.S. Supreme Court justices are not immune, she noted, citing research showing that for decades female justices have been interrupted with disproportionate frequency during oral arguments at the court. Silbey’s research has shown that young women entering the engineering workforce often become discouraged by a system that appears meritocratic, but in which they are often excluded from opportunities to demonstrate or be credited for that merit and are paid significantly less.

“Women’s occupational inequality is a consequence of being ignored, having contributions overlooked or appropriated, of being assigned to lower-status roles, while men are pushed ahead, honored and celebrated, often on the basis of women’s work,” Silbey said.

Often relatively small in numbers, women in such workplaces become tokens — visible as different, but still treated as outsiders, Silbey said. Women tend to internalize this status, becoming very cautious about their work while some men surge ahead in more cavalier fashion. Silbey and speakers who followed illustrated the effect this can have on women’s careers in science. Kara McKinley, an assistant professor of stem cell and regenerative biology at Harvard, noted that while the scientific career “pipeline” in some areas of science is full of female graduate students and postdocs, only about 20 percent of natural sciences faculty positions are held by women. Strikingly, women are already significantly depleted in the applicant pools for assistant professor positions, she said. Those who do apply tend to wait until they are more qualified than the men they are competing against. 

McKinley and Silbey each noted that women scientists submit fewer papers to prestigious journals, with Silbey explaining that it’s often because women are more likely to worry that their studies need to tie up every loose end. Yet, said Stacie Weninger, a venture capitalist and president of the F-Prime Biomedical Research Initiative and a former editor at Cell Press, women were also less likely than men to rebut rejections from journal editors, thereby accepting the rejection even though rebuttals sometimes work.

Several speakers, including Weninger and Silbey, said pedagogy must change to help women overcome a social tendency to couch their assertions in caveats when many men speak with confidence and are therefore perceived as more knowledgeable.

At lunch, trainees sat in small groups with the speakers. They shared sometimes harrowing personal stories of gender-related difficulties in their young careers and sought advice on how to persist and remain resilient. Schuman advised the trainees to report mistreatment, even if they aren’t confident that university officials will be able to effect change, to at least make sure patterns of mistreatment get on the record. Reflecting on discouraging comments she experienced early in her career, Monje advised students to build up and maintain an inner voice of confidence and draw upon it when criticism is unfair.

“It feels terrible in the moment, but cream rises,” Monje said. “Believe in yourself. It will be OK in the end.”

Lifting each other up

Speakers at the conference shared many ideas to help overcome inequalities. McKinley described a program she launched in 2020 to ensure that a diversity of well-qualified women and non-binary postdocs are recruited for, and apply for, life sciences faculty jobs: the Leading Edge Symposium. The program identifies and names fellows — 200 so far — and provides career mentoring advice, a supportive community, and a platform to ensure they are visible to recruiters. Since the program began, 99 of the fellows have gone on to accept faculty positions at various institutions.

In a talk tracing the arc of her career, Weninger, who trained as a neuroscientist at Harvard, said she left bench work for a job as an editor because she wanted to enjoy the breadth of science, but also noted that her postdoc salary didn’t even cover the cost of child care. She left Cell Press in 2005 to help lead a task force on women in science that Harvard formed in the wake of comments by then-president Lawrence Summers widely understood as suggesting that women lacked “natural ability” in science and engineering. Working feverishly for months, the task force recommended steps to increase the number of senior women in science, including providing financial support for researchers who were also caregivers at home so they’d have the money to hire a technician. That extra set of hands would afford them the flexibility to keep research running even as they also attended to their families. Notably, Monje said she does this for the postdocs in her lab.

A graduate student asked Silbey at the end of her talk how to change a culture in which traditionally male-oriented norms marginalize women. Silbey said it starts with calling out those norms and recognizing that they are the issue, rather than increasing women’s representation in, or asking them to adapt to, existing systems.

“To make change, it requires that you do recognize the differences of the experiences and not try to make women exactly like men, or continue the past practices and think, ‘Oh, we just have to add women into it’,” she said.

Silbey also praised the Kuggie Vallee event at MIT for assembling a new community around these issues. Women in science need more social networks where they can exchange information and resources, she said.

“This is where an organ, an event like this, is an example of making just that kind of change: women making new networks for women,” she said.


New 3D printing technique creates unique objects quickly and with less waste

By using a 3D printer like an iron, researchers can precisely control the color, shade, and texture of fabricated objects, using only one material.


Multimaterial 3D printing enables makers to fabricate customized devices with multiple colors and varied textures. But the process can be time-consuming and wasteful because existing 3D printers must switch between multiple nozzles, often discarding one material before they can start depositing another.

Researchers from MIT and Delft University of Technology have now introduced a more efficient, less wasteful, and higher-precision technique that leverages heat-responsive materials to print objects that have multiple colors, shades, and textures in one step.

Their method, called speed-modulated ironing, utilizes a dual-nozzle 3D printer. The first nozzle deposits a heat-responsive filament and the second nozzle passes over the printed material to activate certain responses, such as changes in opacity or coarseness, using heat.

By controlling the speed of the second nozzle, the researchers can heat the material to specific temperatures, finely tuning the color, shade, and roughness of the heat-responsive filaments. Importantly, this method does not require any hardware modifications.

The researchers developed a model that predicts the amount of heat the “ironing” nozzle will transfer to the material based on its speed. They used this model as the foundation for a user interface that automatically generates printing instructions which achieve color, shade, and texture specifications.

One could use speed-modulated ironing to create artistic effects by varying the color on a printed object. The technique could also produce textured handles that would be easier to grasp for individuals with weakness in their hands.

“Today, we have desktop printers that use a smart combination of a few inks to generate a range of shades and textures. We want to be able to do the same thing with a 3D printer — use a limited set of materials to create a much more diverse set of characteristics for 3D-printed objects,” says Mustafa Doğa Doğan PhD ’24, co-author of a paper on speed-modulated ironing.

This project is a collaboration between the research groups of Zjenja Doubrovski, assistant professor at TU Delft, and Stefanie Mueller, the TIBCO Career Development Professor in the Department of Electrical Engineering and Computer Science (EECS) at MIT and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Doğan worked closely with lead author Mehmet Ozdemir of TU Delft; Marwa AlAlawi, a mechanical engineering graduate student at MIT; and Jose Martinez Castro of TU Delft. The research will be presented at the ACM Symposium on User Interface Software and Technology.

Modulating speed to control temperature

The researchers launched the project to explore better ways to achieve multiproperty 3D printing with a single material. The use of heat-responsive filaments was promising, but most existing methods use a single nozzle to do printing and heating. The printer always needs to first heat the nozzle to the desired target temperature before depositing the material.

However, heating and cooling the nozzle takes a long time, and there is a danger that the filament in the nozzle might degrade as it reaches higher temperatures.

To prevent these problems, the team developed an ironing technique where material is printed using one nozzle, then activated by a second, empty nozzle which only reheats it. Instead of adjusting the temperature to trigger the material response, the researchers keep the temperature of the second nozzle constant and vary the speed at which it moves over the printed material, slightly touching the top of the layer.
 

[Animation: a rectangular ironing nozzle sweeps across the top layer of a printed block while an infrared inset shows the thermal activity.]


“As we modulate the speed, that allows the printed layer we are ironing to reach different temperatures. It is similar to what happens if you move your finger over a flame. If you move it quickly, you might not be burned, but if you drag it across the flame slowly, your finger will reach a higher temperature,” AlAlawi says.

The MIT team collaborated with the TU Delft researchers to develop the theoretical model that predicts how fast the second nozzle must move to heat the material to a specific temperature.

The model relates the temperature reached by the printed layer to the material’s heat-responsive properties, and from that determines the exact nozzle speed that will achieve certain colors, shades, or textures in the printed object.
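
A simple lumped-capacitance sketch, offered here for intuition rather than as the paper’s actual model, captures why slower passes run hotter. If the nozzle face of width \(w\) moves at speed \(v\), each spot on the layer is heated for a dwell time

\[
t_{\text{dwell}} \;\approx\; \frac{w}{v},
\]

and over that dwell time the layer temperature relaxes toward the nozzle temperature:

\[
T_{\text{layer}} \;\approx\; T_{\text{ambient}} + \left(T_{\text{nozzle}} - T_{\text{ambient}}\right)\left(1 - e^{-t_{\text{dwell}}/\tau}\right),
\]

where \(\tau\) lumps together the contact conductance and the layer’s thermal mass. Halving the speed doubles the dwell time and pushes \(T_{\text{layer}}\) closer to \(T_{\text{nozzle}}\), which is exactly the knob the ironing pass turns.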

“There are a lot of inputs that can affect the results we get. We are modeling something that is very complicated, but we also want to make sure the results are fine-grained,” AlAlawi says.

The team dug into scientific literature to determine proper heat transfer coefficients for a set of unique materials, which they built into their model. They also had to contend with an array of unpredictable variables, such as heat that may be dissipated by fans and the air temperature in the room where the object is being printed.

They incorporated the model into a user-friendly interface that simplifies the scientific process, automatically translating the pixels in a maker’s 3D model into a set of machine instructions that control the speed at which the object is printed and ironed by the dual nozzles.
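
A minimal sketch of that translation step, under the lumped-heating assumption above: map each pixel’s target shade to an activation temperature through a calibration table, then invert the heating model to get an ironing speed per pixel. All names, temperatures, and constants below are hypothetical placeholders, not values from the paper or its interface.

```python
# Sketch: map per-pixel target shades (0 = lightest, 1 = darkest) to ironing speeds.
# The calibration table and the inverted lumped heating model are illustrative
# placeholders, not values taken from the speed-modulated ironing paper.
import numpy as np

def temperature_for_shade(shade: float) -> float:
    """Interpolate an activation temperature (deg C) for a target shade."""
    shades = np.array([0.0, 0.25, 0.5, 0.75, 1.0])         # calibration shades
    temps = np.array([120.0, 140.0, 160.0, 180.0, 200.0])  # hypothetical temperatures
    return float(np.interp(shade, shades, temps))

def speed_for_temperature(t_target: float, t_nozzle: float = 240.0,
                          t_ambient: float = 25.0, width_mm: float = 0.8,
                          tau_s: float = 0.05) -> float:
    """Invert the lumped model: slower passes let the layer reach higher temperatures."""
    frac = (t_target - t_ambient) / (t_nozzle - t_ambient)
    dwell_s = -tau_s * np.log(1.0 - frac)   # contact time needed at each spot
    return width_mm / dwell_s               # required nozzle speed, in mm/s

shade_map = np.array([[0.1, 0.5],           # per-pixel target shades
                      [0.8, 0.3]])
speeds = np.vectorize(lambda s: speed_for_temperature(temperature_for_shade(s)))(shade_map)
print(np.round(speeds, 1))                  # ironing speed for each pixel, in mm/s
```

Darker or more opaque targets map to hotter activation temperatures and therefore to slower passes, consistent with the low-speed versus high-speed behavior described in the examples that follow.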

Faster, finer fabrication

They tested their approach with three heat-responsive filaments. The first, a foaming polymer with particles that expand as they are heated, yields different shades, translucencies, and textures. They also experimented with a filament filled with wood fibers and one with cork fibers, both of which can be charred to produce increasingly darker shades.

The researchers demonstrated how their method could produce objects like water bottles that are partially translucent. To make the water bottles, they ironed the foaming polymer at low speeds to create opaque regions and higher speeds to create translucent ones. They also utilized the foaming polymer to fabricate a bike handle with varied roughness to improve a rider’s grip.

Trying to produce similar objects using traditional multimaterial 3D printing took far more time, sometimes adding hours to the printing process, and consumed more energy and material. In addition, speed-modulated ironing could produce fine-grained shade and texture gradients that other methods could not achieve.

In the future, the researchers want to experiment with other thermally responsive materials, such as plastics. They also hope to explore the use of speed-modulated ironing to modify the mechanical and acoustic properties of certain materials.


Uplifting West African communities, one cashew at a time

GRIA Food Company, founded by Joshua Reed-Diawuoh MBA ’20, ethically sources cashews from the region and sells them internationally to support local food economies.


Ever wonder how your favorite snack was sourced? Joshua Reed-Diawuoh thinks more people should.

Reed-Diawuoh MBA ’20 is the founder and CEO of GRIA Food Company, which partners with companies that ethically source and process food in West Africa to support local food economies and help communities in the region more broadly.

“It’s very difficult for these agribusinesses and producers to start sustainable businesses and build up that value chain in the area,” says Reed-Diawuoh, who started the company as a student in the MIT Sloan School of Management. “We want to support these companies that put in the work to build integrated businesses that are employing people and uplifting communities.”

GRIA, which stands for “Grown in Africa,” is currently selling six types of flavored cashews sourced from Benin, Togo, and Burkina Faso. All of the cashews are certified by Fairtrade International, which means in addition to offering sustainable wages, access to financing, and decent working conditions, the companies receive a “Fairtrade Premium” on top of the selling price that allows them to invest in the long-term health of their communities.

“That premium is transformational,” Reed-Diawuoh says. “The premium goes to the producer cooperatives, or the farmers working the land, and they can invest that in any way they choose. They can put it back into their business, they can start new community development projects, like building schools or improving wastewater infrastructure, whatever they want.”

Cracking the nut

Reed-Diawuoh’s family is from Ghana, and before coming to MIT Sloan, he worked to support agriculture and food manufacturing for countries in Sub-Saharan Africa, with particular focus on uplifting small-scale farmers. That’s where he learned about difficulties with financing and infrastructure constraints that held many companies back.

“I wanted to get my hands dirty and start my own business that contributed to improving agricultural development in West Africa,” Reed-Diawuoh says.

He entered MIT Sloan in 2018, taking entrepreneurship classes and exploring several business ideas before deciding to ethically source produce from farmers and sell directly to consumers. He says MIT Sloan’s Sustainability Business Lab offered particularly valuable lessons for how to structure his business.

In his second year, Reed-Diawuoh was selected for a fellowship at the Legatum Center, which connected him to other entrepreneurs working in emerging markets around the world.

“Legatum was a pivotal milestone for me,” he says. “It provided me with some structure and space to develop this idea. It also gave me an incredible opportunity to take risks and explore different business concepts in a way I couldn’t have done if I was working in industry.”

The business model Reed-Diawuoh settled on for GRIA sources product from agribusiness partners in West Africa that adhere to the strictest environmental and labor standards. Reed-Diawuoh decided to start with cashews because they have many manual processing steps — from shelling to peeling and roasting — that are often done after the cashews are shipped out of West Africa, limiting the growth of local food economies and taking wealth out of communities.

Each of GRIA’s partners, from the companies harvesting cashews to the processing facilities, works directly with farmer cooperatives and small-scale farmers and is certified by Fairtrade International.

“Without proper oversight and regulations, workers oftentimes get exploited, and child labor is a huge problem across the agriculture sector,” Reed-Diawuoh says. “Fairtrade certifications try and take a robust and rigorous approach to auditing all of the businesses and their supply chains, from producers to farmers to processors. They do on-site visits and they audit financial documents. We went through this over the course of a thorough three-month review.”

After importing cashew kernels, GRIA flavors and packages them at a production facility in Boston. Reed-Diawuoh started by selling to small independent retailers in Greater Boston before scaling up GRIA’s online sales. He started ramping up production in the beginning of 2023.

“Every time we sell our product, if people weren’t already familiar with Fairtrade or ethical sourcing, we provide information on our packaging and all of our collateral,” Reed-Diawuoh says. “We want to spread this message about the importance of ethical sourcing and the importance of building up food manufacturing in West Africa in particular, but also in rising economies throughout the world.”

Making ethical sourcing mainstream

GRIA currently imports about a ton of Fairtrade cashews and kernels each quarter, and Reed-Diawuoh hopes to double that number each year for the foreseeable future.

“For each pound, we pay premiums for the kernels, and that supports this ecosystem where producers get compensated fairly for their work on the land, and agribusinesses are able to build more robust and profitable business models, because they have an end market for these Fairtrade-certified products.”

Reed-Diawuoh is currently trying out different packaging and flavors and is in discussions with partners to expand production capacity and move into Ghana. He’s also exploring corporate collaborations and has provided MIT with product over the past two years for conferences and other events.

“We’re experimenting with different growth strategies,” Reed-Diawuoh says. “We’re very much still in startup mode, but really trying to ramp up our sales and production.”

As GRIA scales, Reed-Diawuoh hopes it pushes consumers to start asking more of their favorite food brands.

“It’s absolutely critical that, if we’re sourcing produce in markets like the U.S. from places like West Africa, we’re hyper-focused on doing it in an ethical manner,” Reed-Diawuoh says. “The overall goal of GRIA is to ensure we are adhering to and promoting strict sourcing standards and being rigorous and thoughtful about the way we import product.”


Jane-Jane Chen: A model scientist who inspires the next generation

A research scientist and internationally recognized authority in the field of blood cell development reflects on 45 years at MIT.


Growing up in Taiwan, Jane-Jane Chen excelled at math and science, which, at that time, were promoted heavily by the government, and were taught at a high level. Learning rudimentary English as well, the budding scientist knew she wanted to come to the United States to continue her studies, after she earned a bachelor of science in agricultural chemistry from the National Taiwan University in Taipei.

But the journey to becoming a respected scientist, with many years of notable National Institutes of Health (NIH) and National Science Foundation-funded research findings, would require Chen to be uncommonly determined, to move far from her childhood home, to overcome cultural obstacles — and to have the energy to be a trailblazer — in a field where barriers to being a woman in science were significantly higher than they are today.

Today, Chen is looking back on her journey, and on her long career as a principal research scientist at the MIT Institute for Medical Engineering and Science (IMES), a position from which she recently retired after 45 dedicated years.

At MIT, Chen established herself as an internationally recognized authority in the field of blood cell development — specifically red blood cells, says Lee Gehrke, the Hermann L.F. Helmholtz Professor and core faculty in IMES, professor of microbiology and immunobiology and health science and technology at Harvard Medical School, and one of the scientists Chen worked with most closely. 

“Red cells are essential because they carry oxygen to our cells and tissues, requiring iron in the form of a co-factor called heme,” Gehrke says. “Both insufficient heme availability and excess heme are detrimental to red cell development, and Dr. Chen explored the molecular mechanisms allowing cells to adapt to variable heme levels to maintain blood cell production.”

During her MIT career, Chen produced potent biochemistry research on the heme-regulated eIF2 alpha kinase (originally discovered as the heme-regulated inhibitor of translation, HRI) and on the regulation of gene expression at the level of translation as it relates to anemia.

“Dr. Chen’s signature discovery is the molecular cloning of the cDNA of the heme regulated inhibitor protein (HRI), a master regulatory protein in gene expression under stress and disease conditions,” Gehrke says, adding that Chen “subsequently devoted her career to defining a molecular and biochemical understanding of this key protein kinase” and that she “has also contributed several invited review articles on the subject of red cell development, and her papers are seminal contributions to her field.”

Forging her path

Shortly after graduating college, in 1973, Chen received a scholarship to come to California to study for her PhD in biochemistry at the School of Medicine of the University of Southern California. In Taiwan, Chen recalls, the demographic balance between male and female students was even, about 50 percent for each. Once she was in medical school in the United States, she found there were fewer female students, closer to 30 percent at that time, she recalls.

But she says she was fortunate to have important female mentors while at USC, including her PhD advisor, Mary Ellen Jones, a renowned biochemist notable for her discovery of carbamyl phosphate, a chemical substance that is key to the biosynthesis of pyrimidine nucleotides as well as of arginine and urea. Jones, whom The New York Times called a “crucial researcher on DNA” and a foundational basic cancer researcher, had worked with eventual Nobel laureate Fritz Lipmann at Massachusetts General Hospital.

When Chen arrived, while there were other Taiwanese students at USC, there were not many at the medical school. Chen says she bonded with a young female scientist and student from Hong Kong and with another female student who was Korean and Chinese, but who was born in America. Forming these friendships was crucial for blunting the isolation she could sometimes feel as a newcomer to America, particularly her connection with the American-born young woman: “She helped me a lot with getting used to the language,” and the culture, Chen says. “It was very hard to be so far away from my family and friends,” she adds. “It was the very first time I had left home. By coincidence, I had a very nice roommate who was not Chinese, but knew the Chinese language conversationally, so that was so lucky … I still have the letters that my parents wrote to me. I was the only girl, and the eldest child (Chen has three younger brothers), so it was hard for all of us.”

“Mostly, the culture I learned was in the lab,” Chen remembers. “I had to work a long day in the lab, and I knew it was such a great opportunity — to go to seminars with professors to listen to speakers who had won, or would win, Nobel Prizes. My monthly living stipend was $300, so that had to stretch far. In my second year, more of my college friends had come to USC and Caltech, and I began to have more interactions with other Taiwanese students who were studying here.”

Chen's first scientific discovery, made in Jones’ laboratory, was that the fourth enzyme of pyrimidine biosynthesis, dihydroorotate dehydrogenase, is localized in the inner membrane of the mitochondria. As it more recently turned out, this enzyme plays dual roles, not only in pyrimidine biosynthesis but also in cellular redox homeostasis, and has been demonstrated to be an important target for the development of cancer treatments.

Coming to MIT

After receiving her degree, Chen received a postdoctoral fellowship to work at the Roche Institute of Molecular Biology, in New Jersey, for nine months. In 1979, she married Zong-Long Liau, who was then working at MIT Lincoln Laboratory, from where he also recently retired. She accepted a postdoctoral position to continue her scientific training and pursuit at the laboratory of Irving M. London at MIT, and Jane-Jane and Zong-Long have lived in the Boston area ever since, raising two sons.

Looking back at her career, Chen says she is most proud of “being an established woman scientist with decades of NIH findings, and for being a mother of two wonderful sons.” During her time at MIT and IMES, she has worked with many renowned scientists, including Gehrke and London, professor of biology at MIT, professor of medicine at Harvard Medical School (HMS), founding director of the Harvard-MIT Program in Health Sciences and Technology (HST), and a recognized expert in molecular regulation of hemoglobin synthesis. She says that she is also indebted to the colleagues and collaborators at HMS and Children’s Hospital Boston for their scientific interest and support when her research branched into the field of hematology, far different from her expertise in biochemistry. All of them are HST-educated physician scientists, including Stuart H. Orkin, Nancy C. Andrews, Mark D. Fleming, and Vijay G. Sankaran.

“We will miss Dr. Chen’s sage counsel on all matters scientific and communal,” says Elazer R. Edelman, the Edward J. Poitras Professor in Medical Engineering and Science, and the director of the Center for Clinical and Translational Research (CCTR), who was the director of IMES when Chen retired in June. “For generations, she has been an inspiration and guide to generations of students and established leaders across multiple communities — a model for all.”

She says her life in retirement “is a work in progress” — but she is working on a scientific review article, so that she can have “my last words on the research topics of my lab for the past 40 years.” Chen is pondering writing a memoir “reflecting on the journey of my life thus far, from Taiwan to MIT.” She also plans to travel to Taiwan more frequently, to better nurture and treasure the relationships with her three younger brothers, one of whom lives in Los Angeles.

She says that in looking back, she is grateful to have participated in a special grant application, awarded by the National Science Foundation, aimed at helping women scientists get their careers back on track after having a family. And she says she also remembers the advice of a female scientist in Jones’ lab during her last year of graduate study, who had stepped back from her research for a while after having two children: “She was not happy that she had done that, and she told me: Never drop out, try to always keep your hands in the research, and the work. So that is what I did.”


MIT Energy and Climate Club mobilizes future leaders to address global climate issues

One of the largest MIT clubs sees itself as “the umbrella of all things related to energy and climate on campus.”


One of MIT’s missions is helping to solve the world’s greatest problems, with a major focus on one of the most pressing issues facing the world today: climate change. The MIT Energy and Climate Club (MITEC), formerly known as the MIT Energy Club, has been working since 2004 to inform and educate the entire MIT community about this urgent issue and other related matters.

MITEC, one of the largest clubs on campus, has hundreds of active members from every major, including both undergraduate and graduate students. With a broad reach across the Institute, MITEC is the hub for thought leadership and relationship-building across campus.

The club’s co-presidents Laurențiu Anton, doctoral candidate in electrical engineering and computer science; Rosie Keller, an MBA student in the MIT Sloan School of Management; and Thomas Lee, doctoral candidate in the Institute for Data, Systems, and Society, say that faculty, staff, and alumni are also welcome to join and interact with the continuously growing club.

While they closely collaborate on all aspects of the club, each of the co-presidents has a focus area to support the student managing directors and vice presidents for several of the club’s committees. Keller oversees the External Relations, Social, Launchpad, and Energy and Climate Hackathon leadership teams. Lee supports the leadership team for next spring’s Energy Conference. He also assists the club treasurer on budget and finance and guides the industry Sponsorships team. Anton oversees marketing, community and education as well as the Energy and Climate Night and Energy and Climate Career Fair leadership teams.

“We think of MITEC as the umbrella of all things related to energy and climate on campus. Our goal is to share actionable information and not just have discussions. We work with other organizations on campus, including the MIT Environmental Solutions Initiative, to bring awareness,” says Anton. “Our Community and Education team is currently working with the MIT ESI [Environmental Solutions Initiative] to create an ecosystem map that we’re excited to produce for the MIT community.”

To share their knowledge and get more people interested in solving climate and energy problems, each year MITEC hosts a variety of events including the MIT Energy and Climate Night, the MIT Energy and Climate Hack, the MIT Energy and Climate Career Fair, and the MIT Energy Conference to be held next spring March 3-4. The club also offers students the opportunity to gain valuable work experience while engaging with top companies, such as Constellation Energy and GE Vernova, on real climate and energy issues through their Launchpad Program.

Founded in 2006, the annual MIT Energy Conference is the largest student-run conference in North America focused on energy and climate issues, where hundreds of participants gather every year with the CEOs, policymakers, investors, and scholars at the forefront of the global energy transition.

“The 2025 MIT Energy Conference’s theme is ‘Breakthrough to Deployment: Driving Climate Innovation to Market’ — which focuses on the importance of both cutting-edge research innovation as well as large-scale commercial deployment to successfully reach climate goals,” says Lee.

Anton notes that the first of MITEC’s four flagship events is the MIT Energy and Climate Night. This research symposium, which takes place at the MIT Museum every fall, will be held on Nov. 8. The club invites a select number of keynote speakers and hosts several dozen student posters. Guests can walk around and engage with the students, and in return the students get practice showcasing their research. The club’s career fair will take place in the spring semester, shortly after Independent Activities Period.

MITEC also provides members opportunities to meet with companies that are working to improve the energy sector, which helps to slow down, as well as adapt to, the effects of climate change.

“We recently went to Provincetown and toured Eversource’s battery energy storage facility. This helped open doors for club members,” says Keller. “The Provincetown battery helps address grid reliability problems after extreme storms on Cape Cod — which speaks to energy’s connection to both the mitigation and adaptation aspects of climate change,” adds Lee.

“MITEC is also a great way to meet other students at MIT that you might not otherwise have a chance to,” says Keller.

“We’d always welcome more undergraduate students to join MITEC. There are lots of leadership opportunities within the club for them to take advantage of and build their resumes. We also have good and growing collaboration between different centers on campus such as the Sloan Sustainability Initiative and the MIT Energy Initiative. They support us with resources, introductions, and help amplify what we're doing. But students are the drivers of the club and set the agendas,” says Lee.

All three co-presidents are excited to hear that MIT President Sally Kornbluth wants to bring climate change solutions to the next level, and that she recently launched The Climate Project at MIT to kick off the Institute’s major new effort to accelerate and scale up climate change solutions.

“We look forward to connecting with the new directors of the Climate Project at MIT and Interim Vice President for Climate Change Richard Lester in the near future. We are eager to explore how MITEC can support and collaborate with the Climate Project at MIT,” says Anton.

Lee, Keller, and Anton want MITEC to continue fostering solutions to climate issues. They emphasized that while individual actions like bringing your own thermos, using public transportation, or recycling are necessary, there’s a bigger picture to consider. They encourage the MIT community to think critically about the infrastructure and extensive supply chains behind the products everyone uses daily.

“It’s not just about bringing a thermos; it’s also understanding the life cycle of that thermos, from production to disposal, and how our everyday choices are interconnected with global climate impacts,” says Anton.

“Everyone should get involved with this worldwide problem. We’d like to see more people think about how they can use their careers for change. To think how they can navigate the type of role they can play — whether it’s in finance or on the technical side. I think exploring what that looks like as a career is also a really interesting way of thinking about how to get involved with the problem,” says Keller.

“MITEC’s newsletter reaches more than 4,000 people. We’re grateful that so many people are interested in energy and climate change,” says Anton.


The changing geography of “energy poverty”

Study of the U.S. shows homes in the South and Southwest could use more aid for energy costs, due to a growing need for air conditioning in a warming climate.


A growing portion of Americans who are struggling to pay for their household energy live in the South and Southwest, reflecting a climate-driven shift away from heating needs and toward air conditioning use, an MIT study finds.

The newly published research also reveals that a major U.S. federal program that provides energy subsidies to households, by assigning block grants to states, does not yet fully match these recent trends.

The work evaluates the “energy burden” on households, which reflects the percentage of income needed to pay for energy necessities, from 2015 to 2020. Households with an energy burden greater than 6 percent of income are considered to be in “energy poverty.” With climate change, rising temperatures are expected to add financial stress in the South, where air conditioning is increasingly needed. Meanwhile, milder winters are expected to reduce heating costs in some colder regions.
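As a back-of-the-envelope illustration of that definition (the household figures below are hypothetical, not drawn from the study), the burden calculation and the 6 percent threshold look like this:

```python
# Minimal sketch of the energy-burden calculation described above.
# The household figures are hypothetical examples, not study data.

ENERGY_POVERTY_THRESHOLD = 0.06  # a burden above 6 percent of income counts as energy poverty

def energy_burden(annual_energy_cost: float, annual_income: float) -> float:
    """Fraction of household income needed to pay for energy necessities."""
    return annual_energy_cost / annual_income

households = {
    "household_A": {"energy_cost": 2400, "income": 30000},  # 8 percent burden
    "household_B": {"energy_cost": 1800, "income": 60000},  # 3 percent burden
}

for name, h in households.items():
    burden = energy_burden(h["energy_cost"], h["income"])
    print(f"{name}: burden = {burden:.1%}, energy poverty = {burden > ENERGY_POVERTY_THRESHOLD}")
```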

“From 2015 to 2020, there is an increase in burden generally, and you do also see this southern shift,” says Christopher Knittel, an MIT energy economist and co-author of a new paper detailing the study’s results. About federal aid, he adds, “When you compare the distribution of the energy burden to where the money is going, it’s not aligned too well.”

The paper, “U.S. federal resource allocations are inconsistent with concentrations of energy poverty,” is published today in Science Advances.

The authors are Carlos Batlle, a professor at Comillas University in Spain and a senior lecturer with the MIT Energy Initiative; Peter Heller SM ’24, a recent graduate of the MIT Technology and Policy Program; Knittel, the George P. Shultz Professor at the MIT Sloan School of Management and associate dean for climate and sustainability at MIT; and Tim Schittekatte, a senior lecturer at MIT Sloan.

A scorching decade

The study, which grew out of graduate research that Heller conducted at MIT, deploys a machine-learning estimation technique that the scholars applied to U.S. energy use data.

Specifically, the researchers took a sample of about 20,000 households from the U.S. Energy Information Administration’s Residential Energy Consumption Survey, which includes a wide variety of demographic characteristics about residents, along with building-type and geographic information. Then, using the U.S. Census Bureau’s American Community Survey data for 2015 and 2020, the research team estimated the average household energy burden for every census tract in the lower 48 states — 73,057 in 2015, and 84,414 in 2020.
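A rough sketch of that kind of estimation pipeline, with toy data, simplified column names, and a generic regressor standing in for the authors’ actual model, might look like this:

```python
# Hedged sketch of the tract-level estimation idea: fit a model on household
# survey records (RECS-style), then predict average burden for census tracts
# using ACS-style covariates. Column names, toy data, and the model choice are
# illustrative assumptions, not the authors' actual specification.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Toy household-level training records standing in for the ~20,000 RECS households.
recs = pd.DataFrame({
    "income":         [25000, 40000, 90000, 35000, 60000, 20000],
    "household_size": [3, 2, 4, 1, 2, 5],
    "cooling_days":   [120, 60, 30, 150, 45, 170],   # crude proxy for climate/region
    "energy_burden":  [0.09, 0.05, 0.02, 0.08, 0.03, 0.11],
})
features = ["income", "household_size", "cooling_days"]

model = GradientBoostingRegressor(random_state=0)
model.fit(recs[features], recs["energy_burden"])

# Toy tract-level covariates standing in for American Community Survey data.
tracts = pd.DataFrame({
    "tract_id":       ["T001", "T002"],
    "income":         [30000, 75000],
    "household_size": [3, 2],
    "cooling_days":   [140, 40],
})
tracts["estimated_burden"] = model.predict(tracts[features])
tracts["energy_poverty"] = tracts["estimated_burden"] > 0.06  # 6 percent threshold
print(tracts)
```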

That allowed the researchers to chart the changes in energy burden in recent years, including the shift toward a greater energy burden in southern states. In 2015, Maine, Mississippi, Arkansas, Vermont, and Alabama were the five states (ranked in descending order) with the highest energy burden across census tracts. In 2020, that had shifted somewhat, with Maine and Vermont dropping on the list and southern states increasingly having a larger energy burden. That year, the top five states in descending order were Mississippi, Arkansas, Alabama, West Virginia, and Maine.

The data also reflect an urban-rural shift. In 2015, 23 percent of the census tracts where the average household was living in energy poverty were urban. That figure shrank to 14 percent by 2020.

All told, the data are consistent with the picture of a warming world, in which milder winters in the North, Northwest, and Mountain West require less heating fuel, while more extreme summer temperatures in the South require more air conditioning.

“Who’s going to be harmed most from climate change?” asks Knittel. “In the U.S., not surprisingly, it’s going to be the southern part of the U.S. And our study is confirming that, but also suggesting it’s the southern part of the U.S. that’s least able to respond. If you’re already burdened, the burden’s growing.”

An evolution for LIHEAP?

In addition to identifying the shift in energy needs during the last decade, the study also illuminates a longer-term change in U.S. household energy needs, dating back to the 1980s. The researchers compared the present-day geography of U.S. energy burden to the help currently provided by the federal Low Income Home Energy Assistance Program (LIHEAP), which dates to 1981.

Federal aid for energy needs actually predates LIHEAP, but the current program was introduced in 1981, then updated in 1984 to include cooling needs such as air conditioning. When the formula was updated in 1984, two “hold harmless” clauses were also adopted, guaranteeing states a minimum amount of funding.

Still, LIHEAP’s parameters also predate the rise of temperatures over the last 40 years, and the current study shows that, compared to the current landscape of energy poverty, LIHEAP distributes relatively less of its funding to southern and southwestern states.

“The way Congress uses formulas set in the 1980s keeps funding distributions nearly the same as they were in the 1980s,” Heller observes. “Our paper illustrates the shift in need that has occurred over the decades since then.”

Currently, it would take a fourfold increase in LIHEAP to ensure that no U.S. household experiences energy poverty. But the researchers tested out a new funding design, which would help the worst-off households first, nationally, ensuring that no household would have an energy burden greater than 20.3 percent.

“We think that’s probably the most equitable way to allocate the money, and by doing that, you now have a different amount of money that should go to each state, so that no one state is worse off than the others,” Knittel says.
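One heavily simplified way to sketch such a “worst-off first” allocation (hypothetical household data and budget; not the paper’s actual optimization) is to search for the lowest burden cap a fixed budget can guarantee and fund each household just enough to reach it:

```python
# Simplified sketch of a "worst-off first" allocation: find the lowest energy-burden
# cap a fixed budget can guarantee, then subsidize each household down to that cap.
# Household data and the budget are hypothetical; this is not the paper's model.

def cost_to_reach_cap(households, cap):
    """Total subsidy needed so that no household's burden exceeds `cap`."""
    return sum(max(0.0, h["energy_cost"] - cap * h["income"]) for h in households)

def lowest_guaranteed_cap(households, budget, tol=1e-6):
    """Binary search for the smallest burden cap affordable within the budget."""
    lo, hi = 0.0, max(h["energy_cost"] / h["income"] for h in households)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cost_to_reach_cap(households, mid) <= budget:
            hi = mid   # cap is affordable; try to push it lower
        else:
            lo = mid   # too expensive; relax the cap
    return hi

households = [
    {"energy_cost": 3000, "income": 12000},  # 25 percent burden
    {"energy_cost": 1500, "income": 20000},  # 7.5 percent burden
    {"energy_cost": 1000, "income": 50000},  # 2 percent burden
]
cap = lowest_guaranteed_cap(households, budget=800.0)
print(f"Lowest guaranteed burden cap: {cap:.1%}")
```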

And while the new distribution concept would require a certain amount of subsidy reallocation among states, it would be with the goal of helping all households avoid a certain level of energy poverty, across the country, at a time of changing climate, warming weather, and shifting energy needs in the U.S.

“We can optimize where we spend the money, and that optimization approach is an important thing to think about,” Knittel says. 


Institute Professor Emeritus John Little, a founder of operations research and marketing science, dies at 96

The MIT Sloan scholar was a part of the Institute community for nearly eight decades.


MIT Institute Professor Emeritus John D.C. Little ’48, PhD ’55, an inventive scholar whose work significantly influenced operations research and marketing, died on Sept. 27, at age 96. Having entered MIT as an undergraduate in 1945, he was part of the Institute community over a span of nearly 80 years and served as a faculty member at the MIT Sloan School of Management since 1962.

Little’s career was characterized by innovative computing work, an interdisciplinary and expansive research agenda, and research that was both theoretically robust and useful in practical terms for business managers. Little had a strong commitment to supporting and mentoring others at the Institute, and played a key role in helping shape the professional societies in his fields, such as the Institute for Operations Research and the Management Sciences (INFORMS).

He may be best known for his formulation of “Little’s Law,” a concept applied in operations research that generalizes the dynamics of queuing. Broadly, the theorem, expressed as L = λW, states that the number of customers or others waiting in a line equals their arrival rate multiplied by their average time spent in the system. This result can be applied to many systems, from manufacturing to health care to customer service, and helps quantify and fix business bottlenecks, among other things.
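As a worked illustration of the relation (the numbers are hypothetical):

```python
# Little's Law: L = lambda * W, relating the average number in a system (L),
# the average arrival rate (lambda), and the average time spent in the system (W).
# The numbers below are hypothetical, for illustration only.

arrival_rate = 12.0     # customers arriving per hour (lambda)
time_in_system = 0.25   # average of 15 minutes, i.e. 0.25 hours (W)

L = arrival_rate * time_in_system
print(f"Average number of customers in the system: L = {L:.1f}")

# The relation can be rearranged to recover any one quantity from the other two,
# e.g. the average time in the system implied by an observed queue length:
observed_L = 6.0
implied_wait = observed_L / arrival_rate
print(f"Implied average time in system: W = {implied_wait:.2f} hours")
```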

Little is widely considered to have been instrumental in the development of both operations research and marketing science, where he also made a range of advances, starting in the 1960s. Drawing on innovations in computer modeling, he analyzed a broad range of issues in marketing, from customer behavior and brand loyalty to firm-level decisions, often about advertising deployment strategy. Little’s research methods evolved to incorporate the new streams of data that information technology increasingly made available, such as the purchasing information obtained from barcodes.

“John Little was a mentor and friend to so many of us at MIT and beyond,” says Georgia Perakis, the interim John C. Head III Dean of MIT Sloan. “He was also a pioneer — as the first doctoral student in the field of operations research, as the founder of the Marketing Group at MIT Sloan, and with his research, including Little’s Law, published in 1961. Many of us at MIT Sloan are lucky to have followed in John’s footsteps, learning from his research and his leadership both at the school and in many professional organizations, including the INFORMS society where he served as its first president. I am grateful to have known and learned from John myself.”

Little’s longtime colleagues in the marketing group at MIT Sloan shared those sentiments.

“John was truly an academic giant with pioneering work in queuing, optimization, decision sciences, and marketing science,” says Stephen Graves, the Abraham J. Siegel Professor Post Tenure of Management at MIT Sloan. “He also was an exceptional academic leader, being very influential in the shaping and strengthening of the professional societies for operations research and for marketing science. And he was a remarkable person as a mentor and colleague, always caring, thoughtful, wise, and with a New England sense of humor.”

John Dutton Conant Little was born in Boston and grew up in Andover, Massachusetts. At MIT he majored in physics and edited the campus’ humor magazine. Working at General Electric after graduation, he met his future wife, Elizabeth Alden PhD ’54; they both became doctoral students in physics at MIT, starting in 1951.

Alden studied ferroelectric materials, which exhibit complex properties of polarization, and produced a thesis titled, “The Dynamic Behavior of Domain Walls in Barium Titanate,” working with Professor Arthur R. von Hippel. Little, advised by Professor Philip Morse, used MIT’s famous Whirlwind I computer for his dissertation work. His thesis, titled “Use of Storage Water in a Hydroelectric System,” modeled the optimally low-cost approach to distributing water held by dams. It was a thesis in both physics and operations research, and appears to be the first one ever granted in operations research.

Little then served in the U.S. Army and spent five years on the faculty at what is now Case Western Reserve University, before returning to the Institute in 1962 as an associate professor of operations research and management at MIT Sloan. Having worked at the leading edge of using computing to tackle operations problems, Little began applying computer modeling to marketing questions. His research included models of consumer choice and promotional spending, among other topics.

Little published several dozen scholarly papers across operations research and marketing, and co-edited, with Robert C. Blattberg and Rashi Glazer, the 1994 book “The Marketing Information Revolution,” published by Harvard Business School Press. Ever the wide-ranging scholar, he even published several studies about optimizing traffic signals and traffic flow.

Still, in addition to Little’s Law, some of his key work came from studies in marketing and management. In an influential 1970 paper in Management Science,  Little outlined the specifications that a good data-driven management model should have, emphasizing that business leaders should be given tools they could thoroughly grasp.

In a 1979 paper in Operations Research, Little described the elements needed to develop a robust model of ad expenditures for businesses, such as the geographic distribution of spending, and a firm’s spending over time. And in a 1983 paper with Peter Guadagni, published in Marketing Science, Little used the advent of scanner data for consumer goods to build a powerful model of consumer behavior and brand loyalty, which has remained influential.

Separate though these topics might be, Little always sought to explain the dynamics at work in each case. As a scholar, he “had the vision to perceive marketing as a source of interesting and relevant unexplored opportunities for OR [operations research] and management science,” wrote Little’s MIT colleagues John Hauser and Glen Urban in a biographical chapter about him, “Profile of John D.C. Little,” for the book “Profiles in Operations Research,” published in 2011. In it, Hauser and Urban detail the lasting contributions these papers and others made.

In 1967, Little co-founded the firm Management Decision Systems, which modeled marketing problems for major companies and was later purchased by Information Resources, Inc., on whose board Little served.

In 1989, Little was named Institute Professor, MIT’s highest faculty honor. He had previously served as director of the MIT Operations Research Center. At MIT Sloan he was the former head of the Management Science Area and the Behavioral and Policy Sciences Area.

For all his productivity as a scholar, Little also served as a valued mentor to many, while opening his family home outside of Boston to overseas-based faculty and students for annual Thanksgiving dinners. He also took pride in encouraging women to enter management and academia. In just one example, he was the principal faculty advisor for the late Asha Seth Kapadia SM ’65, one of the first international and female students at Sloan, who studied queuing theory and later became a longtime professor at the University of Texas School of Public Health.

Additionally, current MIT Sloan professor Juanjuan Zhang credits Little for inspiring her interest in the field; today Zhang is the John D.C. Little Professor of Marketing at MIT Sloan.

"John was a larger-than-life person," Zhang says. "His foundational work transformed marketing from art, to art, science, and engineering, making it a process that ordinary people can follow to succeed. He democratized marketing.”

Little’s presence as an innovative, interdisciplinary scholar who also encouraged others to pursue their own work is fundamental to the way he is remembered at MIT.

“John pioneered in operations research at MIT and is widely known for Little’s Law, but he did even more work in marketing science,” said Urban, an emeritus dean of MIT Sloan and the David Austin Professor in Marketing, Emeritus. “He founded the field of operations research modeling in marketing, with analytic work on adaptive advertising, and did fundamental work on marketing response. He was true to our MIT philosophy of ‘mens et manus’ [‘mind and hand’] as he proposed that models should be usable by managers as well as being theoretically strong. Personally, John hired me as an assistant professor in 1966 and supported my work in the following 55 years at MIT. I am grateful to him, and sad to lose a friend and mentor.”

Hauser, the Kirin Professor of Marketing at MIT Sloan, added: “John made seminal contributions to many fields from operations to management science to founding marketing science. More importantly, he was a unique colleague who mentored countless faculty and students and who, by example, led with integrity and wit. I, and many others, owe our love of operations research and marketing science to John.”

In recognition of his scholarship, Little was elected to the National Academy of Engineering, and was a fellow of the American Association for the Advancement of Science. Among other honors, the American Marketing Association gave Little its Charles Parlin Award for contributions to the practice of marketing research, in 1979, and its Paul D. Converse Award for lifetime achievement, in 1992. Little was the first president of INFORMS, which honored him with its George E. Kimball Medal. Little was also president of The Institute of Management Sciences (TIMS), and the Operations Research Society of America (ORSA).

An avid jogger, biker, and seafood chef, Little was dedicated to his family. He was predeceased by his wife, Elizabeth, and his two sisters, Margaret and Francis. Little is survived by his children Jack, Sarah, Thomas, and Ruel; eight grandchildren; and two great-grandchildren. Arrangements for a memorial service have been entrusted to the Dee Funeral Home in Concord, Massachusetts.


Study finds mercury pollution from human activities is declining

Models show that an unexpected reduction in human-driven emissions led to a 10 percent decline in atmospheric mercury concentrations.


MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.

In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020.

They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause.

Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.

“Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.

The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury.

However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.

“It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.

Mercury mismatch

The Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining.

The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from information from national inventories, had increased despite international efforts to reduce them.

This was puzzling news for environmental scientists like Selin. Data from monitoring stations showed atmospheric mercury concentrations declining during the same period.

Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants.

“The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.

Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources.

At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to other polluting gases like methane and nitrous oxide.

“One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.

Multifaceted models

The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends.

By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.

Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline.  Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.
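As a caricature of what a biogeochemical box model does (a minimal two-reservoir sketch with made-up rate constants and emissions figures, far simpler than the models used in the study):

```python
# Heavily simplified two-box sketch: an atmosphere reservoir and a surface
# (ocean/land) reservoir exchanging mercury, driven by an anthropogenic
# emissions scenario. All rate constants, reservoir sizes, and emissions
# figures are made up for illustration; the study's models are far more detailed.

def run_scenario(emissions_by_year, k_dep=0.8, k_reemit=0.0005,
                 atm0=4000.0, surf0=300000.0):
    """Step the two reservoirs forward one year at a time (units: Mg of mercury)."""
    atm, surf = atm0, surf0
    history = []
    for e in emissions_by_year:
        deposition = k_dep * atm      # atmosphere -> surface
        reemission = k_reemit * surf  # legacy mercury re-emitted from the surface
        atm += e + reemission - deposition
        surf += deposition - reemission
        history.append(atm)
    return history

years = list(range(2005, 2021))
declining = [2200 - 30 * i for i in range(len(years))]  # emissions fall each year
flat = [2200] * len(years)                              # emissions stay constant

print("2020 atmospheric burden, declining emissions:", round(run_scenario(declining)[-1]))
print("2020 atmospheric burden, flat emissions:     ", round(run_scenario(flat)[-1]))
```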

For instance, they tested the hypothesis that an additional environmental sink might be removing more mercury from the atmosphere than previously thought. The models allowed them to assess whether an unknown sink of that magnitude would be plausible.

“As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.

Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many stations the researchers evaluated are no longer operational because of a lack of funding.

While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match up with actual data, they have a few hypotheses.

One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. But there was still a gap between observations and estimates.

They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and mercury-containing products.

Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made mercury emissions.

In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.

“We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.

In the future, researchers from multiple countries, including MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says.

This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.


Bubble findings could unlock better electrode and electrolyzer designs

A new study of bubbles on electrode surfaces could help improve the efficiency of electrochemical processes that produce fuels, chemicals, and materials.


Industrial electrochemical processes that use electrodes to produce fuels and chemical products are hampered by the formation of bubbles that block parts of the electrode surface, reducing the area available for the active reaction. Such blockage reduces the performance of the electrodes by anywhere from 10 to 25 percent.

But new research reveals a decades-long misunderstanding about the extent of that interference. The findings show exactly how the blocking effect works and could lead to new ways of designing electrode surfaces to minimize inefficiencies in these widely used electrochemical processes.

It has long been assumed that the entire area of the electrode shadowed by each bubble would be effectively inactivated. But it turns out that only a much smaller area — roughly the area where the bubble actually contacts the surface — is blocked from electrochemical activity. The new insights could lead directly to new ways of patterning the surfaces to minimize the contact area and improve overall efficiency.

The findings are reported today in the journal Nanoscale, in a paper by recent MIT graduate Jack Lake PhD ’23, graduate student Simon Rufer, professor of mechanical engineering Kripa Varanasi, research scientist Ben Blaiszik, and six others at the University of Chicago and Argonne National Laboratory. The team has made available an open-source, AI-based software tool that engineers and scientists can now use to automatically recognize and quantify bubbles formed on a given surface, as a first step toward controlling the electrode material’s properties.

Gas-evolving electrodes, often with catalytic surfaces that promote chemical reactions, are used in a wide variety of processes, including the production of “green” hydrogen without the use of fossil fuels, carbon-capture processes that can reduce greenhouse gas emissions, aluminum production, and the chlor-alkali process that is used to make widely used chemical products.

These are very widespread processes. The chlor-alkali process alone accounts for 2 percent of all U.S. electricity usage; aluminum production accounts for 3 percent of global electricity; and both carbon capture and hydrogen production are likely to grow rapidly in coming years as the world strives to meet greenhouse-gas reduction targets. So, the new findings could make a real difference, Varanasi says.

“Our work demonstrates that engineering the contact and growth of bubbles on electrodes can have dramatic effects” on how bubbles form and how they leave the surface, he says. “The knowledge that the area under bubbles can be significantly active ushers in a new set of design rules for high-performance electrodes to avoid the deleterious effects of bubbles.”

“The broader literature built over the last couple of decades has suggested that not only that small area of contact but the entire area under the bubble is passivated,” Rufer says. The new study reveals “a significant difference between the two models because it changes how you would develop and design an electrode to minimize these losses.”

To test and demonstrate the implications of this effect, the team produced different versions of electrode surfaces with patterns of dots that nucleated and trapped bubbles at different sizes and spacings. They were able to show that surfaces with widely spaced dots promoted large bubble sizes but only tiny areas of surface contact, which helped to make clear the difference between the expected and actual effects of bubble coverage.

Developing the software to detect and quantify bubble formation was necessary for the team’s analysis, Rufer explains. “We wanted to collect a lot of data and look at a lot of different electrodes and different reactions and different bubbles, and they all look slightly different,” he says. Creating a program that could deal with different materials and different lighting and reliably identify and track the bubbles was a tricky process, and machine learning was key to making it work, he says.

Using that tool, he says, they were able to collect “really significant amounts of data about the bubbles on a surface, where they are, how big they are, how fast they’re growing, all these different things.” The tool is now freely available for anyone to use via the GitHub repository.
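The team’s actual software is the open-source tool on GitHub; purely as a generic illustration of the kind of image analysis involved (not the team’s code, and using a synthetic image and illustrative parameters), circular bubbles can be located and sized with a standard routine such as OpenCV’s Hough circle transform:

```python
# Generic illustration of detecting and measuring circular "bubbles" in an image
# with OpenCV's Hough circle transform. This is NOT the team's open-source tool;
# the synthetic image and every parameter below are illustrative assumptions.
import cv2
import numpy as np

# Synthetic stand-in for an electrode image: a gray background with bright circles.
frame = np.full((240, 320), 60, dtype=np.uint8)
for (x, y, r) in [(80, 80, 25), (200, 120, 40), (260, 200, 15)]:
    cv2.circle(frame, (x, y), r, 200, thickness=-1)

blurred = cv2.medianBlur(frame, 5)
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT,
    dp=1.2, minDist=20,      # accumulator resolution and minimum spacing of centers
    param1=100, param2=20,   # edge-detection and accumulator thresholds
    minRadius=5, maxRadius=80,
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        area = np.pi * r * r
        print(f"bubble at ({x}, {y}), radius {r} px, projected area {area:.0f} px^2")
```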

By using that tool to correlate the visual measures of bubble formation and evolution with electrical measurements of the electrode’s performance, the researchers were able to disprove the accepted theory and to show that only the area of direct contact is affected. Videos further proved the point, revealing new bubbles actively evolving directly under parts of a larger bubble.

The researchers developed a very general methodology that can be applied to characterize and understand the impact of bubbles on any electrode or catalyst surface. They were able to quantify the bubble passivation effects in a new performance metric they call BECSA (bubble-induced electrochemically active surface area), as opposed to the ECSA (electrochemically active surface area) used in the field. “The BECSA metric was a concept we defined in an earlier study but did not have an effective method to estimate until this work,” says Varanasi.

Because the area under bubbles can remain significantly active, electrode designers should seek to minimize bubble contact area rather than simply bubble coverage, which can be achieved by controlling the morphology and chemistry of the electrodes. Surfaces engineered to control bubbles can not only improve the overall efficiency of the processes, and thus reduce energy use, but also save on upfront materials costs. Many of these gas-evolving electrodes are coated with catalysts made of expensive metals like platinum or iridium, and the findings from this work can be used to engineer electrodes that reduce the amount of material wasted by reaction-blocking bubbles.

Varanasi says that “the insights from this work could inspire new electrode architectures that not only reduce the usage of precious materials, but also improve the overall electrolyzer performance,” both of which would provide large-scale environmental benefits.

The research team included Jim James, Nathan Pruyne, Aristana Scourtas, Marcus Schwarting, Aadit Ambalkar, Ian Foster, and Ben Blaiszik at the University of Chicago and Argonne National Laboratory. The work was supported by the U.S. Department of Energy under the ARPA-E program. This work made use of the MIT.nano facilities.


Solar-powered desalination system requires no extra batteries

Because it doesn’t need expensive energy storage for times without sunshine, the technology could provide communities with drinking water at low costs.


MIT engineers have built a new desalination system that runs with the rhythms of the sun.

The solar-powered system removes salt from water at a pace that closely follows changes in solar energy. As sunlight increases through the day, the system ramps up its desalting process and automatically adjusts to any sudden variation in sunlight, for example by dialing down in response to a passing cloud or revving up as the skies clear.

Because the system can quickly react to subtle changes in sunlight, it maximizes the utility of solar energy, producing large quantities of clean water despite variations in sunlight throughout the day. In contrast to other solar-driven desalination designs, the MIT system requires no extra batteries for energy storage, nor a supplemental power supply, such as from the grid.

The engineers tested a community-scale prototype on groundwater wells in New Mexico over six months, working in variable weather conditions and water types. The system harnessed on average over 94 percent of the electrical energy generated from the system’s solar panels to produce up to 5,000 liters of water per day despite large swings in weather and available sunlight.

“Conventional desalination technologies require steady power and need battery storage to smooth out a variable power source like solar. By continually varying power consumption in sync with the sun, our technology directly and efficiently uses solar power to make water,” says Amos Winter, the Germeshausen Professor of Mechanical Engineering and director of the K. Lisa Yang Global Engineering and Research (GEAR) Center at MIT. “Being able to make drinking water with renewables, without requiring battery storage, is a massive grand challenge. And we’ve done it.”

The system is geared toward desalinating brackish groundwater — a salty source of water that is found in underground reservoirs and is more prevalent than fresh groundwater resources. The researchers see brackish groundwater as a huge untapped source of potential drinking water, particularly as reserves of fresh water are stressed in parts of the world. They envision that the new renewable, battery-free system could provide much-needed drinking water at low costs, especially for inland communities where access to seawater and grid power are limited.

“The majority of the population actually lives far enough from the coast that seawater desalination could never reach them. They consequently rely heavily on groundwater, especially in remote, low-income regions. And unfortunately, this groundwater is becoming more and more saline due to climate change,” says Jonathan Bessette, an MIT PhD student in mechanical engineering. “This technology could bring sustainable, affordable clean water to underreached places around the world.”

The researchers detail the new system in a paper appearing today in Nature Water. The study’s co-authors are Bessette, Winter, and staff engineer Shane Pratt.

Pump and flow

The new system builds on a previous design, which Winter and his colleagues, including former MIT postdoc Wei He, reported earlier this year. That system aimed to desalinate water through “flexible batch electrodialysis.”

Electrodialysis and reverse osmosis are two of the main methods used to desalinate brackish groundwater. With reverse osmosis, pressure is used to pump salty water through a membrane and filter out salts. Electrodialysis uses an electric field to draw out salt ions as water is pumped through a stack of ion-exchange membranes.

Scientists have looked to power both methods with renewable sources. But this has been especially challenging for reverse osmosis systems, which traditionally run at a steady power level that’s incompatible with naturally variable energy sources such as the sun.

Winter, He, and their colleagues focused on electrodialysis, seeking ways to make a more flexible, “time-variant” system that would be responsive to variations in renewable, solar power.

In their previous design, the team built an electrodialysis system consisting of water pumps, an ion-exchange membrane stack, and a solar panel array. The innovation in this system was a model-based control system that used sensor readings from every part of the system to predict the optimal rate at which to pump water through the stack and the voltage that should be applied to the stack to maximize the amount of salt drawn out of the water.

When the team tested this system in the field, it was able to vary its water production with the sun’s natural variations. On average, the system directly used 77 percent of the available electrical energy produced by the solar panels, which the team estimated was 91 percent more than traditionally designed solar-powered electrodialysis systems.

Still, the researchers felt they could do better.

“We could only calculate every three minutes, and in that time, a cloud could literally come by and block the sun,” Winter says. “The system could be saying, ‘I need to run at this high power.’ But some of that power has suddenly dropped because there’s now less sunlight. So, we had to make up that power with extra batteries.”

Solar commands

In their latest work, the researchers looked to eliminate the need for batteries by shaving the system’s response time to a fraction of a second. The new system is able to update its desalination rate three to five times per second. The faster response time enables the system to adjust to changes in sunlight throughout the day, without having to make up any lag in power with additional power supplies.

The key to the nimbler desalting is a simpler control strategy, devised by Bessette and Pratt. The new strategy is one of “flow-commanded current control,” in which the system first senses the amount of solar power that is being produced by the system’s solar panels. If the panels are generating more power than the system is using, the controller automatically “commands” the system to dial up its pumping, pushing more water through the electrodialysis stacks. Simultaneously, the system diverts some of the additional solar power by increasing the electrical current delivered to the stack, to drive more salt out of the faster-flowing water.

“Let’s say the sun is rising every few seconds,” Winter explains. “So, three times a second, we’re looking at the solar panels and saying, ‘Oh, we have more power — let’s bump up our flow rate and current a little bit.’ When we look again and see there’s still more excess power, we’ll up it again. As we do that, we’re able to closely match our consumed power with available solar power really accurately, throughout the day. And the quicker we loop this, the less battery buffering we need.”
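A bare-bones sketch of such a loop (all sensor and actuator functions, gains, and limits below are hypothetical stand-ins, not the values or code used in the actual system) might look like this:

```python
# Bare-bones sketch of a flow-commanded control loop: several times per second,
# compare the solar power available with the power being consumed and nudge the
# pump flow (and the stack current tied to it) up or down accordingly. The sensor
# and actuator functions, gains, and limits are hypothetical stand-ins.
import random
import time

FLOW_STEP = 0.05        # L/min adjustment per control cycle
CURRENT_PER_FLOW = 2.0  # A of stack current commanded per L/min of flow
LOOP_HZ = 4             # a few control updates per second

def read_solar_power():
    """Stand-in for the solar array power sensor (W)."""
    return 400 + 50 * random.random()

def consumed_power(flow, current):
    """Stand-in estimate of the power drawn by pumps and stack at the setpoints (W)."""
    return 80 * flow + 2.5 * current

def set_pump_flow(flow):
    pass  # would command the pump controller here

def set_stack_current(current):
    pass  # would command the stack power electronics here

flow = 1.0  # L/min
for _ in range(20):  # short demonstration run instead of an endless loop
    current = CURRENT_PER_FLOW * flow
    if read_solar_power() > consumed_power(flow, current):
        flow += FLOW_STEP                   # excess solar: push more water and current
    else:
        flow = max(0.0, flow - FLOW_STEP)   # deficit: back off
    set_pump_flow(flow)
    set_stack_current(CURRENT_PER_FLOW * flow)
    time.sleep(1.0 / LOOP_HZ)
```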

The engineers incorporated the new control strategy into a fully automated system that they sized to desalinate brackish groundwater at a daily volume that would be enough to supply a small community of about 3,000 people. They operated the system for six months on several wells at the Brackish Groundwater National Desalination Research Facility in Alamogordo, New Mexico. Throughout the trial, the prototype operated under a wide range of solar conditions, harnessing over 94 percent of the solar panel’s electrical energy, on average, to directly power desalination.

“Compared to how you would traditionally design a solar desal system, we cut our required battery capacity by almost 100 percent,” Winter says.

The engineers plan to further test and scale up the system in hopes of supplying larger communities, and even whole municipalities, with low-cost, fully sun-driven drinking water.

“While this is a major step forward, we’re still working diligently to continue developing lower cost, more sustainable desalination methods,” Bessette says.

“Our focus now is on testing, maximizing reliability, and building out a product line that can provide desalinated water using renewables to multiple markets around the world,” Pratt adds.

The team will be launching a company based on their technology in the coming months.

This research was supported in part by the National Science Foundation, the Julia Burke Foundation, and the MIT Morningside Academy of Design. This work was additionally supported in-kind by Veolia Water Technologies and Solutions and Xylem Goulds. 


Teen uses pharmacology learned through MIT OpenCourseWare to extract and study medicinal properties of plants

Inspired by traditional medicine, 17-year-old Tomás Orellana is on a mission to identify plants that can help treat students’ health issues.


Tomás Orellana, a 17-year-old high school student in Chile, had a vision: to create a kit of medicinal plants for Chilean school infirmaries. But first, he needed to understand the basic principles of pharmacology. That’s when Orellana turned to the internet and stumbled upon a gold mine of free educational resources and courses on the MIT OpenCourseWare website.

Right away, Orellana completed class HST.151 (Principles of Pharmacology), learning about the mechanisms of drug action, dose-response relations, pharmacokinetics, drug delivery systems, and more. He then shared this newly acquired knowledge with 16 members of his school science group so that together they could make Orellana’s vision a reality.

“I used the course to guide my classmates in the development of a phyto-medicinal school project, demonstrating in practice the innovation that the OpenCourseWare platform offers,” Orellana says in Spanish. “Thanks to the pharmacology course, I can collect and synthesize the information we need to learn to prepare the medicines for our project.”

OpenCourseWare, part of MIT Open Learning, offers free educational resources on its website from more than 2,500 courses that span the MIT curriculum, from introductory to advanced classes. A global model for open sharing in higher education, OpenCourseWare has an open license that allows the remix and reuse of its educational resources, which include video lectures, syllabi, lecture notes, problem sets, assignments, audiovisual content, and insights.

After completing the Principles of Pharmacology course, Orellana and members of his science group began extracting medicinal properties from plants, such as cedron, and studying them in an effort to determine which plants are best to grow in a school environment. Their goal, Orellana says, is to help solve students’ health problems during the school day, including menstrual, mental, intestinal, and respiratory issues.

“There is a tradition regarding the use of medicinal plants, but there is no scientific evidence that says that these properties really exist,” the 11th-grader explains. “What we want to do is know which plants are the best to grow in a school environment.”

Orellana’s science group discussed their scientific project on “Que Sucede,” a Chilean television show, and their interview will air soon. The group plans to continue working on their medicinal project during this academic year.

Next up on Orellana’s learning journey are the mysteries of the human brain. He plans to complete class 9.01 (Introduction to Neuroscience) through OpenCourseWare. His ultimate goal? To pursue a career in health sciences and become a professor so that he may continue to share knowledge — widely.

“I dream of becoming a university academic to have an even greater impact on current affairs in my country and internationally,” Orellana says. “All that will happen if I try hard enough.”

Orellana encourages learners to explore MIT Open Learning's free educational resources, including OpenCourseWare.

“Take advantage of MIT's free digital technologies and tools,” he says. “Keep an open mind as to how the knowledge can be applied.”


Applying risk and reliability analysis across industries

After an illustrious career at Idaho National Laboratory spanning three decades, Curtis Smith is now sharing his expertise in risk analysis and management with future generations of engineers at MIT.


On Feb. 1, 2003, the space shuttle Columbia disintegrated as it returned to Earth, killing all seven astronauts on board. The tragic incident compelled NASA to amp up its risk and safety assessments and protocols. The agency knew whom to call: Curtis Smith PhD ’02, who is now the KEPCO Professor of the Practice of Nuclear Science and Engineering at MIT.

The nuclear community has always been a leader in probabilistic risk analysis and Smith’s work in risk-related research had made him an established expert in the field. When NASA came knocking, Smith had been working for the Nuclear Regulatory Commission (NRC) at the Idaho National Laboratory (INL). He pivoted quickly. For the next decade, Smith worked with NASA’s Office of Safety and Mission Assurance supporting their increased use of risk analysis. It was a software tool that Smith helped develop, SAPHIRE, that NASA would adopt to bolster its own risk analysis program.

At MIT, Smith’s focus is on both sides of system operation: risk and reliability. A research project he has proposed involves evaluating the reliability of 3D-printed components and parts for nuclear reactors.

Growing up in Idaho

MIT is a long way from where Smith grew up, on the Shoshone-Bannock Native American reservation in Fort Hall, Idaho. His father worked at a chemical manufacturing plant, while his mother and grandmother operated a small restaurant on the reservation.

Southeast Idaho had a significant population of migrant workers and Smith grew up with a diverse group of friends, mostly Native American and Hispanic. “It was a largely positive time and set a worldview for me in many wonderful ways,” Smith remembers. When he was a junior in high school, the family moved to Pingree, Idaho, a small town of barely 500. Smith attended Snake River High, a regional school, and remembered the deep impact his teachers had. “I learned a lot in grade school and had great teachers, so my love for education probably started there. I tried to emulate my teachers,” Smith says.

Smith went to Idaho State University in Pocatello for college, a 45-minute drive from his family. Drawn to science, he decided he wanted to study a subject that would benefit humanity the most: nuclear engineering. Fortunately, Idaho State has a strong nuclear engineering program. Smith completed a master’s degree in the same field at ISU while working for the Federal Bureau of Investigation in the security department during the swing shift — 5 p.m. to 1 a.m. — at the FBI offices in Pocatello. “It was a perfect job while attending grad school,” Smith says.

His KEPCO Professor of the Practice appointment is the second stint for Smith at MIT: He completed his PhD in the Department of Nuclear Science and Engineering (NSE) under the advisement of Professor George Apostolakis in 2002.

A career in risk analysis and management

After a doctorate at MIT, Smith returned to Idaho, conducting research in risk analysis for the NRC. He also taught technical courses and developed risk analysis software. “We did a whole host of work that supported the current fleet of nuclear reactors that we have,” Smith says.

He was 10 years into his career at INL when NASA recruited him, leaning on his expertise in risk analysis to translate it into space missions. “I didn’t really have a background in aerospace, but I was able to bring all the engineering I knew, conducting risk analysis for nuclear missions. It was really exciting and I learned a lot about aerospace,” Smith says.

Risk analysis uses statistics and data to answer complex questions involving safety. Among his projects: analyzing the risk involved in a Mars rover mission with a radioisotope-generated power source for the rover. Even if the necessary plutonium is encased in really strong material, calculations for risk have to factor in all eventualities, including the rocket blowing up.

When the Fukushima incident happened in 2011, the Department of Energy (DoE) became more supportive of safety and risk analysis research. Smith found himself in the center of the action again, supporting large DoE research programs. He then moved to become the director of the Nuclear Safety and Regulatory Research Division at the INL. Smith found he loved the role, mentoring and nurturing the careers of a diverse set of scientists. “It turned out to be much more rewarding than I had expected,” Smith says. Under his leadership, the division grew from 45 to almost 90 research staff and won multiple national awards.

Return to MIT

MIT NSE came calling in 2022, looking to fill the position of professor of the practice, an offer Smith couldn’t refuse. The department was looking to bulk up its risk and reliability offerings and Smith made a great fit. The DoE division he had been supervising had grown wings enough for Smith to seek out something new.

“Just getting back to Boston is exciting,” Smith says. The last go-around involved bringing the family to the city and included a lot of sleepless nights. Smith’s wife, Jacquie, is also excited about being closer to the New England fan base. The couple has invested in season tickets for the Patriots and look to attend as many sporting events as possible.

Smith is most excited about adding to the risk and reliability offerings at MIT at a time when the subject has become especially important for nuclear power. “I’m grateful for the opportunity to bring my knowledge and expertise from the last 30 years to the field,” he says. Being a professor of the practice of NSE carries with it a responsibility to unite theory and practice, something Smith is especially good at. “We always have to answer the question of, ‘How do I take the research and make that practical,’ especially for something important like nuclear power, because we need much more of these ideas in industry,” he says.

He is particularly excited about developing the next generation of nuclear scientists. “Having the ability to do this at a place like MIT is especially fulfilling and something I have been desiring my whole career,” Smith says.


Cancer biologists discover a new mechanism for an old drug

Study reveals the drug, 5-fluorouracil, acts differently in different types of cancer — a finding that could help researchers design better drug combinations.


Since the 1950s, a chemotherapy drug known as 5-fluorouracil has been used to treat many types of cancer, including blood cancers and cancers of the digestive tract.

Doctors have long believed that this drug works by damaging the building blocks of DNA. However, a new study from MIT has found that in cancers of the colon and other gastrointestinal cancers, it actually kills cells by interfering with RNA synthesis.

The findings could have a significant effect on how doctors treat many cancer patients. Usually, 5-fluorouracil is given in combination with chemotherapy drugs that damage DNA, but the new study found that for colon cancer, this combination does not achieve the synergistic effects that were hoped for. Instead, combining 5-FU with drugs that affect RNA synthesis could make it more effective in patients with GI cancers, the researchers say.

“Our work is the most definitive study to date showing that RNA incorporation of the drug, leading to an RNA damage response, is responsible for how the drug works in GI cancers,” says Michael Yaffe, a David H. Koch Professor of Science at MIT, the director of the MIT Center for Precision Cancer Medicine, and a member of MIT’s Koch Institute for Integrative Cancer Research. “Textbooks implicate the DNA effects of the drug as the mechanism in all cancer types, but our data shows that RNA damage is what’s really important for the types of tumors, like GI cancers, where the drug is used clinically.”

Yaffe, the senior author of the new study, hopes to plan clinical trials of 5-fluorouracil with drugs that would enhance its RNA-damaging effects and kill cancer cells more effectively.

Jung-Kuei Chen, a Koch Institute research scientist, and Karl Merrick, a former MIT postdoc, are the lead authors of the paper, which appears today in Cell Reports Medicine.

An unexpected mechanism

Clinicians use 5-fluorouracil (5-FU) as a first-line drug for colon, rectal, and pancreatic cancers. It’s usually given in combination with oxaliplatin or irinotecan, which damage DNA in cancer cells. The combination was thought to be effective because 5-FU can disrupt the synthesis of DNA nucleotides. Without those building blocks, cells with damaged DNA wouldn’t be able to efficiently repair the damage and would undergo cell death.

Yaffe’s lab, which studies cell signaling pathways, wanted to further explore the underlying mechanisms of how these drug combinations preferentially kill cancer cells.

The researchers began by testing 5-FU in combination with oxaliplatin or irinotecan in colon cancer cells grown in the lab. To their surprise, they found that not only were the drugs not synergistic, in many cases they were less effective at killing cancer cells than what one would expect by simply adding together the effects of 5-FU or the DNA-damaging drug given alone.

“One would have expected these combinations to cause synergistic cancer cell death because you are targeting two different aspects of a shared process: breaking DNA, and making nucleotides,” Yaffe says. “Karl looked at a dozen colon cancer cell lines, and not only were the drugs not synergistic, in most cases they were antagonistic. One drug seemed to be undoing what the other drug was doing.”

Yaffe’s lab then teamed up with Adam Palmer, an assistant professor of pharmacology at the University of North Carolina School of Medicine, who specializes in analyzing data from clinical trials. Palmer’s research group examined data from colon cancer patients who had been on one or more of these drugs and showed that the drugs did not show synergistic effects on survival in most patients.

“This confirmed that when you give these combinations to people, it’s not generally true that the drugs are actually working together in a beneficial way within an individual patient,” Yaffe says. “Instead, it appears that one drug in the combination works well for some patients while another drug in the combination works well in other patients. We just cannot yet predict which drug by itself is best for which patient, so everyone gets the combination.”

These results led the researchers to wonder just how 5-FU was working, if not by disrupting DNA repair. Studies in yeast and mammalian cells had shown that the drug also gets incorporated into RNA nucleotides, but there has been dispute over how much this RNA damage contributes to the drug’s toxic effects on cancer cells.

Inside cells, 5-FU is broken down into two different metabolites. One of these gets incorporated into DNA nucleotides, and the other into RNA nucleotides. In studies of colon cancer cells, the researchers found that the metabolite that interferes with RNA was much more effective at killing colon cancer cells than the one that disrupts DNA.

That RNA damage appears to primarily affect ribosomal RNA, a molecule that forms part of the ribosome — a cell organelle responsible for assembling new proteins. If cells can’t form new ribosomes, they can’t produce enough proteins to function. Additionally, the lack of undamaged ribosomal RNA causes cells to destroy a large set of proteins that normally bind up the RNA to make new functional ribosomes.

The researchers are now exploring how this ribosomal RNA damage leads cells to undergo programmed cell death, or apoptosis. They hypothesize that sensing of the damaged RNAs within cell structures called lysosomes somehow triggers an apoptotic signal.

“My lab is very interested in trying to understand the signaling events during disruption of ribosome biogenesis, particularly in GI cancers and even some ovarian cancers, that cause the cells to die. Somehow, they must be monitoring the quality control of new ribosome synthesis, which somehow is connected to the death pathway machinery,” Yaffe says.

New combinations

The findings suggest that drugs that stimulate ribosome production could work together with 5-FU to make a highly synergistic combination. In their study, the researchers showed that a molecule that inhibits KDM2A, a suppressor of ribosome production, helped to boost the rate of cell death in colon cancer cells treated with 5-FU.

The findings also suggest a possible explanation for why combining 5-FU with a DNA-damaging drug often makes both drugs less effective. Some DNA damaging drugs send a signal to the cell to stop making new ribosomes, which would negate 5-FU’s effect on RNA. A better approach may be to give each drug a few days apart, which would give patients the potential benefits of each drug, without having them cancel each other out.

“Importantly, our data doesn’t say that these combination therapies are wrong. We know they’re effective clinically. It just says that if you adjust how you give these drugs, you could potentially make those therapies even better, with relatively minor changes in the timing of when the drugs are given,” Yaffe says.

He is now hoping to work with collaborators at other institutions to run a phase 2 or 3 clinical trial in which patients receive the drugs on an altered schedule.

“A trial is clearly needed to look for efficacy, but it should be straightforward to initiate because these are already clinically accepted drugs that form the standard of care for GI cancers. All we’re doing is changing the timing with which we give them,” he says.

The researchers also hope that their work could lead to the identification of biomarkers that predict which patients’ tumors will be more susceptible to drug combinations that include 5-FU. One such biomarker could be RNA polymerase I, which is active when cells are producing a lot of ribosomal RNA.

The research was funded by the Damon Runyon Cancer Research Foundation, a fellowship from the Ludwig Center at MIT, the National Institutes of Health, the Ovarian Cancer Research Fund, the Charles and Marjorie Holloway Foundation, and the STARR Cancer Consortium.


Victor Ambros ’75, PhD ’79 and Gary Ruvkun share Nobel Prize in Physiology or Medicine

The scientists, who worked together as postdocs at MIT, are honored for their discovery of microRNA — a class of molecules that are critical for gene regulation.


MIT alumnus Victor Ambros ’75, PhD ’79 and Gary Ruvkun, who did his postdoctoral training at MIT, will share the 2024 Nobel Prize in Physiology or Medicine, the Nobel Assembly at Karolinska Institutet announced this morning in Stockholm.

Ambros, a professor at the University of Massachusetts Chan Medical School, and Ruvkun, a professor at Harvard Medical School and Massachusetts General Hospital, were honored for their discovery of microRNA, a class of tiny RNA molecules that play a critical role in gene control.

“Their groundbreaking discovery revealed a completely new principle of gene regulation that turned out to be essential for multicellular organisms, including humans. It is now known that the human genome codes for over one thousand microRNAs. Their surprising discovery revealed an entirely new dimension to gene regulation. MicroRNAs are proving to be fundamentally important for how organisms develop and function,” the Nobel committee said in its announcement today.

During the late 1980s, Ambros and Ruvkun both worked as postdocs in the laboratory of H. Robert Horvitz, a David H. Koch Professor at MIT, who was awarded the Nobel Prize in 2002.

While in Horvitz’s lab, the pair began studying gene control in the roundworm C. elegans — an effort that laid the groundwork for their Nobel discoveries. They studied two mutant forms of the worm, known as lin-4 and lin-14, that showed defects in the timing of the activation of genetic programs that control development.

In the early 1990s, while Ambros was a faculty member at Harvard University, he made a surprising discovery. The lin-4 gene, instead of encoding a protein, produced a very short RNA molecule that appeared to inhibit the expression of lin-14.

At the same time, Ruvkun was continuing to study these C. elegans genes in his lab at MGH and Harvard. He showed that lin-4 did not inhibit lin-14 by preventing the lin-14 gene from being transcribed into messenger RNA; instead, it appeared to turn off the gene’s expression later on, by preventing production of the protein encoded by lin-14.

The two compared results and realized that the sequence of lin-4 was complementary to some short sequences of lin-14. Lin-4, they showed, was binding to messenger RNA encoding lin-14 and blocking it from being translated into protein — a mechanism for gene control that had never been seen before. Those results were published in two articles in the journal Cell in 1993.

In an interview with the Journal of Cell Biology, Ambros credited the contributions of his collaborators, including his wife, Rosalind “Candy” Lee ’76, and postdoc Rhonda Feinbaum, who both worked in his lab, cloned and characterized the lin-4 microRNA, and were co-authors on one of the 1993 Cell papers.

In 2000, Ruvkun published the discovery of another microRNA molecule, encoded by a gene called let-7, which is found throughout the animal kingdom. Since then, more than 1,000 microRNA genes have been found in humans.

“Ambros and Ruvkun’s seminal discovery in the small worm C. elegans was unexpected, and revealed a new dimension to gene regulation, essential for all complex life forms,” the Nobel citation declared.

Ambros, who was born in New Hampshire and grew up in Vermont, earned his PhD at MIT under the supervision of David Baltimore, then an MIT professor of biology, who received a Nobel Prize in 1975. Ambros was a longtime faculty member at Dartmouth College before joining the faculty at the University of Massachusetts Chan Medical School in 2008.

Ruvkun is a graduate of the University of California at Berkeley and earned his PhD at Harvard University before joining Horvitz’s lab at MIT.


On technology in schools, think evolution, not revolution

Associate Professor Justin Reich’s work shows high-tech tools infuse into education one step at a time, as schools keep adapting and changing.


Back in 1913, Thomas Edison confidently proclaimed, “Books will soon be obsolete in the public schools.” At the time, Edison was advocating for motion pictures as an educational device. “Our school system will be completely changed inside of 10 years,” he added.

Edison was not wrong that video recordings could help people learn. On the other hand, students still read books today. Like others before and after him, Edison thought one particular technology was going to completely revolutionize education. In fact, technologies do get adopted into schools, but usually quite gradually and without altering the fundamentals of education: a good classroom with good teachers and a community of willing students.

The idea that technology changes education incrementally is central to Justin Reich’s work. Reich is an associate professor in MIT’s Comparative Media Studies/Writing program who has been studying schools for a couple of decades, as a teacher, consultant, and scholar. Reich is an advocate for technology, but with a realistic perspective.

Time after time, entrepreneurs claim tech will upend what they depict as stagnation in schools. Both parts of those claims usually miss the mark: Tech tools produce not revolution but evolution, in schools that are frequently changing anyway. Reich’s work emphasizes this alternate framework.

“In the history of education technology, the two most common findings are, first, when teachers get new technology, they use it to do what they were already doing,” Reich says. “It takes quite a bit of time, practice, coaching, messing up, trying again, and iteration, to have new technologies lead to new and better practices.”

The second finding, meanwhile, is that ed-tech tools are most readily adopted by the well-off.

“Almost every educational technology we’ve ever developed disproportionately benefits the affluent,” Reich says. “Even when we make things available for free, people with more financial, social, and technical capital are more likely to take advantage of innovations. Those are two findings from the research literature that people don’t want to hear.”

Some people must want to hear them: Reich has written two well-regarded books about education, and for his scholarship and teaching was awarded tenure earlier this year at MIT, where he founded the Teaching Systems Lab.

“I’ve spent a substantial portion of my career reminding people of those two things, and demonstrating them again and again,” Reich says. 

Optimized like a shark

Long before he made a living by studying schools, Reich pictured himself working in them. Indeed, that was his career plan.

“I wanted to be a teacher,” Reich says. He received his undergraduate degree in interdisciplinary studies from the University of Virginia, then earned an MA in history there, writing a thesis about the U.S. National Park System.

Reich then got a job in the early 2000s as a history teacher at a private school in the Boston area. Soon the school administrators gave Reich a cart of laptops and encouraged him to put the new tools to use. Many history archives were becoming digitized, so Reich happily integrated the laptops and web-based sources into his lessons.

Before long Reich co-founded EdTechTeacher, a consulting firm helping schools use technology productively. And his own teaching reinforced a lesson: When larger practices in a discipline change, schools can use technology to follow suit; it will make less difference otherwise. Then too, schools also adapt and evolve in ways unrelated to technology. For instance, we now educate a greater breadth of people than ever.

“You can absolutely improve schools,” Reich says. “And we improve schools all the time. It’s just a long, slow process, and everything is kind of incremental.”

Eventually Reich went back to school himself, earning his PhD from Harvard University’s Graduate School of Education in 2012. At the time, large-scale online college courses were seen as a potentially disruptive force in higher education. But that proposed revolution became an evolution, with online learning producing uneven results for K-12 students and undergraduates while being used more effectively in some graduate programs. Reich examines the subject in his 2020 book, “Failure to Disrupt,” about technologies intended to enhance education at scale.

“Online learning is good for people who are already well-equipped for learning, and those tend to be well-off, educated people,” Reich says. The Covid-19 pandemic also helped reinforce the value of in-person learning. The physical classroom may date to ancient times, but it is a durable innovation.

“Technology gets introduced into educational systems, when it’s possible that the systems are already pretty optimized for what they want to do,” Reich says. Citing another scholar of education, he notes, “Mike Caufield says, ‘We think of schools as old and ancient, but maybe they are in the way a great white shark is, optimized for its environment.’”

Okay, but what about AI?

Reich has now seen many supposed ed-tech revolutions firsthand and studied many others from the past. The latest such potential revolution, of course, is artificial intelligence, currently the subject of massive investment and attention. Will AI be different, and fundamentally transform the way we learn? Reich and a colleague, Jesse Dukes, are conducting a research project to find out how schools are currently using AI. So far, Reich thinks, the impact is not huge.

“A lot of folks are saying, ‘AI is going to be amazing! It’s going to transform everything!’” Reich says. “And we’re spending a lot of time with teachers and students asking what they’re actually doing. And of course AI is not transformative. Teachers are finding modest ways to integrate it into their practice, but the main function of AI in schools is kids using it to do their homework, which is probably not good for learning, on net.”

To some degree, Reich suspects, teachers are now devoting more time to in-class writing assignments to work around students substituting ChatGPT for their own writing. As he notes, “Using in-class time differently to accommodate for changes in technology is something educators have gotten really good at doing over the last decade. This doesn’t seem like a tidal wave crashing over them.”

Reich, again, is not an opponent of technology, but a realist about it, including AI. “A lot of new things are probably helpful in some way, some place, so let’s find it,” he says. In the meantime, schools will be grappling with a lot of hard problems that tech alone will not solve.

“If you’re working at a school serving kids furthest from opportunity in the country, the biggest problem you’re facing right now is chronic absenteeism,” Reich says. “You’re having a really hard time getting kids to show up. AI doesn’t really have anything to do with that.”

Overall, Reich thinks, the key in sustaining good schools is to keep tinkering on many fronts. Educators should “act in short design spirals,” as he wrote in his 2023 book, “Iterate: The Secret to Innovation in Schools,” rather than waiting for radical technology solutions. In education, the tortoise will usually beat the disruptor.

“Improving education is a lot of hard work, and it’s a long process, but at the other end of it, you can get genuine improvement,” Reich concludes.


Modeling relationships to solve complex problems efficiently

Associate Professor Julian Shun develops high-performance algorithms and frameworks for large-scale graph processing.


The German philosopher Friedrich Nietzsche once said that “invisible threads are the strongest ties.” One could think of “invisible threads” as tying together related objects, like the homes on a delivery driver’s route, or more nebulous entities, such as transactions in a financial network or users in a social network.

Computer scientist Julian Shun studies these types of multifaceted but often invisible connections using graphs, where objects are represented as points, or vertices, and relationships between them are modeled by line segments, or edges.

Shun, a newly tenured associate professor in the Department of Electrical Engineering and Computer Science, designs graph algorithms that could be used to find the shortest path between homes on the delivery driver’s route or detect fraudulent transactions made by malicious actors in a financial network.

But with the increasing volume of data, such networks have grown to include billions or even trillions of objects and connections. To find efficient solutions, Shun builds high-performance algorithms that leverage parallel computing to rapidly analyze even the most enormous graphs. As parallel programming is notoriously difficult, he also develops user-friendly programming frameworks that make it easier for others to write efficient graph algorithms of their own.

“If you are searching for something in a search engine or social network, you want to get your results very quickly. If you are trying to identify fraudulent financial transactions at a bank, you want to do so in real-time to minimize damages. Parallel algorithms can speed things up by using more computing resources,” explains Shun, who is also a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Such algorithms are frequently used in online recommendation systems. Search for a product on an e-commerce website and odds are you’ll quickly see a list of related items you could also add to your cart. That list is generated with the help of graph algorithms that leverage parallelism to rapidly find related items across a massive network of users and available products.
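To make the underlying representation concrete, here is a minimal sketch in Python, assuming a toy map with made-up stop names rather than any real dataset: a graph stored as an adjacency list, plus a breadth-first search that returns the fewest-hop route between two vertices. It is a serial illustration of the kind of shortest-path query described above, not code from Shun's frameworks, which are built to run such computations in parallel on graphs with billions of edges.

```python
from collections import deque

# Toy graph as an adjacency list; the stop names are invented for illustration.
graph = {
    "depot": ["elm_st", "oak_ave"],
    "elm_st": ["depot", "maple_rd"],
    "oak_ave": ["depot", "maple_rd", "pine_ln"],
    "maple_rd": ["elm_st", "oak_ave", "pine_ln"],
    "pine_ln": ["oak_ave", "maple_rd"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search: return the fewest-edge path from start to goal."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            # Walk back through the recorded parents to rebuild the route.
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor in graph[node]:
            if neighbor not in parents:
                parents[neighbor] = node
                queue.append(neighbor)
    return None  # no route exists

print(shortest_path(graph, "depot", "pine_ln"))  # ['depot', 'oak_ave', 'pine_ln']
```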

Campus connections

As a teenager, Shun’s only experience with computers was a high school class on building websites. More interested in math and the natural sciences than technology, he intended to major in one of those subjects when he enrolled as an undergraduate at the University of California at Berkeley.

But during his first year, a friend recommended he take an introduction to computer science class. While he wasn’t sure what to expect, he decided to sign up.

“I fell in love with programming and designing algorithms. I switched to computer science and never looked back,” he recalls.

That initial computer science course was self-paced, so Shun taught himself most of the material. He enjoyed the logical aspects of developing algorithms and the short feedback loop of computer science problems. Shun could input his solutions into the computer and immediately see whether he was right or wrong. And the errors in the wrong solutions would guide him toward the right answer.

“I’ve always thought that it was fun to build things, and in programming, you are building solutions that do something useful. That appealed to me,” he adds.

After graduation, Shun spent some time in industry but soon realized he wanted to pursue an academic career. At a university, he knew he would have the freedom to study problems that interested him.

Getting into graphs

He enrolled as a graduate student at Carnegie Mellon University, where he focused his research on applied algorithms and parallel computing.

As an undergraduate, Shun had taken theoretical algorithms classes and practical programming courses, but the two worlds didn’t connect. He wanted to conduct research that combined theory and application. Parallel algorithms were the perfect fit.

“In parallel computing, you have to care about practical applications. The goal of parallel computing is to speed things up in real life, so if your algorithms aren’t fast in practice, then they aren’t that useful,” he says.

At Carnegie Mellon, he was introduced to graph datasets, where objects in a network are modeled as vertices connected by edges. He felt drawn to the many applications of these types of datasets, and the challenging problem of developing efficient algorithms to handle them.

After completing a postdoctoral fellowship at Berkeley, Shun sought a faculty position and decided to join MIT. He had been collaborating with several MIT faculty members on parallel computing research, and was excited to be part of an institute with such a breadth of expertise.

In one of his first projects after joining MIT, Shun joined forces with Department of Electrical Engineering and Computer Science professor and fellow CSAIL member Saman Amarasinghe, an expert on programming languages and compilers, to develop a programming framework for graph processing known as GraphIt. The easy-to-use framework, which generates efficient code from high-level specifications, performed about five times faster than the next best approach.

“That was a very fruitful collaboration. I couldn’t have created a solution that powerful if I had worked by myself,” he says.

Shun also expanded his research focus to include clustering algorithms, which seek to group related datapoints together. He and his students build parallel algorithms and frameworks for quickly solving complex clustering problems, which can be used for applications like anomaly detection and community detection.

Dynamic problems

Recently, he and his collaborators have been focusing on dynamic problems where data in a graph network change over time.

When a dataset has billions or trillions of data points, running an algorithm from scratch to make one small change could be extremely expensive from a computational point of view. He and his students design parallel algorithms that process many updates at the same time, improving efficiency while preserving accuracy.
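As a rough sketch of that batch-update idea, and not the group's actual algorithms, the Python example below maintains graph connectivity with a union-find structure and applies edge insertions one batch at a time; the vertex count and edge batches are made up for illustration. Real batch-dynamic algorithms also process the work within each batch in parallel, which this serial version omits.

```python
# Sketch of batch-dynamic connectivity: edges arrive in batches, and each
# batch is applied to a union-find structure instead of recomputing the
# whole graph from scratch. Serial for clarity.

class DynamicConnectivity:
    def __init__(self, num_vertices):
        self.parent = list(range(num_vertices))

    def _find(self, v):
        # Path halving keeps later lookups cheap.
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v

    def apply_batch(self, edge_batch):
        """Insert a batch of edges, merging the components they connect."""
        for u, v in edge_batch:
            root_u, root_v = self._find(u), self._find(v)
            if root_u != root_v:
                self.parent[root_u] = root_v

    def connected(self, u, v):
        return self._find(u) == self._find(v)

dc = DynamicConnectivity(6)
dc.apply_batch([(0, 1), (1, 2), (4, 5)])        # first batch of updates
print(dc.connected(0, 2), dc.connected(0, 4))   # True False
dc.apply_batch([(2, 4)])                        # a later batch arrives
print(dc.connected(0, 5))                       # True
```

Applying a whole batch at once amortizes the cost of each individual change, which is what keeps updates tractable when the graph has billions of edges.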

But these dynamic problems also pose one of the biggest challenges Shun and his team must work to overcome. Because there aren’t many dynamic datasets available for testing algorithms, the team often must generate synthetic data, which may not be realistic and could hamper the performance of their algorithms in the real world.

In the end, his goal is to develop dynamic graph algorithms that perform efficiently in practice while also providing theoretical guarantees. That ensures they will be applicable across a broad range of settings, he says.

Shun expects dynamic parallel algorithms to become an even greater focus of research in the future. As datasets continue to become larger, more complex, and more rapidly changing, researchers will need to build more efficient algorithms to keep up.

He also expects new challenges to come from advancements in computing technology, since researchers will need to design new algorithms to leverage the properties of novel hardware.

“That’s the beauty of research — I get to try and solve problems other people haven’t solved before and contribute something useful to society,” he says.


Laura Lewis and Jing Kong receive postdoctoral mentoring award

Advisors commended for providing exceptional individualized mentoring for postdocs.


MIT professors Laura Lewis and Jing Kong have been recognized with the MIT Postdoctoral Association’s Award for Excellence in Postdoctoral Mentoring. The award is given annually to faculty or other principal investigators (PIs) whose current and former postdoctoral scholars say they stand out in their efforts to create a supportive work environment for postdocs and support postdocs’ professional development.

This year, the award identified exceptional mentors in two categories. Lewis, the Athinoula A. Martinos Associate Professor in the Institute for Medical Engineering and Science and the Department of Electrical Engineering and Computer Science (EECS), was recognized as an early-career mentor. Kong, the Jerry McAfee (1940) Professor in Engineering in the Research Laboratory of Electronics and EECS, was recognized as an established mentor.

“It’s a very diverse kind of mentoring that you need for a postdoc,” said Vipindev Adat Vasudevan, who chaired the Postdoctoral Association committee organizing the award. “Every postdoc has different requirements. Some of the people will be going to industry, some of the people are going for academia… so everyone comes with a different objective.”

Vasudevan presented the award at a luncheon hosted by the Office of the Vice President for Research on Sept. 25 in recognition of National Postdoc Appreciation Week. The annual luncheon, celebrating the postdoctoral community’s contributions to MIT, is attended by hundreds of postdocs and faculty.

“The award recognizes faculty members who go above and beyond to create a professional, supportive, and inclusive environment to foster postdocs’ growth and success,” said Ian Waitz, vice president for research, who spoke at the luncheon. He noted the vital role postdocs play in advancing MIT research, mentoring undergraduate and graduate students, and connecting with colleagues from around the globe, while working toward launching independent research careers of their own. 

“The best part of my job”

Nomination letters for Lewis spoke to her ability to create an inclusive and welcoming lab. In the words of one nominator, “She invests considerable time and effort in cultivating personalized mentoring relationships, ensuring each postdoc in her lab receives guidance and support tailored to their individual goals and circumstances.”

Other nominators commented on Lewis’ ability to facilitate collaborations that furthered postdocs’ research goals. Lewis encouraged them to work with other PIs to build their independence and professional development, and to develop their own research questions, they said. “I was never pushed to work on her projects — rather, she guided me towards finding and developing my own,” wrote one.

Lewis’ lab explores new ways to image the human brain, integrating engineering with neuroscience. Improving neuroimaging techniques can improve our understanding of the brain’s activity when asleep and awake, allowing researchers to understand sleep’s impact on brain health.

“I love working with my postdocs and trainees; it’s honestly the best part of my job,” Lewis says. “It’s important for any individual to be in an environment to help them grow toward what they want to do.”

Recognized as an early-career mentor, Lewis looks forward to seeing her postdocs’ career trajectories over time. Group members returning as collaborators come back with fresh ideas and creative approaches, she says, adding, “I view this mentoring relationship as lifelong.”

“No ego, no bias, just solid facts”

Kong’s nomination also speaks to the lifelong nature of the mentoring relationship. The 13 letters supporting Kong’s nomination came from past and current postdocs. Nearly all touched on Kong’s kindness and the culture of respect she maintains in the lab, alongside high expectations of scientific rigor.

“No ego, no bias, just solid facts and direct evidence,” wrote one nominator: “In discussions, she would ask you many questions that make you think ‘I should have asked that to myself’ or ‘why didn’t I think of this.’”

Kong was also praised for her ability to take the long view on projects and mentor postdocs through temporary challenges. One nominator wrote of a period when the results of a project were less promising than anticipated, saying, “Jing didn't push me to switch my direction; instead, she was always glad to listen and discuss the new results. Because of her encouragement and long-term support, I eventually got very good results on this project.”

Kong’s lab focuses on the chemical synthesis of nanomaterials, such as carbon nanotubes, with the goal of characterizing their structures and identifying applications. Kong says postdocs are instrumental in bringing new ideas into the lab.

“I learn a lot from each one of them. They always have a different perspective, and also, they each have their unique talents. So we learn from each other,” she says. As a mentor, she sees her role as developing postdocs’ individual talents, while encouraging them to collaborate with group members who have different strengths.

The collaborations that Kong facilitates extend beyond the postdocs’ time at MIT. She views the postdoctoral period as a key stage in developing a professional network: “Their networking starts from the first day they join the group. They already in this process establish connections with other group members, and also our collaborators, that will continue on for many years.”

About the award

The Award for Excellence in Postdoctoral Mentoring has been awarded since 2022. With support from Ann Skoczenski, director of Postdoctoral Services in the Office of the VPR, and the Faculty Postdoctoral Advisory Committee, nominations are reviewed against four criteria.

The Award for Excellence in Postdoctoral Mentoring provides a celebratory lunch for the recipient’s research group, as well as the opportunity to participate in a mentoring seminar or panel discussion for the postdoctoral community. Last year’s award was given to Jesse Kroll, the Peter de Florez Professor of Civil and Environmental Engineering, professor of chemical engineering, and director of the Ralph M. Parsons Laboratory.


MIT engineers create a chip-based tractor beam for biological particles

The tiny device uses a tightly focused beam of light to capture and manipulate cells.


MIT researchers have developed a miniature, chip-based “tractor beam,” like the one that captures the Millennium Falcon in the film “Star Wars,” that could someday help biologists and clinicians study DNA, classify cells, and investigate the mechanisms of disease.

Small enough to fit in the palm of your hand, the device uses a beam of light emitted by a silicon-photonics chip to manipulate particles millimeters away from the chip surface. The light can penetrate the glass cover slips that protect samples used in biological experiments, enabling cells to remain in a sterile environment.

Traditional optical tweezers, which trap and manipulate particles using light, usually require bulky microscope setups, but chip-based optical tweezers could offer a more compact, mass manufacturable, broadly accessible, and high-throughput solution for optical manipulation in biological experiments.

However, other similar integrated optical tweezers can only capture and manipulate cells that are very close to or directly on the chip surface. This contaminates the chip and can stress the cells, limiting compatibility with standard biological experiments.

Using a system called an integrated optical phased array, the MIT researchers have developed a new modality for integrated optical tweezers that enables trapping and tweezing of cells more than a hundred times further away from the chip surface.

“This work opens up new possibilities for chip-based optical tweezers by enabling trapping and tweezing of cells at much larger distances than previously demonstrated. It’s exciting to think about the different applications that could be enabled by this technology,” says Jelena Notaros, the Robert J. Shillman Career Development Professor in Electrical Engineering and Computer Science (EECS), and a member of the Research Laboratory of Electronics.

Joining Notaros on the paper are lead author and EECS graduate student Tal Sneh; Sabrina Corsetti, an EECS graduate student; Milica Notaros PhD ’23; Kruthika Kikkeri PhD ’24; and Joel Voldman, the William R. Brody Professor of EECS. The research appears today in Nature Communications.

A new trapping modality

Optical traps and tweezers use a focused beam of light to capture and manipulate tiny particles. The forces exerted by the beam will pull microparticles toward the intensely focused light in the center, capturing them. By steering the beam of light, researchers can pull the microparticles along with it, enabling them to manipulate tiny objects using noncontact forces.
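To get a feel for the scale of these noncontact forces, the back-of-envelope Python sketch below uses the textbook Rayleigh-regime approximation for the gradient force on a small dielectric sphere. The bead size, refractive indices, beam power, and waist are assumed illustrative values, not parameters from this work, and the answer should be read only as an order-of-magnitude estimate.

```python
import math

# Rough Rayleigh-regime estimate of the optical gradient force on a small
# dielectric bead: F = (2*pi*n_m*a^3 / c) * ((m^2 - 1) / (m^2 + 2)) * grad_I,
# with the intensity gradient of a focused Gaussian beam approximated as
# I0 / w. Every number below is an assumed, illustrative value.
c = 3.0e8        # speed of light, m/s
n_m = 1.33       # refractive index of the surrounding water
n_p = 1.59       # refractive index of a polystyrene bead
a = 0.5e-6       # bead radius, m
P = 10e-3        # beam power, W
w = 2.0e-6       # beam waist at the focus, m

m = n_p / n_m
I0 = 2 * P / (math.pi * w**2)        # peak intensity of a Gaussian beam, W/m^2
grad_I = I0 / w                      # crude scale for the intensity gradient
F_grad = (2 * math.pi * n_m * a**3 / c) * ((m**2 - 1) / (m**2 + 2)) * grad_I

print(f"gradient-force scale: {F_grad * 1e12:.2f} pN")  # a few tenths of a piconewton
```

The result lands in the piconewton range, the force scale at which optical tweezers typically operate on cells and microparticles.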

However, optical tweezers traditionally require a large microscope setup in a lab, as well as multiple devices to form and control light, which limits where and how they can be utilized.

“With silicon photonics, we can take this large, typically lab-scale system and integrate it onto a chip. This presents a great solution for biologists, since it provides them with optical trapping and tweezing functionality without the overhead of a complicated bulk-optical setup,” Notaros says.

But so far, chip-based optical tweezers have only been capable of emitting light very close to the chip surface, so these prior devices could only capture particles a few microns off the chip surface. Biological specimens are typically held in sterile environments using glass cover slips that are about 150 microns thick, so the only way to manipulate them with such a chip is to take the cells out and place them on its surface.

However, that leads to chip contamination. Every time a new experiment is done, the chip has to be thrown away and the cells need to be put onto a new chip.

To overcome these challenges, the MIT researchers developed a silicon photonics chip that emits a beam of light that focuses about 5 millimeters above its surface. This way, they can capture and manipulate biological particles that remain inside a sterile cover slip, protecting both the chip and particles from contamination.

Manipulating light

The researchers accomplish this using a system called an integrated optical phased array. This technology involves a series of microscale antennas fabricated on a chip using semiconductor manufacturing processes. By electronically controlling the optical signal emitted by each antenna, researchers can shape and steer the beam of light emitted by the chip.

Because most prior integrated optical phased arrays were developed for long-range applications like lidar, they weren’t designed to generate the tightly focused beams needed for optical tweezing. The MIT team discovered that, by creating specific phase patterns for each antenna, they could form an intensely focused beam of light, which can be used for optical trapping and tweezing millimeters from the chip’s surface.
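As a simplified illustration of that focusing idea, and not the team's actual design, the Python sketch below computes the emission phase for each element of a one-dimensional antenna array so that all emissions arrive in phase at a focal spot a chosen height above the chip. The 5-millimeter focal height follows the figure mentioned above; the wavelength, antenna pitch, and array size are assumed values.

```python
import math

# Sketch of a focusing phase profile for a 1-D optical phased array: each
# antenna emits with a phase that pre-compensates the extra path length from
# its position to the desired focal spot, so all emissions arrive there in
# phase. Parameter values are assumed for illustration.
wavelength = 1.55e-6     # near-infrared wavelength, m
pitch = 2.0e-6           # antenna spacing, m
num_antennas = 64
focal_height = 5.0e-3    # target focal spot, m above the chip

k = 2 * math.pi / wavelength
center = (num_antennas - 1) / 2

phases = []
for n in range(num_antennas):
    x = (n - center) * pitch                     # antenna position along the array
    path = math.sqrt(x**2 + focal_height**2)     # distance from antenna to focus
    extra = path - focal_height                  # path length beyond the on-axis distance
    phases.append((-k * extra) % (2 * math.pi))  # phase to cancel that extra length

print([round(p, 3) for p in phases[:5]])         # phases (radians) for the first antennas
```

Shifting the target point and recomputing the phases would move the focus; in the device described here, the researchers instead steer the focused beam by varying the wavelength of the light that feeds the chip.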

“No one had created silicon-photonics-based optical tweezers capable of trapping microparticles over a millimeter-scale distance before. This is an improvement of several orders of magnitude higher compared to prior demonstrations,” says Notaros.

By varying the wavelength of the optical signal that powers the chip, the researchers could steer the focused beam over a range larger than a millimeter and with microscale accuracy.

To test their device, the researchers started by trying to capture and manipulate tiny polystyrene spheres. Once they succeeded, they moved on to trapping and tweezing cancer cells provided by the Voldman group.

“There were many unique challenges that came up in the process of applying silicon photonics to biophysics,” Sneh adds.

The researchers had to determine how to track the motion of sample particles in a semiautomated fashion, ascertain the proper trap strength to hold the particles in place, and effectively postprocess data, for instance.

In the end, they were able to demonstrate the first cell experiments with single-beam integrated optical tweezers.

Building off these results, the team hopes to refine the system to enable an adjustable focal height for the beam of light. They also want to apply the device to different biological systems and use multiple trap sites at the same time to manipulate biological particles in more complex ways.

“This is a very creative and important paper in many ways,” says Ben Miller, Dean’s Professor of Dermatology and professor of biochemistry and biophysics at the University of Rochester, who was not involved with this work. “For one, given that silicon photonic chips can be made at low cost, it potentially democratizes optical tweezing experiments. That may sound like something that only would be of interest to a few scientists, but in reality having these systems widely available will allow us to study fundamental problems in single-cell biophysics in ways previously only available to a few labs given the high cost and complexity of the instrumentation. I can also imagine many applications where one of these devices (or possibly an array of them) could be used to improve the sensitivity of disease diagnostic.”

This research is funded by the National Science Foundation (NSF), an MIT Frederick and Barbara Cronin Fellowship, and the MIT Rolf G. Locher Endowed Fellowship.


Celebrating the people behind Kendall Square’s innovation ecosystem

The 16th Annual Meeting of the Kendall Square Association honored community members for their work bringing impactful innovations to bear on humanity’s biggest challenges.


While it’s easy to be amazed by the constant drumbeat of innovations coming from Kendall Square in Cambridge, Massachusetts, sometimes overlooked are the dedicated individuals working to make those scientific and technological breakthroughs a reality. Every day, people in the neighborhood tackle previously intractable problems and push the frontiers of their fields.

This year’s Kendall Square Association (KSA) Annual Meeting centered around celebrating the people behind the area’s prolific innovation ecosystem. That included a new slate of awards and recognitions for community members and a panel discussion featuring MIT President Sally Kornbluth.

“It’s truly inspiring to be surrounded by all of you: people who seem to share an exuberant curiosity, a pervasive ethic of service, and the baseline expectation that we’re all interested in impact — in making a difference for people and the planet,” Kornbluth said.

The gathering took place in MIT’s Walker Memorial (Building 50) on Memorial Drive and attracted entrepreneurs, life science workers, local students, restaurant and retail shop owners, and leaders of nonprofits.

The KSA itself is a nonprofit organization made up of over 150 organizations across the greater Kendall Square region, from large companies to universities like MIT and Harvard, along with the independent shops and restaurants that give Kendall Square its distinct character.

New to this year’s event were two Founder Awards, which were given to Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and Michal Preminger, head of Johnson & Johnson Innovation, for their work bringing people together to achieve hard things that benefit humanity.

The KSA will donate $2,500 to the Science Club for Girls in Bhatia’s honor and $2,500 to Innovators for Purpose in honor of Preminger.

Recognition was also given to Alex Cheung of the Cambridge Innovation Center and Shazia Mir of LabCentral for their work bringing Kendall Square’s community members together.

Cambridge Mayor Denise Simmons also spoke at the event, noting the vital role the Kendall Square community has played in things like Covid-19 vaccine development and in the fight against climate change.

“As many of you know, Cambridge has a long and proud history of innovation, with the presence of MIT and the remarkable growth of the tech and life science industry examples of that,” Simmons said. “We are leaving a lasting, positive impact in our city. This community has made and continues to make enormous contributions, not just to our city but to the world.”

In her talk, Kornbluth also introduced the Kendall Square community to her plans for The Climate Project at MIT, which is designed to focus the Institute’s talent and resources to achieve real-world impact on climate change faster. The project will provide funding and catalyze partnerships around six climate “missions,” or broad areas where MIT researchers will seek to identify gaps in the global climate response that MIT can help fill.

“The Climate Project is a whole-of-MIT mobilization that’s mission driven, solution focused, and outward looking,” Kornbluth explained. “If you want to make progress, faster and at scale, that’s the way!”

After mingling with Kendall community members, Kornbluth said she still considers herself a newbie to the area but is coming to see the success of Kendall Square and MIT as more than a coincidence.

“The more time I spend here, the more I come to understand the incredible synergies between MIT and Kendall Square,” Kornbluth said. “We know, for example, that proximity is an essential ingredient in our collective and distinctive recipe for impact. That proximity, and the cross-fertilization that comes with it, helps us churn out new technologies and patents, found startups, and course-correct our work as we try to keep pace with the world’s challenges. We can’t do any of this separately. Our work together — all of us in this thriving, wildly entrepreneurial community — is what drives the success of our innovation ecosystem.”


Translating MIT research into real-world results

MIT’s innovation and entrepreneurship system helps launch water, food, and ag startups with social and economic benefits.


Inventive solutions to some of the world’s most critical problems are being discovered in labs, classrooms, and centers across MIT every day. Many of these solutions move from the lab to the commercial world with the help of over 85 Institute resources that comprise MIT’s robust innovation and entrepreneurship (I&E) ecosystem. The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) draws on MIT’s wealth of I&E knowledge and experience to help researchers commercialize their breakthrough technologies through the J-WAFS Solutions grant program. By collaborating with I&E programs on campus, J-WAFS prepares MIT researchers for the commercial world, where their novel innovations aim to improve productivity, accessibility, and sustainability of water and food systems, creating economic, environmental, and societal benefits along the way.

The J-WAFS Solutions program launched in 2015 with support from Community Jameel, an international organization that advances science and learning for communities to thrive. Since 2015, J-WAFS Solutions has supported 19 projects with one-year grants of up to $150,000, with some projects receiving renewal grants for a second year of support. Solutions projects all address challenges related to water or food. Modeled after the esteemed grant program of MIT’s Deshpande Center for Technological Innovation, and initially administered by Deshpande Center staff, the J-WAFS Solutions program follows a similar approach by supporting projects that have already completed the basic research and proof-of-concept phases. With technologies that are one to three years away from commercialization, grantees work on identifying their potential markets and learn to focus on how their technology can meet the needs of future customers.

“Ingenuity thrives at MIT, driving inventions that can be translated into real-world applications for widespread adoption, implementation, and use,” says J-WAFS Director Professor John H. Lienhard V. “But successful commercialization of MIT technology requires engineers to focus on many challenges beyond making the technology work. MIT’s I&E network offers a variety of programs that help researchers develop technology readiness, investigate markets, conduct customer discovery, and initiate product design and development,” Lienhard adds. “With this strong I&E framework, many J-WAFS Solutions teams have established startup companies by the completion of the grant. J-WAFS-supported technologies have had powerful, positive effects on human welfare. Together, the J-WAFS Solutions program and MIT’s I&E ecosystem demonstrate how academic research can evolve into business innovations that make a better world,” Lienhard says.

Creating I&E collaborations

In addition to support for furthering research, J-WAFS Solutions grants allow faculty, students, postdocs, and research staff to learn the fundamentals of how to transform their work into commercial products and companies. As part of the grant requirements, researchers must interact with mentors through MIT Venture Mentoring Service (VMS). VMS connects MIT entrepreneurs with teams of carefully selected professionals who provide free and confidential mentorship, guidance, and other services to help advance ideas into for-profit, for-benefit, or nonprofit ventures. Since 2000, VMS has mentored over 4,600 MIT entrepreneurs across all industries, through a dynamic and accomplished group of nearly 200 mentors who volunteer their time so that others may succeed. The mentors provide impartial and unbiased advice to members of the MIT community, including MIT alumni in the Boston area. J-WAFS Solutions teams have been guided by 21 mentors from numerous companies and nonprofits. Mentors often attend project events and progress meetings throughout the grant period.

“Working with VMS has provided me and my organization with a valuable sounding board for a range of topics, big and small,” says Eric Verploegen PhD ’08, former research engineer in the MIT D-Lab and founder of J-WAFS spinout CoolVeg. Along with professors Leon Glicksman and Daniel Frey, Verploegen received a J-WAFS Solutions grant in 2021 to commercialize cold-storage chambers that use evaporative cooling to help farmers preserve fruits and vegetables in rural off-grid communities. Verploegen started CoolVeg in 2022 to increase access and adoption of open-source, evaporative cooling technologies through collaborations with businesses, research institutions, nongovernmental organizations, and government agencies. “Working as a solo founder at my nonprofit venture, it is always great to have avenues to get feedback on communications approaches, overall strategy, and operational issues that my mentors have experience with,” Verploegen says. Three years after the initial Solutions grant, one of the VMS mentors assigned to the evaporative cooling team still acts as a mentor to Verploegen today.

Another Solutions grant requirement is for teams to participate in the Spark program — a free, three-week course that provides an entry point for researchers to explore the potential value of their innovation. Spark is part of the National Science Foundation’s (NSF) Innovation Corps (I-Corps), which is an “immersive, entrepreneurial training program that facilitates the transformation of invention to impact.” In 2018, MIT received an award from the NSF, establishing the New England Regional Innovation Corps Node (NE I-Corps) to deliver I-Corps training to participants across New England. Trainings are open to researchers, engineers, scientists, and others who want to engage in a customer discovery process for their technology. Offered regularly throughout the year, the Spark course helps participants identify markets and explore customer needs in order to understand how their technologies can be positioned competitively in their target markets. They learn to assess barriers to adoption, as well as potential regulatory issues or other challenges to commercialization. NE I-Corps reports that since its start, over 1,200 researchers from MIT have completed the program and have gone on to launch 175 ventures, raising over $3.3 billion in funding from grants and investors, and creating over 1,800 jobs.

Constantinos Katsimpouras, a research scientist in the Department of Chemical Engineering, went through the NE I-Corps Spark program to better understand the customer base for a technology he developed with professors Gregory Stephanopoulos and Anthony Sinskey. The group received a J-WAFS Solutions grant in 2021 for their microbial platform that converts food waste from the dairy industry into valuable products. “As a scientist with no prior experience in entrepreneurship, the program introduced me to important concepts and tools for conducting customer interviews and adopting a new mindset,” notes Katsimpouras. “Most importantly, it encouraged me to get out of the building and engage in interviews with potential customers and stakeholders, providing me with invaluable insights and a deeper understanding of my industry,” he adds. These interviews also helped connect the team with companies willing to provide resources to test and improve their technology — a critical step to the scale-up of any lab invention.

In the case of Professor Cem Tasan’s research group in the Department of Materials Science and Engineering, the I-Corps program led them to the J-WAFS Solutions grant, instead of the other way around. Tasan is currently working with postdoc Onur Guvenc on a J-WAFS Solutions project to manufacture formable sheet metal by consolidating steel scrap without melting, thereby reducing water use compared to traditional steel processing. Before applying for the Solutions grant, Guvenc took part in NE I-Corps. Like Katsimpouras, Guvenc benefited from the interaction with industry. “This program required me to step out of the lab and engage with potential customers, allowing me to learn about their immediate challenges and test my initial assumptions about the market,” Guvenc recalls. “My interviews with industry professionals also made me aware of the connection between water consumption and steelmaking processes, which ultimately led to the J-WAFS 2023 Solutions Grant,” says Guvenc.

After completing the Spark program, participants may be eligible to apply for the Fusion program, which provides microgrants of up to $1,500 to conduct further customer discovery. The Fusion program is self-paced, requiring teams to conduct 12 additional customer interviews and craft a final presentation summarizing their key learnings. Professor Patrick Doyle’s J-WAFS Solutions team completed the Spark and Fusion programs at MIT. Most recently, their team was accepted to join the NSF I-Corps National program with a $50,000 award. The intensive program requires teams to complete an additional 100 customer discovery interviews over seven weeks. Located in the Department of Chemical Engineering, the Doyle lab is working on a sustainable microparticle hydrogel system to rapidly remove micropollutants from water. The team’s focus has expanded to higher value purifications in amino acid and biopharmaceutical manufacturing applications. Devashish Gokhale PhD ’24 worked with Doyle on much of the underlying science.

“Our platform technology could potentially be used for selective separations in very diverse market segments, ranging from individual consumers to large industries and government bodies with varied use-cases,” Gokhale explains. He goes on to say, “The I-Corps Spark program added significant value by providing me with an effective framework to approach this problem ... I was assigned a mentor who provided critical feedback, teaching me how to formulate effective questions and identify promising opportunities.” Gokhale says that by the end of Spark, the team was able to identify the best target markets for their products. He also says that the program provided valuable seminars on topics like intellectual property, which was helpful in subsequent discussions the team had with MIT’s Technology Licensing Office.

Another member of Doyle’s team, Arjav Shah, a recent PhD from MIT’s Department of Chemical Engineering and a current MBA candidate at the MIT Sloan School of Management, is spearheading the team’s commercialization plans. Shah attended Fusion last fall and hopes to lead efforts to incorporate a startup company called hydroGel. “I admire the hypothesis-driven approach of the I-Corps program,” says Shah. “It has enabled us to identify our customers’ biggest pain points, which will hopefully lead us to finding a product-market fit.” He adds, “Based on our learnings from the program, we have been able to pivot to impact-driven, higher-value applications in the food processing and biopharmaceutical industries.” Postdoc Luca Mazzaferro will lead the technical team at hydroGel alongside Shah.

In a different project, Qinmin Zheng, a postdoc in the Department of Civil and Environmental Engineering, is working with Professor Andrew Whittle and Lecturer Fábio Duarte. Zheng plans to take the Fusion course this fall to advance their J-WAFS Solutions project that aims to commercialize a novel sensor to quantify the relative abundance of major algal species and provide early detection of harmful algal blooms. After completing Spark, Zheng says he’s “excited to participate in the Fusion program, and potentially the National I-Corps program, to further explore market opportunities and minimize risks in our future product development.”

Economic and societal benefits

Commercializing technologies developed at MIT is one of the ways J-WAFS helps ensure that MIT research advances will have real-world impacts in water and food systems. Since its inception, the J-WAFS Solutions program has awarded 28 grants (including renewals), which have supported 19 projects that address a wide range of global water and food challenges. The program has distributed over $4 million to 24 professors, 11 research staff, 15 postdocs, and 30 students across MIT. Nearly half of all J-WAFS Solutions projects have resulted in spinout companies or commercialized products, including eight companies to date plus two open-source technologies.

Nona Technologies is an example of a J-WAFS spinout that is helping the world by developing new approaches to produce freshwater for drinking. Desalination — the process of removing salts from seawater — typically requires a large-scale technology called reverse osmosis. But Nona created a desalination device that can work in remote off-grid locations. By separating salt and bacteria from water using electric current through a process called ion concentration polarization (ICP), their technology also reduces overall energy consumption. The novel method was developed by Jongyoon Han, professor of electrical engineering and biological engineering, and research scientist Junghyo Yoon. Along with Bruce Crawford, a Sloan MBA alum, Han and Yoon created Nona Technologies to bring their lightweight, energy-efficient desalination technology to the market.

“My feeling early on was that once you have technology, commercialization will take care of itself,” admits Crawford. The team completed both the Spark and Fusion programs and quickly realized that much more work would be required. “Even in our first 24 interviews, we learned that the two first markets we envisioned would not be viable in the near term, and we also got our first hints at the beachhead we ultimately selected,” says Crawford. Nona Technologies has since won MIT’s $100K Entrepreneurship Competition, received media attention from outlets like Newsweek and Fortune, and hired a team that continues to further the technology for deployment in resource-limited areas where clean drinking water may be scarce. 

Food-borne diseases sicken millions of people worldwide each year, but J-WAFS researchers are addressing this issue by integrating molecular engineering, nanotechnology, and artificial intelligence to revolutionize food pathogen testing. Professors Tim Swager and Alexander Klibanov, of the Department of Chemistry, were awarded one of the first J-WAFS Solutions grants for their sensor that targets food safety pathogens. The sensor uses specialized droplets that behave like a dynamic lens, changing in the presence of target bacteria in order to detect dangerous bacterial contamination in food. In 2018, Swager launched Xibus Systems Inc. to bring the sensor to market and advance food safety for greater public health, sustainability, and economic security.

“Our involvement with the J-WAFS Solutions Program has been vital,” says Swager. “It has provided us with a bridge between the academic world and the business world and allowed us to perform more detailed work to create a usable application,” he adds. In 2022, Xibus developed a product called XiSafe, which enables the detection of contaminants like salmonella and listeria faster and with higher sensitivity than other food testing products. The innovation could save food processors billions of dollars worldwide and prevent thousands of food-borne fatalities annually.

J-WAFS Solutions companies have raised nearly $66 million in venture capital and other funding. Just this past June, J-WAFS spinout SiTration announced that it raised an $11.8 million seed round. Jeffrey Grossman, a professor in MIT’s Department of Materials Science and Engineering, was another early J-WAFS Solutions grantee for his work on low-cost energy-efficient filters for desalination. The project enabled the development of nanoporous membranes and resulted in two spinout companies, Via Separations and SiTration. SiTration was co-founded by Brendan Smith PhD ’18, who was a part of the original J-WAFS team. Smith is CEO of the company and has overseen the advancement of the membrane technology, which has gone on to reduce cost and resource consumption in industrial wastewater treatment, advanced manufacturing, and resource extraction of materials such as lithium, cobalt, and nickel from recycled electric vehicle batteries. The company also recently announced that it is working with the mining company Rio Tinto to handle harmful wastewater generated at mines.

But it's not just J-WAFS spinout companies that are producing real-world results. Products like the ECC Vial — a portable, low-cost method for E. coli detection in water — have been brought to the market and helped thousands of people. The test kit was developed by MIT D-Lab Lecturer Susan Murcott and Professor Jeffrey Ravel of the MIT History Section. The duo received a J-WAFS Solutions grant in 2018 to promote safely managed drinking water and improved public health in Nepal, where it is difficult to identify which wells are contaminated by E. coli. By the end of their grant period, the team had manufactured approximately 3,200 units, of which 2,350 were distributed — enough to help 12,000 people in Nepal. The researchers also trained local Nepalese on best manufacturing practices.

“It’s very important, in my life experience, to follow your dream and to serve others,” says Murcott. Economic success is important to the health of any venture, whether it’s a company or a product, but equally important is the social impact — a philosophy that J-WAFS research strives to uphold. “Do something because it’s worth doing and because it changes people’s lives and saves lives,” Murcott adds.

As J-WAFS prepares to celebrate its 10th anniversary this year, we look forward to continued collaboration with MIT’s many I&E programs to advance knowledge and develop solutions that will have tangible effects on the world’s water and food systems.

Learn more about the J-WAFS Solutions program and about innovation and entrepreneurship at MIT.


3 Questions: Bridging anthropology and engineering for clean energy in Mongolia

Anthropologists Manduhai Buyandelger and Lauren Bonilla discuss the humanistic perspective they bring to a project that is yielding promising results.


In 2021, Michael Short, an associate professor of nuclear science and engineering, approached professor of anthropology Manduhai Buyandelger with an unusual pitch: collaborating on a project to prototype a molten salt heat bank in Mongolia, Buyandelger’s country of origin and the site of her scholarship. It was also an invitation to forge a novel partnership between two disciplines that rarely overlap. Developed in collaboration with the National University of Mongolia (NUM), the device was built to provide heat for people in colder climates, and in places where clean energy is a challenge.

Buyandelger and Short teamed up to launch Anthro-Engineering Decarbonization at the Million-Person Scale, an initiative intended to advance the heat bank idea in Mongolia, and ultimately demonstrate its potential as a scalable clean heat source in comparably challenging sites around the world. This project received funding from the inaugural MIT Climate and Sustainability Consortium Seed Awards program. To fund various components of the project, especially student involvement and additional staff, the team also received support from the MIT Global Seed Fund, New Engineering Education Transformation (NEET), the Experiential Learning Office, the Vice Provost for International Activities, and the d’Arbeloff Fund for Excellence in Education.

As part of this initiative, the partners developed a special topic course in anthropology to teach MIT undergraduates about Mongolia’s unique energy and climate challenges, as well as the historical, social, and economic context in which the heat bank would ideally find a place. The class 21A.S01 (Anthro-Engineering: Decarbonization at the Million-Person Scale) prepares MIT students for a January Independent Activities Period (IAP) trip to the Mongolian capital of Ulaanbaatar, where they embed with Mongolian families, conduct research, and collaborate with their peers. Mongolian students take part in the project as well. Anthropology research scientist and lecturer Lauren Bonilla, who has spent the past two decades working in Mongolia, joined the project to co-teach the class and lead the IAP trips to Mongolia.

With the project now in its third year and yielding some promising solutions on the ground, Buyandelger and Bonilla reflect on the challenges for anthropologists of advancing a clean energy technology in a developing nation with a unique history, politics, and culture. 

Q: Your roles in the molten salt heat bank project mark departures from your typical academic routine. How did you first approach this venture?

Buyandelger: As an anthropologist of contemporary religion, politics, and gender in Mongolia, I have had little contact with the hard sciences or building or prototyping technology. What I do best is listening to people and working with narratives. When I first learned about this device for off-the-grid heating, a host of issues came to mind right away, rooted in the socioeconomic and cultural context of the place. The salt brick, which is encased in steel, must be heated to 400 degrees Celsius in a central facility, then driven to people’s homes. Transportation is difficult in Ulaanbaatar, and I worried about road safety when driving the salt brick to gers [traditional Mongolian homes] where many residents live. The device seemed a bit utopian to me, but I realized that this was an amazing educational opportunity: We could use the heat bank as part of an ethnographic project, so students could learn about the everyday lives of people — crucially, in the dead of winter — and how they might respond to this new energy technology in the neighborhoods of Ulaanbaatar.
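
To get a rough sense of why a hot salt brick is attractive as an off-grid heat store, consider a back-of-the-envelope estimate; the brick’s mass and salt chemistry are not given here, so the numbers below are illustrative assumptions only. For a 100-kilogram brick of a nitrate salt with a specific heat of roughly 1.5 kJ/(kg·K), cooling from 400 degrees Celsius to 60 degrees Celsius would release about

Q ≈ m × c × ΔT ≈ 100 kg × 1.5 kJ/(kg·K) × 340 K ≈ 51 MJ ≈ 14 kWh

of sensible heat, plus extra latent heat if the salt solidifies as it cools, which is a meaningful share of what a small, well-insulated dwelling might need to stay warm overnight.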

Bonilla: When I first went to Mongolia in the early 2000s as an undergraduate student, the impacts of climate change were already being felt. There had been a massive migration to the capital after a series of terrible weather events that devastated the rural economy. Coal mining had emerged as a vital part of the economy, and I was interested in how people regarded this industry that both provided jobs and damaged the air they breathed. I am trained as a human geographer, which involves seeing how things happening in a local place correspond to things happening at a global scale. Thinking about climate or sustainability from this perspective means making linkages between social life and environmental life. In Mongolia, people associated coal with national progress. Based on historical experience, they had low expectations for interventions brought by outsiders to improve their lives. So my first take on the molten salt project was that this was no silver bullet solution. At the same time, I wanted to see how we could make this a great project-based learning experience for students, getting them to think about the kind of research necessary to see if some version of the molten salt would work.

Q: After two years, what lessons have you and the students drawn from both the class and the Ulaanbaatar field trips?

Buyandelger: We wanted to make sure MIT students would not go to Mongolia and act like consultants. We taught them anthropological methods so they could understand the experiences of real people and think about how to bring people and new technologies together. The students, from engineering and anthropological and social science backgrounds, became critical thinkers who could analyze how people live in ger districts. When they stay with families in Ulaanbaatar in January, they not only experience the cold and the pollution, but they observe what people do for work, how parents care for their children, how they cook, sleep, and get from one place to another. This enables them to better imagine and test out how these people might utilize the molten salt heat bank in their homes.

Bonilla: In class, students learn that interventions like this often fail because the implementation process doesn’t work, or the technology doesn’t meet people’s real needs. This is where anthropology is so important, because it opens up the wider landscape in which you’re intervening. We had really difficult conversations about the professional socialization of engineers and social scientists. Engineers love to work within boxes, but don’t necessarily appreciate the context in which their invention will be used.

As a group, we discussed the provocative notion that engineers construct and anthropologists deconstruct. This makes it seem as if engineers are creators, and anthropologists are brought in as add-ons to consult and critique engineers’ creations. Our group conversation concluded that a project such as ours benefits from an iterative back-and-forth between the techno-scientific and humanistic disciplines.

Q: So where does the molten salt brick project stand?

Bonilla: Our research in Mongolia helped us produce a prototype that can work: Our partners at NUM are developing a hybrid stove that incorporates the molten salt brick. Supervised by instructor Nathan Melenbrink of MIT’s NEET program, our engineering students have been involved in this prototyping as well.

The concept is for a family to heat the brick with a coal fire once a day; the stored heat then warms their home overnight. Based on our anthropological research, we believe that this stove would work better than the device as originally conceived. It won’t eliminate coal use in residences, but it will reduce emissions enough to have a meaningful impact on ger districts in Ulaanbaatar. The challenge now is getting funding to NUM so they can test different salt combinations and stove models and employ local blacksmiths to work on the design.

This integrated stove/heat bank will not be the ultimate solution to the heating and pollution crisis in Mongolia. But it will be something that can inspire even more ideas. We feel with this project we are planting all kinds of seeds that will germinate in ways we cannot anticipate. It has sparked new relationships between MIT and Mongolian students, and catalyzed engineers to integrate a more humanistic, anthropological perspective in their work.

Buyandelger: Our work illustrates the importance of anthropology in responding to the unpredictable and diverse impacts of climate change. Without our ethnographic research, based on participant observation and interviews and led by Dr. Bonilla, it would have been impossible to see how the prototyping and modifications could be done, where the molten salt brick could work, and what shape it needed to take. This project demonstrates how indispensable anthropology is in moving engineering out of labs and companies and directly into communities.

Bonilla: This is where the real solutions for climate change are going to come from. Even though we need solutions quickly, it will also take time for new technologies like molten salt bricks to take root and grow. We don’t know where the outcomes of these experiments will take us. But there’s so much that’s emerging from this project that I feel very hopeful about.


An interstellar instrument takes a final bow

The Plasma Science Experiment aboard NASA’s Voyager 2 spacecraft turns off after 47 years and 15 billion miles.


They planned to fly for four years and to get as far as Jupiter and Saturn. But nearly half a century and 15 billion miles later, NASA’s twin Voyager spacecraft have far exceeded their original mission, winging past the outer planets and busting out of our heliosphere, beyond the influence of the sun. The probes are currently making their way through interstellar space, traveling farther than any human-made object.

Along their improbable journey, the Voyagers made first-of-their-kind observations at all four giant outer planets and their moons using only a handful of instruments, including MIT’s Plasma Science Experiments — identical plasma sensors that were designed and built in the 1970s in Building 37 by MIT scientists and engineers.

The Plasma Science Experiment (also known as the Plasma Spectrometer, or PLS for short) measured charged particles in planetary magnetospheres, the solar wind, and the interstellar medium, the material between stars. Since launching on the Voyager 2 spacecraft in 1977, the PLS has revealed new phenomena near all the outer planets and in the solar wind across the solar system. The experiment played a crucial role in confirming the moment when Voyager 2 crossed the heliosphere and moved outside of the sun’s regime, into interstellar space.

Now, to conserve the little power left on Voyager 2 and prolong the mission’s life, the Voyager scientists and engineers have made the decision to shut off MIT’s Plasma Science Experiment. It’s the first in a line of science instruments that will progressively blink off over the coming years. On Sept. 26, the Voyager 2 PLS sent its last communication from 12.7 billion miles away, before it received the command to shut down.

MIT News spoke with John Belcher, the Class of 1922 Professor of Physics at MIT, who was a member of the original team that designed and built the plasma spectrometers, and John Richardson, principal research scientist at MIT’s Kavli Institute for Astrophysics and Space Research, who is the experiment’s principal investigator. Both Belcher and Richardson offered their reflections on the retirement of this interstellar piece of MIT history.

Q: Looking back at the experiment’s contributions, what are the greatest hits, in terms of what MIT’s Plasma Spectrometer has revealed about the solar system and interstellar space?

Richardson: A key PLS finding at Jupiter was the discovery of the Io torus, a plasma donut surrounding Jupiter, formed from sulfur and oxygen from Io’s volcanoes (which were discovered in Voyager images). At Saturn, PLS found a magnetosphere full of water and oxygen that had been knocked off of Saturn’s icy moons. At Uranus and Neptune, the tilt of the magnetic fields led to PLS seeing smaller density features, with Uranus’ plasma disappearing near the planet. Another key PLS observation was of the termination shock, which was the first observation of the plasma at the largest shock in the solar system, where the solar wind stopped being supersonic. This boundary had a huge drop in speed and an increase in the density and temperature of the solar wind. And finally, PLS documented Voyager 2’s crossing of the heliopause by detecting a stopping of outward-flowing plasma. This signaled the end of the solar wind and the beginning of the local interstellar medium (LISM). Although not designed to measure the LISM, PLS constantly measured the interstellar plasma currents beyond the heliosphere. It is very sad to lose this instrument and data!

Belcher: It is important to emphasize that PLS was the result of decades of development by MIT Professor Herbert Bridge (1919-1995) and Alan Lazarus (1931-2014). The first version of the instrument they designed was flown on Explorer 10 in 1961. And the most recent version is flying on the Parker Solar Probe, which is collecting measurements very close to the sun to understand the origins of the solar wind. Bridge was the principal investigator for plasma probes on spacecraft that visited the sun and every major planetary body in the solar system.

Q: During their tenure aboard the Voyager probes, how did the plasma sensors do their job over the last 47 years?

Richardson: There were four Faraday cup detectors designed by Herb Bridge that measured currents from ions and electrons that entered the detectors. By measuring these particles at different energies, we could find the plasma velocity, density, and temperature in the solar wind and in the four planetary magnetospheres Voyager encountered. Voyager data were (and are still) sent to Earth every day and received by NASA’s Deep Space Network antennas. Keeping two 1970s-era spacecraft going for 47 years and counting has been an amazing feat of JPL engineering prowess — you can google the most recent rescue, when Voyager 1 lost some memory in November 2023 and stopped sending data. JPL figured out the problem and was able to reprogram the flight data system from 15 billion miles away, and all is back to normal now. Shutting down PLS involves sending a command that will reach Voyager 2 about 19 hours later, freeing up enough power for the rest of the spacecraft to continue operating.
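
For readers curious how a Faraday cup turns measured currents into plasma properties, a simplified relation gives the flavor; the actual analysis fits model particle distributions to the currents recorded in many energy-per-charge windows. For a cold ion beam flowing straight into a cup with effective collecting area A, the collected current is approximately

I ≈ n q v A,

where n is the ion density, q the ion charge, and v the bulk flow speed. How that current is distributed across the energy windows then reflects the spread of particle speeds, and hence the plasma temperature.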

Q: Once the plasma sensors have shut down, how much more could Voyager do, and how far might it still go?

Richardson: Voyager will still measure the galactic cosmic rays, magnetic fields, and plasma waves. The available power decreases about 4 watts per year as the plutonium which powers them decays. We hope to keep some of the instruments running until the mid-2030s, but that will be a challenge as power levels decrease.

Belcher: Nick Oberg at the Kapteyn Astronomical Institute in the Netherlands has made an exhaustive study of the future of the spacecraft, using data from the European Space Agency’s spacecraft Gaia. In about 30,000 years, the spacecraft will reach the distance to the nearest stars. Because space is so vast, there is zero chance that the spacecraft will collide directly with a star in the lifetime of the universe. The spacecraft’s surfaces will erode through microcollisions with vast clouds of interstellar dust, but this happens very slowly.

In Oberg’s estimate, the Golden Records [identical records that were placed aboard each probe, that contain selected sounds and images to represent life on Earth] are likely to survive for a span of over 5 billion years. After those 5 billion years, things are difficult to predict, since at this point, the Milky Way will collide with its massive neighbor, the Andromeda galaxy. During this collision, there is a one in five chance that the spacecraft will be flung into the intergalactic medium, where there is little dust and little weathering. In that case, it is possible that the spacecraft will survive for trillions of years. A trillion years is about 100 times the current age of the universe. The Earth ceases to exist in about 6 billion years, when the sun enters its red giant phase and engulfs it.

In a “poor man’s” version of the Golden Record, Robert Butler, the chief engineer of the Plasma Instrument, inscribed the names of the MIT engineers and scientists who had worked on the spacecraft on the collector plate of the side-looking cup. Butler’s home state was New Hampshire, and he put the state motto, “Live Free or Die,” at the top of the list of names. Thanks to Butler, although New Hampshire will not survive for a trillion years, its state motto might. The flight spare of the PLS instrument is now displayed at the MIT Museum, where you can see the text of Butler’s message by peering into the side-looking sensor. 


Q&A: A new initiative to help strengthen democracy

David Singer, head of the MIT Department of Political Science, discusses the Strengthening Democracy Initiative, focused on the rigorous study of elections, public opinion, and political participation.


In the United States and around the world, democracy is under threat. Anti-democratic attitudes have become more prevalent, partisan polarization is growing, misinformation is omnipresent, and politicians and citizens sometimes question the integrity of elections. 

With this backdrop, the MIT Department of Political Science is launching an effort to establish a Strengthening Democracy Initiative. In this Q&A, department head David Singer, the Raphael Dorman-Helen Starbuck Professor of Political Science, discusses the goals and scope of the initiative.

Q: What is the purpose of the Strengthening Democracy Initiative?

A: Well-functioning democracies require accountable representatives, accurate and freely available information, equitable citizen voice and participation, free and fair elections, and an abiding respect for democratic institutions. It is unsettling for the political science community to see more and more evidence of democratic backsliding in Europe, Latin America, and even here in the U.S. While we cannot single-handedly stop the erosion of democratic norms and practices, we can focus our energies on understanding and explaining the root causes of the problem, and devising interventions to maintain the healthy functioning of democracies.

MIT political science has a history of generating important research on many facets of the democratic process, including voting behavior, election administration, information and misinformation, public opinion and political responsiveness, and lobbying. The goals of the Strengthening Democracy Initiative are to place these various research programs under one umbrella, to foster synergies among our various research projects and between political science and other disciplines, and to mark MIT as the country’s leading center for rigorous, evidence-based analysis of democratic resiliency.

Q: What is the initiative’s research focus?

A: The initiative is built upon three research pillars. One pillar is election science and administration. Democracy cannot function without well-run elections and, just as important, popular trust in those elections. Even within the U.S., let alone other countries, there is tremendous variation in the electoral process: whether and how people register to vote, whether they vote in person or by mail, how polling places are run, how votes are counted and validated, and how the results are communicated to citizens.

The MIT Election Data and Science Lab is already the country’s leading center for the collection and analysis of election-related data and dissemination of electoral best practices, and it is well positioned to increase the scale and scope of its activities.

The second pillar is public opinion, a rich area of study that includes experimental studies of public responses to misinformation and analyses of government responsiveness to mass attitudes. Our faculty employ survey and experimental methods to study a range of substantive areas, including taxation and health policy, state and local politics, and strategies for countering political rumors in the U.S. and abroad. Faculty research programs form the basis for this pillar, along with longstanding collaborations such as the Political Experiments Research Lab, an annual omnibus survey in which students and faculty can participate, and frequent conferences and seminars.

The third pillar is political participation, which includes the impact of the criminal justice system and other negative interactions with the state on voting, the creation of citizen assemblies, and the lobbying behavior of firms on Congressional legislation. Some of this research relies on machine learning and AI to cull and parse an enormous amount of data, giving researchers visibility into phenomena that were previously difficult to analyze. A related research area on political deliberation brings together computer science, AI, and the social sciences to analyze the dynamics of political discourse in online forums and the possible interventions that can attenuate political polarization and foster consensus.

The initiative’s flexible design will allow for new pillars to be added over time, including international and homeland security, strengthening democracies in different regions of the world, and tackling new challenges to democratic processes that we cannot see yet.

Q: Why is MIT well-suited to host this new initiative?

A: Many people view MIT as a STEM-focused, highly technical place. And indeed it is, but there is a tremendous amount of collaboration across and within schools at MIT — for example, between political science and the Schwarzman College of Computing and the Sloan School of Management, and between the social science fields and the schools of science and engineering. The Strengthening Democracy Initiative will benefit from these collaborations and create new bridges between political science and other fields. It’s also important to note that this is a nonpartisan research endeavor. The MIT political science department has a reputation for rigorous, data-driven approaches to the study of politics, and its position within the MIT ecosystem will help us to maintain a reputation as an “honest broker,” and to disseminate path-breaking, evidence-based research and interventions to help democracies become more resilient.

Q: Will the new initiative have an educational mission?

A: Of course! The department has a long history of bringing in scores of undergraduate researchers via MIT’s Undergraduate Research Opportunities Program. The initiative will be structured to provide these students with opportunities to study various facets of the democratic process, and for faculty to have a ready pool of talented students to assist with their projects. My hope is to provide students with the resources and opportunities to test their own theories by designing and implementing surveys in the U.S. and abroad, and use insights and tools from computer science, applied statistics, and other disciplines to study political phenomena. As the initiative grows, I expect more opportunities for students to collaborate with state and local officials on improvements to election administration, and to study new puzzles related to healthy democracies.

Postdoctoral researchers will also play a prominent role by advancing research across the initiative’s pillars, supervising undergraduate researchers, and handling some of the administrative aspects of the work.

Q: This sounds like a long-term endeavor. Do you expect this initiative to be permanent?

A: Yes. We already have the pieces in place to create a leading center for the study of healthy democracies (and how to make them healthier). But we need to build capacity, including resources for a pool of researchers to shift from one project to another, which will permit synergies between projects and foster new ones. A permanent initiative will also provide the infrastructure for faculty and students to respond swiftly to current events and new research findings — for example, by launching a nationwide survey experiment, or collecting new data on an aspect of the electoral process, or testing the impact of a new AI technology on political perceptions. As I like to tell our supporters, there are new challenges to healthy democracies that were not on our radar 10 years ago, and no doubt there will be others 10 years from now that we have not imagined. We need to be prepared to do the rigorous analysis on whatever challenges come our way. And MIT Political Science is the best place in the world to undertake this ambitious agenda in the long term.


AI simulation gives people a glimpse of their potential future self

By enabling users to chat with an older version of themselves, Future You is aimed at reducing anxiety and guiding young people to make better choices.


Have you ever wanted to travel through time to see what your future self might be like? Now, thanks to the power of generative AI, you can.

Researchers from MIT and elsewhere created a system that enables users to have an online, text-based conversation with an AI-generated simulation of their potential future self.

Dubbed Future You, the system is aimed at helping young people improve their sense of future self-continuity, a psychological concept that describes how connected a person feels with their future self.

Research has shown that a stronger sense of future self-continuity can positively influence how people make long-term decisions, from a person’s likelihood of contributing to financial savings to their focus on achieving academic success.

Future You utilizes a large language model that draws on information provided by the user to generate a relatable, virtual version of the individual at age 60. This simulated future self can answer questions about what someone’s life in the future could be like, as well as offer advice or insights on the path they could follow.

In an initial user study, the researchers found that after interacting with Future You for about half an hour, people reported decreased anxiety and felt a stronger sense of connection with their future selves.

“We don’t have a real time machine yet, but AI can be a type of virtual time machine. We can use this simulation to help people think more about the consequences of the choices they are making today,” says Pat Pataranutaporn, a recent Media Lab doctoral graduate who is actively developing a program to advance human-AI interaction research at MIT, and co-lead author of a paper on Future You.

Pataranutaporn is joined on the paper by co-lead authors Kavin Winson, a researcher at KASIKORN Labs, and Peggy Yin, a Harvard University undergraduate; as well as Auttasak Lapapirojn and Pichayoot Ouppaphan of KASIKORN Labs; and senior authors Monchai Lertsutthiwong, head of AI research at the KASIKORN Business-Technology Group; Pattie Maes, the Germeshausen Professor of Media, Arts, and Sciences and head of the Fluid Interfaces group at MIT; and Hal Hershfield, professor of marketing, behavioral decision making, and psychology at the University of California at Los Angeles. The research will be presented at the IEEE Conference on Frontiers in Education.

A realistic simulation

Studies about conceptualizing one’s future self go back to at least the 1960s. One early method aimed at improving future self-continuity had people write letters to their future selves. More recently, researchers utilized virtual reality goggles to help people visualize future versions of themselves.

But none of these methods were very interactive, limiting the impact they could have on a user.

With the advent of generative AI and large language models like ChatGPT, the researchers saw an opportunity to make a simulated future self that could discuss someone’s actual goals and aspirations during a normal conversation.

“The system makes the simulation very realistic. Future You is much more detailed than what a person could come up with by just imagining their future selves,” says Maes.

Users begin by answering a series of questions about their current lives, things that are important to them, and goals for the future.

The AI system uses this information to create what the researchers call “future self memories” which provide a backstory the model pulls from when interacting with the user.

For instance, the chatbot could talk about the highlights of someone’s future career or answer questions about how the user overcame a particular challenge. This is possible because ChatGPT has been trained on extensive data involving people talking about their lives, careers, and good and bad experiences.
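
The team’s exact prompts and code are not reproduced here, but the flow described above can be sketched in a few lines of Python. Everything in the sketch, from the function names to the questionnaire fields and prompt wording, is a hypothetical illustration rather than the actual Future You implementation:

# Illustrative sketch only: turning questionnaire answers into "future self
# memories" that seed a chat with a simulated 60-year-old self. All names,
# fields, and prompt wording here are hypothetical.

def build_future_self_memories(answers: dict) -> str:
    """Compose a synthetic backstory from the user's questionnaire answers."""
    return (
        "You are the user's future self at age 60.\n"
        f"Back then, their main goal was: {answers['goal']}.\n"
        f"What mattered most to them was: {answers['values']}.\n"
        "Answer warmly, in the first person, using phrases like "
        "'when I was your age,' and describe how that earlier life unfolded."
    )

def chat_with_future_self(answers: dict, user_message: str, llm) -> str:
    """Prepend the synthetic memories to each exchange with a language model."""
    system_prompt = build_future_self_memories(answers)
    return llm(system_prompt=system_prompt, user_message=user_message)

# Hypothetical usage, with `llm` standing in for any chat-model API:
# reply = chat_with_future_self(
#     {"goal": "become a nurse", "values": "family and community"},
#     "What was the hardest part of your career?",
#     llm=my_chat_model,
# )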

The user engages with the tool in two ways: through introspection, when they consider their life and goals as they construct their future selves, and retrospection, when they contemplate whether the simulation reflects who they see themselves becoming, says Yin.

“You can imagine Future You as a story search space. You have a chance to hear how some of your experiences, which may still be emotionally charged for you now, could be metabolized over the course of time,” she says.

To help people visualize their future selves, the system generates an age-progressed photo of the user. The chatbot is also designed to provide vivid answers using phrases like “when I was your age,” so the simulation feels more like an actual future version of the individual.

The ability to take advice from an older version of oneself, rather than a generic AI, can have a stronger positive impact on a user contemplating an uncertain future, Hershfield says.

“The interactive, vivid components of the platform give the user an anchor point and take something that could result in anxious rumination and make it more concrete and productive,” he adds.

But that realism could backfire if the simulation moves in a negative direction. To prevent this, the researchers ensure that Future You cautions users that it shows only one potential version of their future self, and that they have the agency to change their lives. Providing alternate answers to the questionnaire yields a totally different conversation.

“This is not a prophecy, but rather a possibility,” Pataranutaporn says.

Aiding self-development

To evaluate Future You, the researchers conducted a user study with 344 individuals. Some users interacted with the system for 10 to 30 minutes, while others either interacted with a generic chatbot or only filled out surveys.

Participants who used Future You were able to build a closer relationship with their ideal future selves, based on a statistical analysis of their responses. These users also reported less anxiety about the future after their interactions. In addition, Future You users said the conversation felt sincere and that their values and beliefs seemed consistent in their simulated future identities.

“This work forges a new path by taking a well-established psychological technique to visualize times to come — an avatar of the future self — with cutting edge AI. This is exactly the type of work academics should be focusing on as technology to build virtual self models merges with large language models,” says Jeremy Bailenson, the Thomas More Storke Professor of Communication at Stanford University, who was not involved with this research.

Building off the results of this initial user study, the researchers continue to fine-tune the ways they establish context and prime users so they have conversations that help build a stronger sense of future self-continuity.

“We want to guide the user to talk about certain topics, rather than asking their future selves who the next president will be,” Pataranutaporn says.

They are also adding safeguards to prevent people from misusing the system. For instance, one could imagine a company creating a “future you” of a potential customer who achieves some great outcome in life because they purchased a particular product.

Moving forward, the researchers want to study specific applications of Future You, perhaps by enabling people to explore different careers or visualize how their everyday choices could impact climate change.

They are also gathering data from the Future You pilot to better understand how people use the system.

“We don’t want people to become dependent on this tool. Rather, we hope it is a meaningful experience that helps them see themselves and the world differently, and helps with self-development,” Maes says.

The researchers acknowledge the support of Thanawit Prasongpongchai, a designer at KBTG and visiting scientist at the Media Lab.


State of Supply Chain Sustainability report reveals growing investor pressure, challenges with emissions tracking

The 2024 report highlights five years of global progress but uncovers gaps between companies’ sustainability goals and the investments required to achieve them.


The MIT Center for Transportation and Logistics (MIT CTL) and the Council of Supply Chain Management Professionals (CSCMP) have released the 2024 State of Supply Chain Sustainability report, marking the fifth edition of this influential research. The report highlights how supply chain sustainability practices have evolved over the past five years, assessing their global implementation and implications for industries, professionals, and the environment.

This year’s report is based on four years of comprehensive international surveys with responses from over 7,000 supply chain professionals representing more than 80 countries, coupled with insights from executive interviews. It explores how external pressures on firms, such as growing investor demand and climate regulations, are driving sustainability initiatives. However, it also reveals persistent gaps between companies’ sustainability goals and the actual investments required to achieve them.

"Over the past five years, we have seen supply chains face unprecedented global challenges. While companies have made strides, our analysis shows that many are still struggling to align their sustainability ambitions with real progress, particularly when it comes to tackling Scope 3 emissions," says Josué Velázquez Martínez, MIT CTL research scientist and lead investigator. "Scope 3 emissions, which account for the vast majority of a company’s carbon footprint, remain a major hurdle due to the complexity of tracking emissions from indirect supply chain activities. The margin of error of the most common approach to estimate emissions are drastic, which disincentivizes companies to make more sustainable choices at the expense of investing in green alternatives."

Mark Baxa, president and CEO of CSCMP, emphasized the importance of collaboration: "Businesses and consumers alike are putting pressure on us to source and supply products to live up to their social and environmental standards. The State of Supply Chain Sustainability 2024 provides a thorough analysis of our current understanding, along with valuable insights on how to improve our Scope 3 emissions accounting to have a greater impact on lowering our emissions."

The report also underscores the importance of technological innovations, such as machine learning, advanced data analytics, and standardization to improve the accuracy of emissions tracking and help firms make data-driven sustainability decisions.

The 2024 State of Supply Chain Sustainability can be accessed online or in PDF format at sustainable.mit.edu.

The MIT CTL is a world leader in supply chain management research and education, with over 50 years of expertise. The center's work spans industry partnerships, cutting-edge research, and the advancement of sustainable supply chain practices. CSCMP is the leading global association for supply chain professionals. Established in 1963, CSCMP provides its members with education, research, and networking opportunities to advance the field of supply chain management.


Aligning economic and regulatory frameworks for today’s nuclear reactor technology

Today’s regulations for nuclear reactors are unprepared for how the field is evolving. PhD student Liam Hines wants to ensure that policy keeps up with the technology.


Liam Hines ’22 didn't move to Sarasota, Florida, until high school, but he’s a Floridian through and through. He jokes that he’s even got a floral shirt, what he calls a “Florida formal,” for every occasion.

Which is why it broke his heart when toxic red algae began to devastate the Sunshine State’s coastline, including his favorite beach, Caspersen. The outbreak made headline news during his high school years, with the blooms destroying marine wildlife and adversely impacting the state’s tourism-driven economy.

In Florida, Hines says, environmental awareness is pretty high because everyday citizens are being directly impacted by climate change. After all, it’s hard not to worry when beautiful white sand beaches are covered in dead fish. Ongoing concerns about the climate cemented Hines’ resolve to pick a career that would have a strong “positive environmental impact.” He chose nuclear, as he saw it as “a green, low-carbon-emissions energy source with a pretty straightforward path to implementation.”

Undergraduate studies at MIT

Knowing he wanted a career in the sciences, Hines applied and got accepted to MIT for undergraduate studies in fall 2018. An orientation program hosted by the Department of Nuclear Science and Engineering (NSE) sold him on the idea of pursuing the field. “The department is just a really tight-knit community, and that really appealed to me,” Hines says.

During his undergraduate years, Hines realized he needed a job to pay part of his bills. “Instead of answering calls at the dorm front desk or working in the dining halls, I decided I’m going to become a licensed nuclear operator onsite,” he says. “Reactor operations offer so much hands-on experience with real nuclear systems. It doesn’t hurt that it pays better.” Becoming a licensed nuclear reactor operator is hard work, however, involving a year-long training process covering maintenance, operations, and equipment oversight. A bonus: The job at the MIT Nuclear Reactor Laboratory taught him the fundamentals of nuclear physics and engineering.

Always interested in research, Hines got an early start by exploring the regulatory challenges of advanced fusion systems. There have been questions related to licensing requirements and the safety consequences of the onsite radionuclide inventory. Hines’ undergraduate research work involved studying precedent for such fusion facilities and comparing them to experimental facilities such as the Tokamak Fusion Test Reactor at the Princeton Plasma Physics Laboratory.

Doctoral focus on legal and regulatory frameworks

When scientists want to make technologies as safe as possible, they have to do two things in concert: First they evaluate the safety of the technology, and then make sure legal and regulatory structures take into account the evolution of these advanced technologies. Hines is taking such a two-pronged approach to his doctoral work on nuclear fission systems.

Under the guidance of Professor Koroush Shirvan, Hines is conducting systems modeling of various reactor cores that include graphite, and simulating their operation over long time spans. He then studies radionuclide transport from low-level waste facilities — the consequences of offsite storage after 50, 100, or even 10,000 years. The work has to hit safety and engineering margins while also treading a fine line. “You want to make sure you’re not over-engineering systems and adding undue cost, but also making sure to assess the unique hazards of these advanced technologies as accurately as possible,” Hines says.

On a parallel track, under Professor Haruko Wainwright’s advisement, Hines is applying the current science on radionuclide geochemistry to track radionuclide wastes and map their profile for hazards. One of the challenges fission reactors face is that existing low-level waste regulations were fine-tuned to old reactors. Regulations have not kept up: “Now that we have new technologies with new wastes, some of the hazards of the new waste are completely missed by existing standards,” Hines says. He is working to seal these gaps.

A philosophy-driven outlook

Hines is grateful for the dynamic learning environment at NSE. “A lot of the faculty have that go-getter attitude,” he points out, impressed by the entrepreneurial spirit on campus. “It’s made me confident to really tackle the things that I care about.”

An ethics class as an undergraduate made Hines realize there were discussions in class he could apply to the nuclear realm, especially when it came to teasing apart the implications of the technology — where the devices would be built and who they would serve. He eventually went on to double-major in NSE and philosophy.

The framework style of reading and reasoning involved in studying philosophy is particularly relevant in his current line of work, where he has to extract key points regarding nuclear regulatory issues. Much as philosophy discussions today revisit material that has been debated for centuries and frame it through new perspectives, nuclear regulatory issues also need to take the long view.

“In philosophy, we have to insert ourselves into very large conversations. Similarly, in nuclear engineering, you have to understand how to take apart the discourse that’s most relevant to your research and frame it,” Hines says. This technique is especially necessary because nuclear regulatory issues can seem like wading into the weeds of nitty-gritty technical matters, yet they can have a huge impact on the public and on public perception, Hines adds.

As for Florida, Hines visits every chance he can get. The red tide still surfaces but not as consistently as it once did. And since he started his job as a nuclear operator in his undergraduate days, Hines has progressed to senior reactor operator. This time around he gets to sign off on the checklists. “It’s much like when I was shift lead at Dunkin’ Donuts in high school,” Hines says, “everyone is kind of doing the same thing, but you get to be in charge for the afternoon.”


Where flood policy helps most — and where it could do more

A U.S. program provides important flood insurance relief, but it’s used more in communities with greater means to protect themselves.


Flooding, including the devastation caused recently by Hurricane Helene, is responsible for $5 billion in annual damages in the U.S. That’s more than any other type of weather-related extreme event.

To address the problem, the federal government instituted a program in 1990 that helps reduce flood insurance costs in communities enacting measures to better handle flooding. If, say, a town preserves open space as a buffer against coastal flooding, or develops better stormwater management, area policyholders get discounts on their premiums. Studies show the program works well: It has reduced overall flood damage in participating communities.

However, a new study led by an MIT researcher shows that the effects of the program differ greatly from place to place. For instance, higher-population communities, which likely have more means to introduce flood defenses, benefit more than smaller communities, to the tune of about $4,000 per insured household.

“When we evaluate it, the effects of the same policy vary widely among different types of communities,” says study co-author Lidia Cano Pecharromán, a PhD candidate in MIT’s Department of Urban Studies and Planning.

Referring to climate and environmental justice concerns, she adds: “It’s important to understand not just if a policy is effective, but who is benefitting, so that we can make necessary adjustments and reach all the targets we want to reach.”

The paper, “Exposing Disparities in Flood Adaptation for Equitable Future Interventions in the USA,” is published today in Nature Communications. The authors are Cano Pecharromán and ChangHoon Hahn, an associate research scholar at Princeton University.

Able to afford help

The program in question was developed by the Federal Emergency Management Agency (FEMA), which has a division, the Federal Insurance and Mitigation Administration, focusing on this issue. In 1990, FEMA initiated the National Flood Insurance Program’s Community Rating System, which incentivizes communities to enact measures that help prevent or reduce flooding.

Communities can engage in a broad set of related activities, including floodplain mapping, preservation of open spaces, stormwater management activities, creating flood warning systems, or even developing public information and participation programs. In exchange, area residents receive a discount on their flood insurance premium rates.

To conduct the study, the researchers examined 2.5 million flood insurance claims filed with FEMA since then. They also analyzed U.S. Census Bureau data on communities’ demographics and economies, and incorporated flood risk data from the First Street Foundation.

By comparing over 1,500 communities in the FEMA program, the researchers were able to quantify its different relative effects — depending on community characteristics such as population, race, income or flood risk. For instance, higher-income communities seem better able to make more flood-control and mitigation investments, earning better FEMA ratings and, ultimately, enacting more effective measures.

“You see some positive effects for low-income communities, but as the risks go up, these disappear, while only high-income communities continue seeing these positive effects,” says Cano Pecharromán. “They are likely able to afford measures that handle higher risk indices for flooding.”

Similarly, the researchers found, communities with higher overall levels of education fare better from the flood-insurance program, with about $2,000 more in savings per individual policy than communities with lower levels of education. One way or another, communities with more assets in the first place — size, wealth, education — are better able to deploy or hire the civic and technical expertise necessary to enact more best practices against flood damage.

And even among lower-income communities in the program, communities with less population diversity see greater effectiveness from their flood program activities, realizing a gain of about $6,000 per household compared to communities where racial and ethnic minorities are predominant.

“These are substantial effects, and we should consider these things when making decisions and reviewing if our climate adaptation policies work,” Cano Pecharromán says.

An even larger number of communities is not in the FEMA program at all. The study identified 14,729 unique U.S. communities with flood issues. Many of them likely lack the capacity to engage on flooding issues at all, whereas even the lower-ranked communities within the FEMA program have at least taken some action so far.

“If we are able to consider all the communities that are not in the program because they can’t afford to do the basics, we would likely see that the effects are even larger among different communities,” Cano Pecharromán says.

Getting communities started

To make the program more effective for more people, Cano Pecharromán suggests that the federal government should consider how to help communities enact flood-control and mitigation measures in the first place.

“When we set out these kinds of policies, we need to consider how certain types of communities might need help with implementation,” she says.

Methodologically, the researchers arrived at their conclusions using an advanced statistical approach that Hahn, who is an astrophysicist by training, has applied to the study of dark energy and galaxies. Instead of finding one “average treatment effect” of the FEMA program across all participating communities, they quantified the program’s impact while subdividing the set of participating communities according to their characteristics.

“We are able to calculate the causal effect of [the program], not as an average, which can hide these inequalities, but at every given level of the specific characteristic of communities we’re looking at, different levels of income, different levels of education, and more,” Cano Pecharromán says.
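
In the standard notation of causal inference, this is the difference between an average and a conditional treatment effect; the following is a general sketch of the idea, not the paper’s exact specification:

ATE = E[Y(1) − Y(0)]        versus        CATE(x) = E[Y(1) − Y(0) | X = x],

where Y(1) and Y(0) are a community’s flood-related outcomes with and without the program, and X collects characteristics such as income, education, population, and flood risk. Estimating CATE(x) across the full range of X, rather than reporting the single average, is what exposes the disparities the study describes.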

Government officials have seen Cano Pecharromán present the preliminary findings at meetings, and expressed interest in the results. Currently, she is also working on a follow-up study, which aims to pinpoint which types of local flood-mitigation programs provide the biggest benefits for local communities.

Support for the research was provided, in part, by the La Caixa Foundation, the MIT Martin Family Society of Fellows for Sustainability, and the AI Accelerator program of the Schmidt Sciences.


Helping robots zero in on the objects that matter

A new method called Clio enables robots to quickly map a scene and identify the items they need to complete a given set of tasks.


Imagine having to straighten up a messy kitchen, starting with a counter littered with sauce packets. If your goal is to wipe the counter clean, you might sweep up the packets as a group. If, however, you wanted to first pick out the mustard packets before throwing the rest away, you would sort more discriminately, by sauce type. And if, among the mustards, you had a hankering for Grey Poupon, finding this specific brand would entail a more careful search.

MIT engineers have developed a method that enables robots to make similarly intuitive, task-relevant decisions.

The team’s new approach, named Clio, enables a robot to identify the parts of a scene that matter, given the tasks at hand. With Clio, a robot takes in a list of tasks described in natural language and, based on those tasks, it then determines the level of granularity required to interpret its surroundings and “remember” only the parts of a scene that are relevant.

In real experiments ranging from a cluttered cubicle to a five-story building on MIT’s campus, the team used Clio to automatically segment a scene at different levels of granularity, based on a set of tasks specified in natural-language prompts such as “move rack of magazines” and “get first aid kit.”

The team also ran Clio in real-time on a quadruped robot. As the robot explored an office building, Clio identified and mapped only those parts of the scene that related to the robot’s tasks (such as retrieving a dog toy while ignoring piles of office supplies), allowing the robot to grasp the objects of interest.

Clio is named after the Greek muse of history, for its ability to identify and remember only the elements that matter for a given task. The researchers envision that Clio would be useful in many situations and environments in which a robot would have to quickly survey and make sense of its surroundings in the context of its given task.

“Search and rescue is the motivating application for this work, but Clio can also power domestic robots and robots working on a factory floor alongside humans,” says Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), principal investigator in the Laboratory for Information and Decision Systems (LIDS), and director of the MIT SPARK Laboratory. “It’s really about helping the robot understand the environment and what it has to remember in order to carry out its mission.”

The team details their results in a study appearing today in the journal Robotics and Automation Letters. Carlone’s co-authors include members of the SPARK Lab: Dominic Maggio, Yun Chang, Nathan Hughes, and Lukas Schmid; and members of MIT Lincoln Laboratory: Matthew Trang, Dan Griffith, Carlyn Dougherty, and Eric Cristofalo.

Open fields

Huge advances in the fields of computer vision and natural language processing have enabled robots to identify objects in their surroundings. But until recently, robots were only able to do so in “closed-set” scenarios, where they are programmed to work in a carefully curated and controlled environment, with a finite number of objects that the robot has been pretrained to recognize.

In recent years, researchers have taken a more “open” approach to enable robots to recognize objects in more realistic settings. In the field of open-set recognition, researchers have leveraged deep-learning tools to build neural networks that can process billions of images from the internet, along with each image’s associated text (such as a friend’s Facebook picture of a dog, captioned “Meet my new puppy!”).

Trained on millions of image-text pairs, a neural network learns to identify the segments in a scene that are characteristic of certain terms, such as a dog. A robot can then apply that neural network to spot a dog in a totally new scene.
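
As a rough illustration of how this kind of open-set matching works, the sketch below scores image-segment embeddings against a text embedding in a shared space. It is a generic sketch with made-up embedding functions, not Clio’s actual code:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matching_segments(segment_embeddings, text_embedding, threshold=0.3):
    """Return indices of image segments whose embeddings align with a text query.

    The embeddings are assumed to come from a jointly trained image-text model;
    the `embed_image_segment` and `embed_text` names in the usage notes below
    are hypothetical. Open-set recognition boils down to comparing vectors in
    that shared space.
    """
    return [
        i for i, seg in enumerate(segment_embeddings)
        if cosine_similarity(seg, text_embedding) > threshold
    ]

# Hypothetical usage:
# seg_vecs = [embed_image_segment(s) for s in segments]  # vision encoder
# query_vec = embed_text("a dog")                        # text encoder
# dog_segments = matching_segments(seg_vecs, query_vec)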

But a challenge still remains as to how to parse a scene in a useful way that is relevant for a particular task.

“Typical methods will pick some arbitrary, fixed level of granularity for determining how to fuse segments of a scene into what you can consider as one ‘object,’” Maggio says. “However, the granularity of what you call an ‘object’ is actually related to what the robot has to do. If that granularity is fixed without considering the tasks, then the robot may end up with a map that isn’t useful for its tasks.”

Information bottleneck

With Clio, the MIT team aimed to enable robots to interpret their surroundings with a level of granularity that can be automatically tuned to the tasks at hand.

For instance, given a task of moving a stack of books to a shelf, the robot should be able to determine that the entire stack of books is the task-relevant object. Likewise, if the task were to move only the green book from the rest of the stack, the robot should distinguish the green book as a single target object and disregard the rest of the scene — including the other books in the stack.

The team’s approach combines state-of-the-art computer vision and large language models comprising neural networks that make connections among millions of open-source images and semantic text. They also incorporate mapping tools that automatically split an image into many small segments, which can be fed into the neural network to determine if certain segments are semantically similar. The researchers then leverage an idea from classic information theory called the “information bottleneck,” which they use to compress a number of image segments in a way that picks out and stores segments that are semantically most relevant to a given task.

“For example, say there is a pile of books in the scene and my task is just to get the green book. In that case we push all this information about the scene through this bottleneck and end up with a cluster of segments that represent the green book,” Maggio explains. “All the other segments that are not relevant just get grouped in a cluster which we can simply remove. And we’re left with an object at the right granularity that is needed to support my task.”
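
The “information bottleneck” the team borrows has a compact classical form, shown here in its textbook version as a rough guide; Clio’s task-driven variant adapts it. Given raw segments X, the list of tasks Y, and a compressed representation T, one seeks an encoding p(t|x) that minimizes

I(X; T) − β · I(T; Y),

where I(·;·) denotes mutual information and β sets the trade-off between compressing the scene and preserving task-relevant detail. Clusters of segments that carry little information about the tasks get merged or discarded, while task-relevant segments survive at finer granularity.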

The researchers demonstrated Clio in different real-world environments.

“What we thought would be a really no-nonsense experiment would be to run Clio in my apartment, where I didn’t do any cleaning beforehand,” Maggio says.

The team drew up a list of natural-language tasks, such as “move pile of clothes,” and then applied Clio to images of Maggio’s cluttered apartment. In these cases, Clio was able to quickly segment scenes of the apartment and feed the segments through the information bottleneck algorithm to identify those segments that made up the pile of clothes.

They also ran Clio on Boston Dynamics’ quadruped robot, Spot. They gave the robot a list of tasks to complete, and as the robot explored and mapped the inside of an office building, Clio ran in real-time on an on-board computer mounted to Spot, picking out segments in the mapped scenes that visually related to the given task. The method generated an overlaid map showing just the target objects, which the robot then used to approach the identified objects and physically complete the task.

“Running Clio in real-time was a big accomplishment for the team,” Maggio says. “A lot of prior work can take several hours to run.”

Going forward, the team plans to adapt Clio to be able to handle higher-level tasks and build upon recent advances in photorealistic visual scene representations.

“We’re still giving Clio tasks that are somewhat specific, like ‘find deck of cards,’” Maggio says. “For search and rescue, you need to give it more high-level tasks, like ‘find survivors,’ or ‘get power back on.’ So, we want to get to a more human-level understanding of how to accomplish more complex tasks.”

This research was supported, in part, by the U.S. National Science Foundation, the Swiss National Science Foundation, MIT Lincoln Laboratory, the U.S. Office of Naval Research, and the U.S. Army Research Lab Distributed and Collaborative Intelligent Systems and Technology Collaborative Research Alliance.


How social structure influences the way people share money

A new study shows that belonging to age-based groups, common in some global regions, influences finances and health.


People around the globe often depend on informal financial arrangements, borrowing and lending money through social networks. Understanding this sheds light on local economies and helps fight poverty.

Now, a study co-authored by an MIT economist illuminates a striking case of informal finance: In East Africa, money moves in very different patterns depending on whether local societies are structured around family units or age-based groups.

That is, while much of the world uses the extended family as a basic social unit, hundreds of millions of people live in societies with stronger age-based cohorts. In these cases, people are initiated into adulthood together and maintain closer social ties with each other than with extended family. That affects their finances, too.

“We found there are major impacts in that social structure really does matter for how people form financial ties,” says Jacob Moscona, an MIT economist and co-author of a newly published paper detailing the results.

He adds: “In age-based societies when someone gets a cash transfer, the money flows in a big way to other members of their age cohort but not to other [younger or older] members of an extended family. And you see the exact opposite pattern in kin-based groups, where money is transferred within the family but not the age cohort.”

This leads to measurable health effects. In kin-based societies, grandparents often share their pension payments with grandchildren. In Uganda, the study reveals, an additional year of pension payments to a senior citizen in a kin-based society reduces the likelihood of child malnourishment by 5.5 percent, compared to an age-based society where payments are less likely to move across generations.

The paper, “Age Set versus Kin: Culture and Financial Ties in East Africa,” is published in the September issue of the American Economic Review. The authors are Moscona, the 3M Career Development Assistant Professor of Economics in MIT’s Department of Economics; and Awa Ambra Seck, an assistant professor at Harvard Business School.

Studying informal financial arrangements has long been an important research domain for economists. MIT Professor Robert Townsend, for one, helped advance this area of scholarship with innovative studies of finances in rural Thailand.

At the same time, the specific matter of analyzing how age-based social groups function, in comparison to the more common kin-based groups, has tended to be addressed more by anthropologists than economists. Among the Maasai people in Northern Kenya, for example, anthropologists have observed that age-group friends have closer ties to each other than anyone apart from a spouse and children. Maasai age-group cohorts frequently share food and lodging, and more extensively than they do even with siblings. The current study adds economic data points to this body of knowledge.

To conduct the research, the scholars first analyzed the Kenyan government’s Hunger Safety Net Program (HSNP), a cash transfer project initiated in 2009 covering 48 locations in Northern Kenya. The program included both age-based and kin-based social groups, allowing for a comparison of its effects.

In age-based societies, the study shows, spending by HSNP recipients spilled over to others in their age cohort, with no additional cash flowing to people in other generations; in kin-based societies, by contrast, spending spilled over across generations but showed no informal cash flows beyond that.

In Uganda, where both kin-based and age-based societies exist, the researchers studied the national roll-out of the Senior Citizen Grant (SCG) program, initiated in 2011, which consists of a monthly cash transfer to seniors of about $7.50, equivalent to roughly 20 percent of per-capita spending. Similar programs exist or are being rolled out across sub-Saharan Africa, including in regions where age-based organization is common.

Here again, the researchers found financial flows aligned with kin-based and age-based social ties. In particular, they show that the pension program had large positive effects on child nutrition in kin-based households, where ties across generations are strong; the team found no evidence of these effects in age-based societies.

“These policies had vastly different effects on these two groups, on account of the very different structure of financial ties,” Moscona says.

To Moscona, there are at least two large reasons to evaluate the variation between these financial flows: understanding society more thoroughly and rethinking how to design social programs in these circumstances.

“It’s telling us something about how the world works, that social structure is really important for shaping these [financial] relationships,” Moscona says. “But it also has a big potential impact on policy.”

After all, if a social policy is designed to help limit childhood poverty, or senior poverty, experts will want to know how the informal flow of cash in a society interacts with it. The current study shows that understanding social structure should be a high-order concern for making policies more effective.

“In these two ways of organizing society, different people are on average more vulnerable,” Moscona says. “In the kin-based groups, because the young and the old share with each other, you don’t see as much inequality across generations. But in age-based groups, the young and the old are left systematically more vulnerable. And in kin-based groups, some entire families are doing much worse than others, while in age-based societies the age sets often cut across lineages or extended families, making them more equal. That’s worth considering if you’re thinking about poverty reduction.”


New security protocol shields data from attackers during cloud-based computation

The technique leverages quantum properties of light to guarantee security while preserving the accuracy of a deep-learning model.


Deep-learning models are being used in many fields, from health care diagnostics to financial forecasting. However, these models are so computationally intensive that they require the use of powerful cloud-based servers.

This reliance on cloud computing poses significant security risks, particularly in areas like health care, where hospitals may be hesitant to use AI tools to analyze confidential patient data due to privacy concerns.

To tackle this pressing issue, MIT researchers have developed a security protocol that leverages the quantum properties of light to guarantee that data sent to and from a cloud server remain secure during deep-learning computations.

By encoding data into the laser light used in fiber optic communications systems, the protocol exploits the fundamental principles of quantum mechanics, making it impossible for attackers to copy or intercept the information without detection.

Moreover, the technique guarantees security without compromising the accuracy of the deep-learning models. In tests, the researchers demonstrated that their protocol could maintain 96 percent accuracy while ensuring robust security measures.

“Deep learning models like GPT-4 have unprecedented capabilities but require massive computational resources. Our protocol enables users to harness these powerful models without compromising the privacy of their data or the proprietary nature of the models themselves,” says Kfir Sulimany, an MIT postdoc in the Research Laboratory for Electronics (RLE) and lead author of a paper on this security protocol.

Sulimany is joined on the paper by Sri Krishna Vadlamani, an MIT postdoc; Ryan Hamerly, a former postdoc now at NTT Research, Inc.; Prahlad Iyengar, an electrical engineering and computer science (EECS) graduate student; and senior author Dirk Englund, a professor in EECS and principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE. The research was recently presented at the Annual Conference on Quantum Cryptography.

A two-way street for security in deep learning

The cloud-based computation scenario the researchers focused on involves two parties — a client that has confidential data, like medical images, and a central server that controls a deep-learning model.

The client wants to use the deep-learning model to make a prediction, such as whether a patient has cancer based on medical images, without revealing information about the patient.

In this scenario, sensitive data must be sent to generate a prediction. However, during the process the patient data must remain secure.

Also, the server does not want to reveal any parts of the proprietary model that a company like OpenAI spent years and millions of dollars building.

“Both parties have something they want to hide,” adds Vadlamani.

In digital computation, a bad actor could easily copy the data sent from the server or the client.

Quantum information, on the other hand, cannot be perfectly copied. The researchers leverage this property, known as the no-cloning principle, in their security protocol.
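In its standard textbook form (stated here for context, and not taken from the paper), the no-cloning theorem says that no single quantum operation can duplicate an arbitrary unknown state:

    \nexists\; U \text{ unitary such that } U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr) = \lvert\psi\rangle \otimes \lvert\psi\rangle \quad \text{for every state } \lvert\psi\rangle .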

For the researchers’ protocol, the server encodes the weights of a deep neural network into an optical field using laser light.

A neural network is a deep-learning model that consists of layers of interconnected nodes, or neurons, that perform computation on data. The weights are the components of the model that do the mathematical operations on each input, one layer at a time. The output of one layer is fed into the next layer until the final layer generates a prediction.
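As a purely classical reminder of what those weights do, the sketch below applies each layer's weight matrix to the previous layer's output until a prediction comes out. There is nothing quantum about it, and it is not the researchers' code; the ReLU activation and the (W, b) layer format are assumptions made for illustration.

    # Classical illustration of a layer-by-layer forward pass (not the quantum protocol).
    import numpy as np

    def forward(weights, x):
        """weights: list of (W, b) pairs, one per layer; x: input vector."""
        for W, b in weights[:-1]:
            x = np.maximum(W @ x + b, 0.0)   # each layer transforms the previous layer's output
        W, b = weights[-1]
        return W @ x + b                     # the final layer produces the prediction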

The server transmits the network’s weights to the client, which implements operations to get a result based on its private data. The data remain shielded from the server.

At the same time, the security protocol allows the client to measure only one result, and it prevents the client from copying the weights because of the quantum nature of light.

Once the client feeds the first result into the next layer, the protocol is designed to cancel out the first layer so the client can’t learn anything else about the model.

“Instead of measuring all the incoming light from the server, the client only measures the light that is necessary to run the deep neural network and feed the result into the next layer. Then the client sends the residual light back to the server for security checks,” Sulimany explains.

Due to the no-cloning theorem, the client unavoidably applies tiny errors to the model while measuring its result. When the server receives the residual light from the client, the server can measure these errors to determine whether any information was leaked. Importantly, this residual light is proven not to reveal the client’s data.
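The message flow described above can be summarized with a toy, purely classical cartoon. Every function name and number below is a crude stand-in invented for illustration, not part of the researchers' protocol; in particular, "measurement back-action" is modeled as small numerical noise rather than an actual quantum effect.

    # Toy classical cartoon of the message flow (NOT the actual quantum protocol).
    import numpy as np

    rng = np.random.default_rng(0)

    def server_send_layer(W):
        # Stand-in for encoding one layer's weights into an optical field.
        return W.copy()

    def client_compute_layer(optical_field, x, backaction=1e-3):
        # The client extracts only what it needs to compute the layer output on its
        # private input x; doing so slightly perturbs the field it sends back.
        y = np.maximum(optical_field @ x, 0.0)
        residual = optical_field + backaction * rng.standard_normal(optical_field.shape)
        return y, residual

    def server_check(original, residual, tolerance=1e-2):
        # The server compares the returned residual with what it sent; a large
        # deviation would signal that the client tried to extract extra information.
        return np.linalg.norm(residual - original) / np.linalg.norm(original) < tolerance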

A practical protocol

Modern telecommunications equipment typically relies on optical fibers to transfer information because of the need to support massive bandwidth over long distances. Because this equipment already incorporates optical lasers, the researchers can encode data into light for their security protocol without any special hardware.

When they tested their approach, the researchers found that it could guarantee security for server and client while enabling the deep neural network to achieve 96 percent accuracy.

The tiny bit of information about the model that leaks when the client performs operations amounts to less than 10 percent of what an adversary would need to recover any hidden information. Working in the other direction, a malicious server could only obtain about 1 percent of the information it would need to steal the client’s data.

“You can be guaranteed that it is secure in both ways — from the client to the server and from the server to the client,” Sulimany says.

“A few years ago, when we developed our demonstration of distributed machine learning inference between MIT’s main campus and MIT Lincoln Laboratory, it dawned on me that we could do something entirely new to provide physical-layer security, building on years of quantum cryptography work that had also been shown on that testbed,” says Englund. “However, there were many deep theoretical challenges that had to be overcome to see if this prospect of privacy-guaranteed distributed machine learning could be realized. This didn’t become possible until Kfir joined our team, as Kfir uniquely understood the experimental as well as theory components to develop the unified framework underpinning this work.”

In the future, the researchers want to study how this protocol could be applied to a technique called federated learning, where multiple parties use their data to train a central deep-learning model. It could also be used in quantum operations, rather than the classical operations they studied for this work, which could provide advantages in both accuracy and security.

“This work combines in a clever and intriguing way techniques drawing from fields that do not usually meet, in particular, deep learning and quantum key distribution. By using methods from the latter, it adds a security layer to the former, while also allowing for what appears to be a realistic implementation. This can be interesting for preserving privacy in distributed architectures. I am looking forward to seeing how the protocol behaves under experimental imperfections and its practical realization,” says Eleni Diamanti, a CNRS research director at Sorbonne University in Paris, who was not involved with this work.

This work was supported, in part, by the Israeli Council for Higher Education and the Zuckerman STEM Leadership Program.


Mars’ missing atmosphere could be hiding in plain sight

A new study shows Mars’ early thick atmosphere could be locked up in the planet’s clay surface.


Mars wasn’t always the cold desert we see today. There’s increasing evidence that water once flowed on the Red Planet’s surface, billions of years ago. And if there was water, there must also have been a thick atmosphere to keep that water from freezing. But sometime around 3.5 billion years ago, the water dried up, and the air, once heavy with carbon dioxide, dramatically thinned, leaving only the wisp of an atmosphere that clings to the planet today.

Where exactly did Mars’ atmosphere go? This question has been a central mystery of Mars’ 4.6-billion-year history.

For two MIT geologists, the answer may lie in the planet’s clay. In a paper appearing today in Science Advances, they propose that much of Mars’ missing atmosphere could be locked up in the planet’s clay-covered crust.

The team makes the case that, while water was present on Mars, the liquid could have trickled through certain rock types and set off a slow chain of reactions that progressively drew carbon dioxide out of the atmosphere and converted it into methane — a form of carbon that could be stored for eons in the planet’s clay surface.

Similar processes occur in some regions on Earth. The researchers used their knowledge of interactions between rocks and gases on Earth and applied that to how similar processes could play out on Mars. They found that, given how much clay is estimated to cover Mars’ surface, the planet’s clay could hold up to 1.7 bar of carbon dioxide, which would be equivalent to around 80 percent of the planet’s initial, early atmosphere.
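As a back-of-the-envelope reading of those figures (not a number taken from the paper), 1.7 bar amounting to roughly 80 percent of the initial inventory implies an early atmosphere of about

    \frac{1.7\ \text{bar}}{0.8} \approx 2.1\ \text{bar}

of carbon dioxide.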

It’s possible that this sequestered Martian carbon could one day be recovered and converted into propellant to fuel future missions between Mars and Earth, the researchers propose.

“Based on our findings on Earth, we show that similar processes likely operated on Mars, and that copious amounts of atmospheric CO2 could have transformed to methane and been sequestered in clays,” says study author Oliver Jagoutz, professor of geology in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “This methane could still be present and maybe even used as an energy source on Mars in the future.”

The study’s lead author is recent EAPS graduate Joshua Murray PhD ’24.

In the folds

Jagoutz’s group at MIT seeks to identify the geologic processes and interactions that drive the evolution of Earth’s lithosphere — the hard and brittle outer layer that includes the crust and upper mantle, where tectonic plates lie.

In 2023, he and Murray focused on a type of surface clay mineral called smectite, which is known to be a highly effective trap for carbon. Within a single grain of smectite are a multitude of folds, within which carbon can sit undisturbed for billions of years. They showed that smectite on Earth was likely a product of tectonic activity, and that, once exposed at the surface, the clay minerals acted to draw down and store enough carbon dioxide from the atmosphere to cool the planet over millions of years.

Soon after the team reported their results, Jagoutz happened to look at a map of the surface of Mars and realized that much of that planet’s surface was covered in the same smectite clays. Could the clays have had a similar carbon-trapping effect on Mars, and if so, how much carbon could the clays hold?

“We know this process happens, and it is well-documented on Earth. And these rocks and clays exist on Mars,” Jagoutz says. “So, we wanted to try and connect the dots.”

“Every nook and cranny”

On Earth, smectite is a consequence of continental plates shifting and uplifting, bringing rocks from the mantle to the surface; Mars has no such tectonic activity. The team therefore looked for ways in which the clays could have formed on Mars, based on what scientists know of the planet’s history and composition.

For instance, some remote measurements of Mars’ surface suggest that at least part of the planet’s crust contains ultramafic igneous rocks, similar to those that produce smectites through weathering on Earth. Other observations reveal geologic patterns similar to terrestrial rivers and tributaries, where water could have flowed and reacted with the underlying rock.

Jagoutz and Murray wondered whether water could have reacted with Mars’ deep ultramafic rocks in a way that would produce the clays that cover the surface today. They developed a simple model of rock chemistry, based on what is known of how igneous rocks interact with their environment on Earth.

They applied this model to Mars, where scientists believe the crust is mostly made up of igneous rock that is rich in the mineral olivine. The team used the model to estimate the changes that olivine-rich rock might undergo, assuming that water existed on the surface for at least a billion years, and the atmosphere was thick with carbon dioxide.

“At this time in Mars’ history, we think CO2 is everywhere, in every nook and cranny, and water percolating through the rocks is full of CO2 too,” Murray says.

Over about a billion years, water trickling through the crust would have slowly reacted with olivine — a mineral rich in a reduced form of iron. Oxygen atoms in the water would have bound to the iron, releasing hydrogen and forming the red oxidized iron that gives the planet its iconic color. This free hydrogen would then have combined with carbon dioxide in the water to form methane. As the reaction progressed, olivine would have slowly transformed into another type of iron-rich rock known as serpentine, which then continued to react with water to form smectite.
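In schematic, textbook form, using the iron end-member of olivine (fayalite) for illustration, the chain looks like the reactions below; the paper's detailed reaction pathway may differ.

    3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2} \qquad \text{(oxidation of iron releases hydrogen)}

    \mathrm{CO_2} + 4\,\mathrm{H_2} \;\rightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O} \qquad \text{(hydrogen reduces dissolved CO}_2\text{ to methane)}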

“These smectite clays have so much capacity to store carbon,” Murray says. “So then we used existing knowledge of how these minerals are stored in clays on Earth, and extrapolate to say, if the Martian surface has this much clay in it, how much methane can you store in those clays?”

He and Jagoutz found that if Mars is covered in a layer of smectite that is 1,100 meters deep, this amount of clay could store a huge amount of methane, equivalent to most of the carbon dioxide in the atmosphere that is thought to have disappeared since the planet dried up.

“We find that estimates of global clay volumes on Mars are consistent with a significant fraction of Mars’ initial CO2 being sequestered as organic compounds within the clay-rich crust,” Murray says. “In some ways, Mars’ missing atmosphere could be hiding in plain sight.”

“Where the CO2 went from an early, thicker atmosphere is a fundamental question in the history of the Mars atmosphere, its climate, and the habitability by microbes,” says Bruce Jakosky, professor emeritus of geology at the University of Colorado and principal investigator on the Mars Atmosphere and Volatile Evolution (MAVEN) mission, which has been orbiting and studying Mars’ upper atmosphere since 2014. Jakosky was not involved with the current study. “Murray and Jagoutz examine the chemical interaction of rocks with the atmosphere as a means of removing CO2. At the high end of our estimates of how much weathering has occurred, this could be a major process in removing CO2 from Mars’ early atmosphere.”

This work was supported, in part, by the National Science Foundation.