Science news from MIT (Massachusetts Institute of Technology)

Here you find the recent daily science news from MIT.

MIT News - School of Science
A new framework to efficiently screen drugs

Novel method to scale phenotypic drug screening drastically reduces the number of input samples, costs, and labor required to execute a screen.


Some of the most widely used drugs today, including penicillin, were discovered through a process called phenotypic screening. Using this method, scientists essentially throw drugs at a problem — for example, trying to stop bacterial growth or fix a cellular defect — and then observe what happens next, without necessarily first knowing how the drug works. Perhaps surprisingly, historical data show that this approach is better at yielding approved medicines than investigations that focus more narrowly on specific molecular targets.

But many scientists believe that properly setting up the problem is the true key to success. Certain microbial infections or genetic disorders caused by single mutations are much simpler to prototype than complex diseases like cancer, which require intricate biological models that are far harder to make or acquire. The result is a bottleneck in the number of drugs that can be tested, and thus a limit on the usefulness of phenotypic screening.

Now, a team of scientists led by the Shalek Lab at MIT has developed a promising new way to apply phenotypic screening at scale. Their method allows researchers to apply multiple drugs to a biological problem at once, and then computationally work backward to figure out the individual effects of each. When the team applied this method to models of pancreatic cancer and human immune cells, they uncovered surprising new biological insights while also cutting cost and sample requirements several-fold — solving a few problems in scientific research at once.

Zev Gartner, a professor in pharmaceutical chemistry at the University of California at San Francisco, says this new method has great potential. “I think if there is a strong phenotype one is interested in, this will be a very powerful approach,” Gartner says.

The research was published Oct. 8 in Nature Biotechnology. It was led by Ivy Liu, Walaa Kattan, Benjamin Mead, Conner Kummerlowe, and Alex K. Shalek, the director of the Institute for Medical Engineering and Sciences (IMES) and the Health Innovation Hub at MIT, as well as the J. W. Kieckhefer Professor in IMES and the Department of Chemistry. It was supported by the National Institutes of Health and the Bill and Melinda Gates Foundation.

A “crazy” way to increase scale

Technological advances over the past decade have revolutionized our understanding of the inner lives of individual cells, setting the stage for richer phenotypic screens. However, many challenges remain.

For one, biologically representative models like organoids and primary tissues are only available in limited quantities. The most informative tests, like single-cell RNA sequencing, are also expensive, time-consuming, and labor-intensive.

That’s why the team decided to test out the “bold, maybe even crazy idea” to mix everything together, says Liu, a PhD student in the MIT Computational and Systems Biology program. In other words, they chose to combine many perturbations — things like drugs, chemical molecules, or biological compounds made by cells — into one single concoction, and then try to decipher their individual effects afterward.

They began testing their workflow by making different combinations of 316 U.S. Food and Drug Administration-approved drugs. “It’s a high bar: basically, the worst-case scenario,” says Liu. “Since every drug is known to have a strong effect, the signals could have been impossible to disentangle.”

These random combinations ranged from three to 80 drugs per pool, each of which was applied to lab-grown cells. The team then tried to understand the effects of each individual drug using a linear computational model.

It was a success. When compared with traditional tests for each individual drug, the new method yielded comparable results, successfully finding the strongest drugs and their respective effects in each pool, at a fraction of the cost, samples, and effort.
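The deconvolution idea can be sketched in code. The sketch below is a toy illustration only: the pool design, noise level, and use of an off-the-shelf sparsity-promoting linear regression are assumptions chosen for demonstration, not the authors’ published pipeline.

```python
# Toy sketch of compressed phenotypic screening: drugs are measured in
# random pools, and per-drug effects are recovered with a linear,
# sparsity-promoting model. Pool sizes, noise level, and the use of
# lasso regression are illustrative assumptions, not the authors'
# published pipeline.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_drugs, n_pools, pool_size = 316, 100, 10

# Design matrix: row p marks which drugs were mixed into pool p.
X = np.zeros((n_pools, n_drugs))
for p in range(n_pools):
    X[p, rng.choice(n_drugs, size=pool_size, replace=False)] = 1.0

# Ground truth: most drugs do little; a handful are strong hits.
beta_true = np.zeros(n_drugs)
hits = rng.choice(n_drugs, size=8, replace=False)
beta_true[hits] = rng.normal(3.0, 0.5, size=hits.size)

# Pooled phenotypic readout = sum of the individual effects + noise.
y = X @ beta_true + rng.normal(0.0, 0.5, size=n_pools)

# Work backward to per-drug effects from far fewer measurements
# than drugs (100 pools vs. 316 compounds).
model = Lasso(alpha=0.1).fit(X, y)
recovered = np.argsort(model.coef_)[-hits.size:]

print("true hits:     ", sorted(hits.tolist()))
print("recovered hits:", sorted(recovered.tolist()))
```

In this toy setting the strongest hits are typically recovered from far fewer pooled measurements than there are drugs, which is the essence of the compression.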

Putting it into practice

To test the method’s applicability to real-world health challenges, the team then took on two problems that would have been unimaginable with past phenotypic screening techniques.

The first test focused on pancreatic ductal adenocarcinoma (PDAC), one of the deadliest types of cancer. In PDAC, many types of signals come from the surrounding cells in the tumor's environment. These signals can influence how the tumor progresses and responds to treatments. So, the team wanted to identify the most important ones.

Using their new method to pool different signals in parallel, they found several surprise candidates. “We never could have predicted some of our hits,” says Shalek. These included two previously overlooked cytokines that actually could predict survival outcomes of patients with PDAC in public cancer data sets.

The second test looked at the effects of 90 drugs on adjusting the immune system’s function. These drugs were applied to fresh human blood cells, which contain a complex mix of different types of immune cells. Using their new method and single-cell RNA-sequencing, the team could not only test a large library of drugs, but also separate the drugs’ effects out for each type of cell. This enabled the team to understand how each drug might work in a more complex tissue, and then select the best one for the job.

“We might say there’s a defect in a T cell, so we’re going to add this drug, but we never think about, well, what does that drug do to all of the other cells in the tissue?” says Shalek. “We now have a way to gather this information, so that we can begin to pick drugs to maximize on-target effects and minimize side effects.”

Together, these experiments also showed Shalek the need to build better tools and datasets for creating hypotheses about potential treatments. “The complexity and lack of predictability for the responses we saw tells me that we likely are not finding the right, or most effective, drugs in many instances,” says Shalek.

Reducing barriers and improving lives

Although the current compression technique can identify the perturbations with the greatest effects, it’s still unable to perfectly resolve the effects of each one. Therefore, the team recommends that it act as a supplement to support additional screening. “Traditional tests that examine the top hits should follow,” Liu says.

Importantly, however, the new compression framework drastically reduces the number of input samples, costs, and labor required to execute a screen. With fewer barriers in play, it marks an exciting advance for understanding complex responses in different cells and building new models for precision medicine.

Shalek says, “This is really an incredible approach that opens up the kinds of things that we can do to find the right targets, or the right drugs, to use to improve lives for patients.”


Astronomers detect ancient lonely quasars with murky origins

The quasars appear to have few cosmic neighbors, raising questions about how they first emerged more than 13 billion years ago.


A quasar is the extremely bright core of a galaxy that hosts an active supermassive black hole at its center. As the black hole draws in surrounding gas and dust, it blasts out an enormous amount of energy, making quasars some of the brightest objects in the universe. Quasars have been observed as early as a few hundred million years after the Big Bang, and it’s been a mystery as to how these objects could have grown so bright and massive in such a short amount of cosmic time.

Scientists have proposed that the earliest quasars sprang from overly dense regions of primordial matter, which would also have produced many smaller galaxies in the quasars’ environment. But in a new MIT-led study, astronomers observed some ancient quasars that appear to be surprisingly alone in the early universe.

The astronomers used NASA’s James Webb Space Telescope (JWST) to peer back in time, more than 13 billion years, to study the cosmic surroundings of five known ancient quasars. They found a surprising variety in their neighborhoods, or “quasar fields.” While some quasars reside in very crowded fields with more than 50 neighboring galaxies, as all models predict, the remaining quasars appear to drift in voids, with only a few stray galaxies in their vicinity.

These lonely quasars are challenging physicists’ understanding of how such luminous objects could have formed so early on in the universe, without a significant source of surrounding matter to fuel their black hole growth.

“Contrary to previous belief, we find on average, these quasars are not necessarily in those highest-density regions of the early universe. Some of them seem to be sitting in the middle of nowhere,” says Anna-Christina Eilers, assistant professor of physics at MIT. “It’s difficult to explain how these quasars could have grown so big if they appear to have nothing to feed from.”

There is a possibility that these quasars may not be as solitary as they appear, but are instead surrounded by galaxies that are heavily shrouded in dust and therefore hidden from view. Eilers and her colleagues hope to tune their observations to try to see through any such cosmic dust, in order to understand how quasars grew so big, so fast, in the early universe.

Eilers and her colleagues report their findings in a paper appearing today in the Astrophysical Journal. The MIT co-authors include postdocs Rohan Naidu and Minghao Yue; Robert Simcoe, the Francis Friedman Professor of Physics and director of MIT’s Kavli Institute for Astrophysics and Space Research; and collaborators from institutions including Leiden University, the University of California at Santa Barbara, ETH Zurich, and elsewhere.

Galactic neighbors

The five newly observed quasars are among the oldest quasars observed to date. More than 13 billion years old, the objects are thought to have formed between 600 and 700 million years after the Big Bang. The supermassive black holes powering the quasars are a billion times more massive than the sun, and more than a trillion times brighter. Because of their extreme luminosity, the light from each quasar has traveled for most of the age of the universe, far enough to reach JWST’s highly sensitive detectors today.

“It’s just phenomenal that we now have a telescope that can capture light from 13 billion years ago in so much detail,” Eilers says. “For the first time, JWST enabled us to look at the environment of these quasars, where they grew up, and what their neighborhood was like.”

The team analyzed images of the five ancient quasars taken by JWST between August 2022 and June 2023. The observations of each quasar comprised multiple “mosaic” images, or partial views of the quasar’s field, which the team effectively stitched together to produce a complete picture of each quasar’s surrounding neighborhood.

The telescope also took measurements of light in multiple wavelengths across each quasar’s field, which the team then processed to determine whether a given object in the field was light from a neighboring galaxy, and how far a galaxy is from the much more luminous central quasar.

“We found that the only difference between these five quasars is that their environments look so different,” Eilers says. “For instance, one quasar has almost 50 galaxies around it, while another has just two. And both quasars are within the same size, volume, brightness, and time of the universe. That was really surprising to see.”

Growth spurts

The disparity in quasar fields introduces a kink in the standard picture of black hole growth and galaxy formation. According to physicists’ best understanding of how the first objects in the universe emerged, a cosmic web of dark matter should have set the course. Dark matter is an as-yet unknown form of matter that interacts with its surroundings only through gravity.

Shortly after the Big Bang, the early universe is thought to have formed filaments of dark matter that acted as a sort of gravitational road, attracting gas and dust along their tendrils. In overly dense regions of this web, matter would have accumulated to form more massive objects. And the brightest, most massive early objects, such as quasars, would have formed in the web’s highest-density regions, which would have also churned out many more smaller galaxies.

“The cosmic web of dark matter is a solid prediction of our cosmological model of the Universe, and it can be described in detail using numerical simulations,” says co-author Elia Pizzati, a graduate student at Leiden University. “By comparing our observations to these simulations, we can determine where in the cosmic web quasars are located.”

Scientists estimate that quasars would have had to grow continuously at very high accretion rates in order to reach the extreme masses and luminosities that astronomers have observed, fewer than 1 billion years after the Big Bang.
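As an illustrative back-of-the-envelope argument (textbook numbers, not a calculation from the paper), Eddington-limited growth with a standard radiative efficiency makes the time crunch explicit:

```latex
% Illustrative estimate only: Eddington-limited growth with radiative
% efficiency \epsilon \approx 0.1 gives exponential mass growth
M(t) = M_{\mathrm{seed}}\, e^{t / t_{\mathrm{Sal}}},
\qquad
t_{\mathrm{Sal}} = \frac{\epsilon}{1-\epsilon}\,
                   \frac{\sigma_{\mathrm{T}} c}{4 \pi G m_p}
                 \approx 45~\mathrm{Myr}.
% Growing a ~10 solar-mass seed into a ~10^9 solar-mass black hole needs
t \approx t_{\mathrm{Sal}} \ln\!\left(10^{8}\right)
  \approx 45~\mathrm{Myr} \times 18.4
  \approx 8 \times 10^{8}~\mathrm{yr},
% i.e., nearly uninterrupted accretion for most of the time available
% before the epoch at which these quasars are observed.
```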

“The main question we’re trying to answer is, how do these billion-solar-mass black holes form at a time when the universe is still really, really young? It’s still in its infancy,” Eilers says.

The team’s findings may raise more questions than answers. The “lonely” quasars appear to live in relatively empty regions of space. If physicists’ cosmological models are correct, these barren regions signify very little dark matter, or starting material for brewing up stars and galaxies. How, then, did extremely bright and massive quasars come to be?

“Our results show that there’s still a significant piece of the puzzle missing of how these supermassive black holes grow,” Eilers says. “If there’s not enough material around for some quasars to be able to grow continuously, that means there must be some other way that they can grow, that we have yet to figure out.”

This research was supported, in part, by the European Research Council. 


An exotic-materials researcher with the soul of an explorer

Associate professor of physics Riccardo Comin never stops seeking uncharted territory.


Riccardo Comin says the best part of his job as a physics professor and exotic-materials researcher is when his students come into his office to tell him they have new, interesting data.

“It’s that moment of discovery, that moment of awe, of revelation of something that’s outside of anything you know,” says Comin, the Class of 1947 Career Development Associate Professor of Physics. “That’s what makes it all worthwhile.”

Intriguing data energizes Comin because it can potentially grant access to an unexplored world. His team has discovered materials with quantum and other exotic properties, which could find a range of applications, such as handling the world’s exploding quantities of data, more precise medical imaging, and vastly increased energy efficiency — to name just a few. For Comin, who has always been somewhat of an explorer, new discoveries satisfy a kind of intellectual wanderlust.

As a small child growing up in the city of Udine in northeast Italy, Comin loved geography and maps, even drawing his own of imaginary cities and countries. He traveled literally, too, touring Europe with his parents; his father was offered free train travel as a project manager on large projects for Italian railroads.

Comin also loved numbers from an early age, and by about eighth grade would go to the public library to delve into math textbooks about calculus and analytical geometry that were far beyond what he was being taught in school. Later, in high school, Comin enjoyed being challenged by a math and physics teacher who in class would ask him questions about extremely advanced concepts.

“My classmates were looking at me like I was an alien, but I had a lot of fun,” Comin says.

Unafraid to venture alone into more rarefied areas of study, Comin nonetheless sought community, and appreciated the rapport he had with his teacher.

“He gave me the kind of interaction I was looking for, because otherwise it would have been just me and my books,” Comin says. “He helped transform an isolated activity into a social one. He made me feel like I had a buddy.”

By the end of his undergraduate studies at the University of Trieste, Comin says he decided on experimental physics, to have “the opportunity to explore and observe physical phenomena.”

He visited a nearby research facility that houses the Elettra Synchrotron to look for a research position where he could work on his undergraduate thesis, and became interested in all of the materials science research being conducted there. Drawn to community as well as the research, he chose a group that was investigating how the atoms and molecules in a liquid can rearrange themselves to become a glass.

“This one group struck me. They seemed to really enjoy what they were doing, and they had fun outside of work and enjoyed the outdoors,” Comin says. “They seemed to be a nice group of people to be part of. I think I cared more about the social environment than the specific research topic.”

By the time Comin was finishing his master’s, also in Trieste, and wanted to get a PhD, his focus had turned to electrons inside a solid rather than the behavior of atoms and molecules. Having traveled “literally almost everywhere in Europe,” Comin says he wanted to experience a different research environment outside of Europe.

He told his academic advisor he wanted to go to North America and was connected with Andrea Damascelli, the Canada Research Chair in Electronic Structure of Quantum Materials at the University of British Columbia, who was working on high-temperature superconductors. Comin says he was fascinated by the behavior of the electrons in the materials Damascelli and his group were studying.

“It’s almost like a quantum choreography, particles that dance together” rather than moving in many different directions, Comin says.

Comin’s subsequent postdoctoral work at the University of Toronto, focusing on optoelectronic materials — which can interact with photons and electrical energy — ignited his passion for connecting a material’s properties to its functionality and bridging the gap between fundamental physics and real-world applications.

Since coming to MIT in 2016, Comin has continued to delight in the behavior of electrons. He and Joe Checkelsky, associate professor of physics, had a breakthrough with a new class of materials in which electrons, very atypically, are nearly stationary.

Such materials could be used to explore zero energy loss, such as from power lines, and new approaches to quantum computing.

“It’s a very peculiar state of matter,” says Comin. “Normally, electrons are just zapping around. If you put an electron in a crystalline environment, what that electron will want to do is hop around, explore its neighbors, and basically be everywhere at the same time.”

The more sedentary electrons occurred in materials whose structure of interlaced triangles and hexagons tended to trap the electrons on the hexagons. Because the trapped electrons all have the same energy, they create what’s called an electronic flat band, referring to the pattern that appears when the electrons’ energies are measured. Flat bands had been predicted theoretically, but they had not been observed.

Comin says he and his colleagues made educated guesses on where to find flat bands, but they were elusive. After three years of research, however, they had a breakthrough.

“We put a sample material in an experimental chamber, we aligned the sample to do the experiment and started the measurement and, literally, five to 10 minutes later, we saw this beautiful flat band on the screen,” Comin says. “It was so clear, like this thing was basically screaming, How could you not find me before?

“That started off a whole area of research that is growing and growing — and a new direction in our field.”

Comin’s later research has explored certain two-dimensional materials, just a single atom thick, whose internal structure has a chirality: a right- or left-handedness, similar to the way a spiral twists in one direction or the other. This work has yielded another new realm to explore.

By controlling the chirality, “there are interesting prospects of realizing a whole new class of devices” that could store information in a way that’s more robust and much more energy-efficient than current methods, says Comin, who is affiliated with MIT’s Materials Research Laboratory. Such devices would be especially valuable as the amount of data available generally and technologies like artificial intelligence grow exponentially.

While investigating these previously unknown properties of certain materials, Comin is characteristically adventurous in his pursuit.

“I embrace the randomness that nature throws at you,” he says. “It appears random, but there could be something behind it, so we try variations, switch things around, see what nature serves you. Much of what we discover is due to luck — and the rest boils down to a mix of knowledge and intuition to recognize when we’re seeing something new, something that’s worth exploring.”


Q&A: How the Europa Clipper will set cameras on a distant icy moon

MIT Research Scientist Jason Soderblom describes how the NASA mission will study the geology and composition of the surface of Jupiter’s water-rich moon and assess its astrobiological potential.


With its latest space mission successfully launched, NASA is set to return for a close-up investigation of Jupiter’s moon Europa. Yesterday at 12:06 p.m. EDT, the Europa Clipper lifted off aboard a SpaceX Falcon Heavy rocket on a mission that will take a close look at Europa’s icy surface. Five years from now, the spacecraft will arrive at the moon, which hosts a water ocean covered by a water-ice shell. The spacecraft’s mission is to learn more about the composition and geology of the moon’s surface and interior and to assess its astrobiological potential. Because of Jupiter’s intense radiation environment, Europa Clipper will conduct a series of flybys, with its closest approach bringing it within just 16 miles of Europa’s surface.

MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) Research Scientist Jason Soderblom is a co-investigator on two of the spacecraft’s instruments: the Europa Imaging System and the Mapping Imaging Spectrometer for Europa. Over the past nine years, he and his fellow team members have been building imaging and mapping instruments to study Europa’s surface in detail to gain a better understanding of previously seen geologic features, as well as the chemical composition of the materials that are present. Here, he describes the mission's primary plans and goals.

Q: What do we currently know about Europa’s surface?

A: We know from NASA Galileo mission data that the surface crust is relatively thin, but we don’t know how thin it is. One of the goals of the Europa Clipper mission is to measure the thickness of that ice shell. The surface is riddled with fractures that indicate tectonism is actively resurfacing the moon. Its crust is primarily composed of water ice, but there are also exposures of non-ice material along these fractures and ridges that we believe include material coming up from within Europa.

One of the things that makes investigating the materials on the surface more difficult is the environment. Jupiter is a significant source of radiation, and Europa is relatively close to Jupiter. That radiation modifies the materials on the surface; understanding that radiation damage is a key component to understanding the composition.

This is also what drives the clipper-style mission and gives the mission its name: we clip by Europa, collect data, and then spend the majority of our time outside of the radiation environment. That allows us time to download the data, analyze it, and make plans for the next flyby.

Q: Did that pose a significant challenge when it came to instrument design?

A: Yes, and this is one of the reasons that we're just now returning to do this mission. The concept of this mission came about around the time of the Galileo mission in the late 1990s, so it's been roughly 25 years since scientists first wanted to carry out this mission. A lot of that time has been figuring out how to deal with the radiation environment.

There's a lot of tricks that we've been developing over the years. The instruments are heavily shielded, and lots of modeling has gone into figuring exactly where to put that shielding. We've also developed very specific techniques to collect data. For example, by taking a whole bunch of short observations, we can look for the signature of this radiation noise, remove it from the little bits of data here and there, add the good data together, and end up with a low-radiation-noise observation.
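The general co-adding idea described here can be illustrated with a short sketch. The frame counts, spike statistics, and median-based clipping threshold below are assumptions chosen for demonstration, not the flight instruments’ actual processing.

```python
# Toy sketch of combining many short exposures so that transient
# radiation hits can be rejected before co-adding. The frame count,
# spike model, and sigma-clipping threshold are illustrative
# assumptions, not the mission's actual pipeline.
import numpy as np

rng = np.random.default_rng(1)
n_frames, height, width = 32, 64, 64
scene = rng.uniform(50, 200, size=(height, width))       # "true" surface brightness

# Each short frame = scene + read noise + sparse radiation spikes.
frames = scene + rng.normal(0, 5, size=(n_frames, height, width))
spikes = rng.random((n_frames, height, width)) < 0.01     # ~1% of pixels hit per frame
frames[spikes] += rng.uniform(500, 2000, size=spikes.sum())

# Sigma-clip against the per-pixel median across frames, then average
# only the unaffected samples.
med = np.median(frames, axis=0)
mad = np.median(np.abs(frames - med), axis=0) + 1e-6
good = np.abs(frames - med) < 5 * 1.4826 * mad            # robust z-score cut
clean = np.nanmean(np.where(good, frames, np.nan), axis=0)

print("naive mean error:   ", np.abs(frames.mean(axis=0) - scene).mean())
print("cleaned co-add error:", np.abs(clean - scene).mean())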

Q: You're involved with the two different imaging and mapping instruments: the Europa Imaging System (EIS) and the Mapping Imaging Spectrometer for Europa (MISE). How are they different from each other?

A: The camera system [EIS] is primarily focused on understanding the physics and the geology that's driving processes on the surface, looking for: fractured zones; regions that we refer to as chaos terrain, where it looks like icebergs have been suspended in a slurry of water and have jumbled around and mixed and twisted; regions where we believe the surface is colliding and subduction is occurring, so one section of the surface is going beneath the other; and other regions that are spreading, so new surface is being created like our mid-ocean ridges on Earth.

A: The spectrometer’s [MISE] primary function is to constrain the composition of the surface. In particular, we’re really interested in sections where we think liquid water might have come to the surface. It is also important to understand which material comes from within Europa and which is being deposited from external sources; separating the two is necessary to determine the composition of the material coming from Europa, and to use that to learn about the composition of the subsurface ocean.

There is an intersection between those two, and that's my interest in the mission. We have color imaging with our imaging system that can provide some crude understanding of the composition, and there is a mapping component to our spectrometer that allows us to understand how the materials that we're detecting are physically distributed and correlate with the geology. So there's a way to examine the intersection of those two disciplines — to extrapolate the compositional information derived from the spectrometer to much higher resolutions using the camera, and to extrapolate the geological information that we learn from the camera to the compositional constraints from the spectrometer.

Q: How do those mission goals align with the research that you've been doing here at MIT?

A: One of the other major missions that I've been involved with was the Cassini mission, primarily working with the Visual and Infrared Spectrometer team to understand the geology and composition of Saturn's moon Titan. That instrument is very similar to the MISE instrument, both in function and in science objective, and so there's a very strong connection between that and the Europa Clipper mission. Another mission, for which I’m leading the camera team, is working to retrieve a sample of a comet, and my primary function on that mission is understanding the geology of the cometary surface.

Q: What are you most excited about learning from the Europa Clipper mission?

A: I'm most fascinated with some of these very unique geologic features that we see on the surface of Europa, understanding the composition of the material that is involved, and the processes that are driving those features. In particular, the chaos terrains and the fractures that we see on the surface.

Q: It's going to be a while before the spacecraft finally reaches Europa. What work needs to be done in the meantime?

A: A key component of this mission will be the laboratory work here on Earth, expanding our spectral libraries so that when we collect a spectrum of Europa's surface, we can compare that to laboratory measurements. We are also in the process of developing a number of models to allow us to, for example, understand how a material might process and change starting in the ocean and working its way up through fractures and eventually to the surface. Developing these models now is an important step before we collect these data, so that we can make corrections and get improved observations as the mission progresses. Making the best and most efficient use of the spacecraft resources requires an ability to reprogram and refine observations in real time.


Model reveals why debunking election misinformation often doesn’t work

The new study also identifies factors that can make these efforts more successful.


When an election result is disputed, people who are skeptical about the outcome may be swayed by figures of authority who come down on one side or the other. Those figures can be independent monitors, political figures, or news organizations. However, these “debunking” efforts don’t always have the desired effect, and in some cases, they can lead people to cling more tightly to their original position.

Neuroscientists and political scientists at MIT and the University of California at Berkeley have now created a computational model that analyzes the factors that help to determine whether debunking efforts will persuade people to change their beliefs about the legitimacy of an election. Their findings suggest that while debunking fails much of the time, it can be successful under the right conditions.

For instance, the model showed that successful debunking is more likely if people are less certain of their original beliefs and if they believe the authority is unbiased or strongly motivated by a desire for accuracy. It also helps when an authority comes out in support of a result that goes against a bias they are perceived to hold: for example, Fox News declaring that Joseph R. Biden had won in Arizona in the 2020 U.S. presidential election.

“When people see an act of debunking, they treat it as a human action and understand it the way they understand human actions — that is, as something somebody did for their own reasons,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “We’ve used a very simple, general model of how people understand other people’s actions, and found that that’s all you need to describe this complex phenomenon.”

The findings could have implications as the United States prepares for the presidential election taking place on Nov. 5, as they help to reveal the conditions that would be most likely to result in people accepting the election outcome.

MIT graduate student Setayesh Radkani is the lead author of the paper, which appears today in a special election-themed issue of the journal PNAS Nexus. Marika Landau-Wells PhD ’18, a former MIT postdoc who is now an assistant professor of political science at the University of California at Berkeley, is also an author of the study.

Modeling motivation

In their work on election debunking, the MIT team took a novel approach, building on Saxe’s extensive work studying “theory of mind” — how people think about the thoughts and motivations of other people.

As part of her PhD thesis, Radkani has been developing a computational model of the cognitive processes that occur when people see others being punished by an authority. Not everyone interprets punitive actions the same way, depending on their previous beliefs about the action and the authority. Some may see the authority as acting legitimately to punish an act that was wrong, while others may see an authority overreaching to issue an unjust punishment.

Last year, after participating in an MIT workshop on the topic of polarization in societies, Saxe and Radkani had the idea to apply the model to how people react to an authority attempting to sway their political beliefs. They enlisted Landau-Wells, who received her PhD in political science before working as a postdoc in Saxe’s lab, to join their effort, and Landau-Wells suggested applying the model to debunking of beliefs regarding the legitimacy of an election result.

The computational model created by Radkani is based on Bayesian inference, which allows the model to continually update its predictions of people’s beliefs as they receive new information. This approach treats debunking as an action that a person undertakes for his or her own reasons. People who observe the authority’s statement then make their own interpretation of why the person said what they did. Based on that interpretation, people may or may not change their own beliefs about the election result.

Additionally, the model does not assume that any beliefs are necessarily incorrect or that any group of people is acting irrationally.

“The only assumption that we made is that there are two groups in the society that differ in their perspectives about a topic: One of them thinks that the election was stolen and the other group doesn’t,” Radkani says. “Other than that, these groups are similar. They share their beliefs about the authority — what the different motives of the authority are and how motivated the authority is by each of those motives.”

The researchers modeled more than 200 different scenarios in which an authority attempts to debunk a belief held by one group regarding the validity of an election outcome.

Each time they ran the model, the researchers altered the certainty levels of each group’s original beliefs, and they also varied the groups’ perceptions of the motivations of the authority. In some cases, groups believed the authority was motivated by promoting accuracy, and in others they did not. The researchers also altered the groups’ perceptions of whether the authority was biased toward a particular viewpoint, and how strongly the groups believed in those perceptions.
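The flavor of the Bayesian belief updating that such a model builds on can be shown with a minimal sketch. The two-motive likelihood structure and all numerical values below are simplifying assumptions for illustration, not the study’s published model.

```python
# Minimal sketch of Bayesian belief updating about "the election was
# legitimate" after an authority states that it was. The likelihood
# structure (accuracy-motivated vs. biased authority) and all numbers
# are simplifying assumptions, not the study's model.

def posterior_legit(prior_legit, p_accuracy_motivated, p_bias_says_legit=0.9):
    """P(legitimate | authority says 'legitimate').

    If the authority is accuracy-motivated it reports the truth;
    otherwise it says 'legitimate' with probability p_bias_says_legit
    regardless of the truth (i.e., it is perceived as biased).
    """
    # P(says 'legit' | legit) and P(says 'legit' | stolen)
    like_legit = p_accuracy_motivated + (1 - p_accuracy_motivated) * p_bias_says_legit
    like_stolen = (1 - p_accuracy_motivated) * p_bias_says_legit
    evidence = like_legit * prior_legit + like_stolen * (1 - prior_legit)
    return like_legit * prior_legit / evidence

# A group that is uncertain and sees the authority as mostly accurate moves a lot...
print(posterior_legit(prior_legit=0.4, p_accuracy_motivated=0.8))   # ~0.78
# ...while a group that is very certain and sees the authority as biased barely moves.
print(posterior_legit(prior_legit=0.05, p_accuracy_motivated=0.1))  # ~0.06
```

Varying the prior certainty and the perceived motives, as the researchers did across their scenarios, changes how far each group’s posterior belief moves in response to the same debunking statement.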

Building consensus

In each scenario, the researchers used the model to predict how each group would respond to a series of five statements made by an authority trying to convince them that the election had been legitimate. The researchers found that in most of the scenarios they looked at, beliefs remained polarized and in some cases became even further polarized. This polarization could also extend to new topics unrelated to the original context of the election, the researchers found.

However, under some circumstances, the debunking was successful, and beliefs converged on an accepted outcome. This was more likely to happen when people were initially more uncertain about their original beliefs.

“When people are very, very certain, they become hard to move. So, in essence, a lot of this authority debunking doesn’t matter,” Landau-Wells says. “However, there are a lot of people who are in this uncertain band. They have doubts, but they don’t have firm beliefs. One of the lessons from this paper is that we’re in a space where the model says you can affect people’s beliefs and move them towards true things.”

Another factor that can lead to belief convergence is if people believe that the authority is unbiased and highly motivated by accuracy. Even more persuasive is when an authority makes a claim that goes against their perceived bias — for instance, Republican governors stating that elections in their states had been fair even though the Democratic candidate won.

As the 2024 presidential election approaches, grassroots efforts have been made to train nonpartisan election observers who can vouch for whether an election was legitimate. These types of organizations may be well-positioned to help sway people who might have doubts about the election’s legitimacy, the researchers say.

“They’re trying to train people to be independent, unbiased, and committed to the truth of the outcome more than anything else. Those are the types of entities that you want. We want them to succeed in being seen as independent. We want them to succeed as being seen as truthful, because in this space of uncertainty, those are the voices that can move people toward an accurate outcome,” Landau-Wells says.

The research was funded, in part, by the Patrick J. McGovern Foundation and the Guggenheim Foundation.


Tiny magnetic discs offer remote brain stimulation without transgenes

The devices could be a useful tool for biomedical research, and possible clinical use in the future.


Novel magnetic nanodiscs could provide a much less invasive way of stimulating parts of the brain, paving the way for stimulation therapies without implants or genetic modification, MIT researchers report.

The scientists envision that the tiny discs, which are about 250 nanometers across (about 1/500 the width of a human hair), would be injected directly into the desired location in the brain. From there, they could be activated at any time simply by applying a magnetic field outside the body. The new particles could quickly find applications in biomedical research, and eventually, after sufficient testing, might be applied to clinical uses.

The development of these nanoparticles is described in the journal Nature Nanotechnology, in a paper by Polina Anikeeva, a professor in MIT’s departments of Materials Science and Engineering and Brain and Cognitive Sciences, graduate student Ye Ji Kim, and 17 others at MIT and in Germany.

Deep brain stimulation (DBS) is a common clinical procedure that uses electrodes implanted in the target brain regions to treat symptoms of neurological and psychiatric conditions such as Parkinson’s disease and obsessive-compulsive disorder. Despite its efficacy, the surgical difficulty and clinical complications associated with DBS limit the number of cases where such an invasive procedure is warranted. The new nanodiscs could provide a much more benign way of achieving the same results.

Over the past decade, other implant-free methods of producing brain stimulation have been developed. However, those approaches were often limited by their spatial resolution or their ability to target deep regions. For the past decade, Anikeeva’s Bioelectronics group, as well as others in the field, has used magnetic nanomaterials to transduce remote magnetic signals into brain stimulation. However, those magnetic methods rely on genetic modifications and can’t be used in humans.

Since all nerve cells are sensitive to electrical signals, Kim, a graduate student in Anikeeva’s group, hypothesized that a magnetoelectric nanomaterial that can efficiently convert magnetization into electrical potential could offer a path toward remote magnetic brain stimulation. Creating a nanoscale magnetoelectric material was, however, a formidable challenge.

Kim synthesized novel magnetoelectric nanodiscs and collaborated with Noah Kent, a postdoc in Anikeeva’s lab with a background in physics who is a second author of the study, to understand the properties of these particles.

The structure of the new nanodiscs consists of a two-layer magnetic core and a piezoelectric shell. The magnetic core is magnetostrictive, which means it changes shape when magnetized. This deformation then induces strain in the piezoelectric shell, which produces a varying electrical polarization. Through the combination of the two effects, these composite particles can deliver electrical pulses to neurons when exposed to magnetic fields.

One key to the devices’ effectiveness is their disc shape. Previous attempts to use magnetic nanoparticles had relied on spherical particles, in which the magnetoelectric effect was very weak, says Kim. The shape anisotropy of the discs enhances magnetostriction more than 1,000-fold, adds Kent.

The team first added their nanodiscs to cultured neurons, which allowed them to activate these cells on demand with short pulses of magnetic field. This stimulation did not require any genetic modification.

They then injected small droplets of the magnetoelectric nanodisc solution into specific regions of the brains of mice. Simply turning on a relatively weak electromagnet nearby then triggered the particles to release a tiny jolt of electricity in that brain region. The stimulation could be switched on and off remotely by switching the electromagnet. That electrical stimulation “had an impact on neuron activity and on behavior,” Kim says.

The team found that the magnetoelectric nanodiscs could stimulate a deep brain region, the ventral tegmental area, that is associated with feelings of reward.

The team also stimulated another brain area, the subthalamic nucleus, associated with motor control. “This is the region where electrodes typically get implanted to manage Parkinson’s disease,” Kim explains. The researchers were able to successfully demonstrate the modulation of motor control through the particles. Specifically, by injecting the nanodiscs into only one hemisphere, the researchers could induce rotations in healthy mice by applying a magnetic field.

The nanodiscs could trigger neuronal activity comparable to that produced by conventional implanted electrodes delivering mild electrical stimulation. The authors achieved subsecond temporal precision for neural stimulation with their method, and observed significantly reduced foreign-body responses compared to the electrodes, potentially allowing for even safer deep brain stimulation.

The layered chemical composition and the physical shape and size of the new nanodiscs are what made precise stimulation possible.

While the researchers successfully increased the magnetostrictive effect, the second part of the process, converting the magnetic effect into an electrical output, still needs more work, Anikeeva says. While the magnetic response was a thousand times greater, the conversion to an electric impulse was only four times greater than with conventional spherical particles.

“This massive enhancement of a thousand times didn’t completely translate into the magnetoelectric enhancement,” says Kim. “That’s where a lot of the future work will be focused, on making sure that the thousand times amplification in magnetostriction can be converted into a thousand times amplification in the magnetoelectric coupling.”

What the team found about the way the particles’ shape affects their magnetostriction was quite unexpected. “It’s kind of a new thing that just appeared when we tried to figure out why these particles worked so well,” says Kent.

Anikeeva adds: “Yes, it’s a record-breaking particle, but it’s not as record-breaking as it could be.” That remains a topic for further work, but the team has ideas about how to make further progress.

While these nanodiscs could in principle already be applied to basic research using animal models, to translate them to clinical use in humans would require several more steps, including large-scale safety studies, “which is something academic researchers are not necessarily most well-positioned to do,” Anikeeva says. “When we find that these particles are really useful in a particular clinical context, then we imagine that there will be a pathway for them to undergo more rigorous large animal safety studies.”

The team included researchers affiliated with MIT’s departments of Materials Science and Engineering, Electrical Engineering and Computer Science, Chemistry, and Brain and Cognitive Sciences; the Research Laboratory of Electronics; the McGovern Institute for Brain Research; and the Koch Institute for Integrative Cancer Research; and from the Friedrich-Alexander University of Erlangen, Germany. The work was supported, in part, by the National Institutes of Health, the National Center for Complementary and Integrative Health, the National Institute for Neurological Disorders and Stroke, the McGovern Institute for Brain Research, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience.


A new method makes high-resolution imaging more accessible

Labs that can’t afford expensive super-resolution microscopes could use a new expansion technique to image nanoscale structures inside cells.


A classical way to image nanoscale structures in cells is with high-powered, expensive super-resolution microscopes. As an alternative, MIT researchers have developed a way to expand tissue before imaging it — a technique that allows them to achieve nanoscale resolution with a conventional light microscope.

In the newest version of this technique, the researchers have made it possible to expand tissue 20-fold in a single step. This simple, inexpensive method could pave the way for nearly any biology lab to perform nanoscale imaging.

“This democratizes imaging,” says Laura Kiessling, the Novartis Professor of Chemistry at MIT and a member of the Broad Institute of MIT and Harvard and MIT’s Koch Institute for Integrative Cancer Research. “Without this method, if you want to see things with a high resolution, you have to use very expensive microscopes. What this new technique allows you to do is see things that you couldn’t normally see with standard microscopes. It drives down the cost of imaging because you can see nanoscale things without the need for a specialized facility.”

At the resolution achieved by this technique, which is around 20 nanometers, scientists can see organelles inside cells, as well as clusters of proteins.

“Twenty-fold expansion gets you into the realm that biological molecules operate in. The building blocks of life are nanoscale things: biomolecules, genes, and gene products,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; a professor of biological engineering, media arts and sciences, and brain and cognitive sciences; a Howard Hughes Medical Institute investigator; and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

Boyden and Kiessling are the senior authors of the new study, which appears today in Nature Methods. MIT graduate student Shiwei Wang and Tay Won Shin PhD ’23 are the lead authors of the paper.

A single expansion

Boyden’s lab invented expansion microscopy in 2015. The technique requires embedding tissue into an absorbent polymer and breaking apart the proteins that normally hold tissue together. When water is added, the gel swells and pulls biomolecules apart from each other.

The original version of this technique, which expanded tissue about fourfold, allowed researchers to obtain images with a resolution of around 70 nanometers. In 2017, Boyden’s lab modified the process to include a second expansion step, achieving an overall 20-fold expansion. This enables even higher resolution, but the process is more complicated.

“We’ve developed several 20-fold expansion technologies in the past, but they require multiple expansion steps,” Boyden says. “If you could do that amount of expansion in a single step, that could simplify things quite a bit.”

With 20-fold expansion, researchers can get down to a resolution of about 20 nanometers using a conventional light microscope. This allows them to see cell structures like microtubules and mitochondria, as well as clusters of proteins.
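As a rough illustration of where these numbers come from (an estimate, not the paper’s formal resolution analysis), physical expansion effectively divides a conventional microscope’s diffraction limit by the expansion factor:

```latex
% Rough estimate only (assumes a ~300 nm diffraction-limited microscope;
% label size and gel distortions also contribute in practice):
\text{effective resolution} \approx
  \frac{\text{diffraction limit}}{\text{expansion factor}}:
\qquad
\frac{300~\text{nm}}{4} \approx 75~\text{nm},
\qquad
\frac{300~\text{nm}}{20} \approx 15\text{--}20~\text{nm}.
```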

In the new study, the researchers set out to perform 20-fold expansion with only a single step. This meant that they had to find a gel that was both extremely absorbent and mechanically stable, so that it wouldn’t fall apart when expanded 20-fold.

To achieve that, they used a gel assembled from N,N-dimethylacrylamide (DMAA) and sodium acrylate. Unlike previous expansion gels that rely on adding another molecule to form crosslinks between the polymer strands, this gel forms crosslinks spontaneously and exhibits strong mechanical properties. Such gel components previously had been used in expansion microscopy protocols, but the resulting gels could expand only about tenfold. The MIT team optimized the gel and the polymerization process to make the gel more robust, and to allow for 20-fold expansion.

To further stabilize the gel and enhance its reproducibility, the researchers removed oxygen from the polymer solution prior to gelation, which prevents side reactions that interfere with crosslinking. This step requires running nitrogen gas through the polymer solution, which replaces most of the oxygen in the system.

Once the gel is formed, select bonds in the proteins that hold the tissue together are broken and water is added to make the gel expand. After the expansion is performed, target proteins in tissue can be labeled and imaged.

“This approach may require more sample preparation compared to other super-resolution techniques, but it’s much simpler when it comes to the actual imaging process, especially for 3D imaging,” Shin says. “We document the step-by-step protocol in the manuscript so that readers can go through it easily.”

Imaging tiny structures

Using this technique, the researchers were able to image many tiny structures within brain cells, including structures called synaptic nanocolumns. These are clusters of proteins that are arranged in a specific way at neuronal synapses, allowing neurons to communicate with each other via secretion of neurotransmitters such as dopamine.

In studies of cancer cells, the researchers also imaged microtubules — hollow tubes that help give cells their structure and play important roles in cell division. They were also able to see mitochondria (organelles that generate energy) and even the organization of individual nuclear pore complexes (clusters of proteins that control access to the cell nucleus).

Wang is now using this technique to image carbohydrates known as glycans, which are found on cell surfaces and help control cells’ interactions with their environment. This method could also be used to image tumor cells, allowing scientists to glimpse how proteins are organized within those cells, much more easily than has previously been possible.

The researchers envision that any biology lab should be able to use this technique at a low cost, since it relies on standard, off-the-shelf chemicals and common equipment such as confocal microscopes and glove bags, which most labs already have or can easily access.

“Our hope is that with this new technology, any conventional biology lab can use this protocol with their existing microscopes, allowing them to approach resolution that can only be achieved with very specialized and costly state-of-the-art microscopes,” Wang says.

The research was funded, in part, by the U.S. National Institutes of Health, an MIT Presidential Graduate Fellowship, U.S. National Science Foundation Graduate Research Fellowship grants, Open Philanthropy, Good Ventures, the Howard Hughes Medical Institute, Lisa Yang, Ashar Aziz, and the European Research Council.


The way sensory prediction changes under anesthesia tells us how conscious cognition works

A new study adds evidence that consciousness requires communication between sensory and cognitive regions of the brain’s cortex.


Our brains constantly work to make predictions about what’s going on around us, for instance, to ensure that we can attend to and consider the unexpected. A new study examines how this process works during consciousness, and how it breaks down under general anesthesia. The results add evidence to the idea that conscious thought requires synchronized communication — mediated by brain rhythms in specific frequency bands — between basic sensory and higher-order cognitive regions of the brain.

Previously, members of the research team in The Picower Institute for Learning and Memory at MIT and at Vanderbilt University had described how brain rhythms enable the brain to remain prepared to attend to surprises. Cognition-oriented brain regions (generally at the front of the brain) use relatively low-frequency alpha and beta rhythms to suppress processing by sensory regions (generally toward the back of the brain) of stimuli that have become familiar and mundane in the environment (e.g., your co-worker’s music). When sensory regions detect a surprise (e.g., the office fire alarm), they use faster-frequency gamma rhythms to tell the higher regions about it, and the higher regions process that at gamma frequencies to decide what to do (e.g., exit the building).

The new results, published Oct. 7 in the Proceedings of the National Academy of Sciences, show that when animals were under propofol-induced general anesthesia, a sensory region retained the capacity to detect simple surprises, but its communication with a higher cognitive region toward the front of the brain was lost. That left the higher region unable to engage in its “top-down” regulation of the sensory region’s activity and kept it oblivious to simple and more complex surprises alike.

What we've got here is failure to communicate

“What we are doing here speaks to the nature of consciousness,” says co-senior author Earl K. Miller, Picower Professor in The Picower Institute for Learning and Memory and MIT’s Department of Brain and Cognitive Sciences. “Propofol general anesthesia deactivates the top-down processes that underlie cognition. It essentially disconnects communication between the front and back halves of the brain.”

Co-senior author Andre Bastos, an assistant professor in the psychology department at Vanderbilt and a former member of Miller’s MIT lab, adds that the study results highlight the key role of frontal areas in consciousness.

“These results are particularly important given the newfound scientific interest in the mechanisms of consciousness, and how consciousness relates to the ability of the brain to form predictions,” Bastos says.

The brain’s ability to predict is dramatically altered during anesthesia. It was interesting that areas at the front of the brain, those associated with cognition, were more strongly diminished in their predictive abilities than sensory areas. This suggests that prefrontal areas help to spark an “ignition” event that allows sensory information to become conscious. Sensory cortex activation by itself does not lead to conscious perception. These observations help narrow down possible models for the mechanisms of consciousness.

Yihan Sophy Xiong, a graduate student in Bastos’ lab who led the study, says the anesthetic reduces the times in which inter-regional communication within the cortex can occur.

“In the awake brain, brain waves give short windows of opportunity for neurons to fire optimally — the ‘refresh rate’ of the brain, so to speak,” Xiong says. “This refresh rate helps organize different brain areas to communicate effectively. Anesthesia both slows down the refresh rate, which narrows these time windows for brain areas to talk to each other and makes the refresh rate less effective, so that neurons become more disorganized about when they can fire. When the refresh rate no longer works as intended, our ability to make predictions is weakened.”

Learning from oddballs

To conduct the research, the neuroscientists measured the electrical signals, or “spiking,” of hundreds of individual neurons, and the coordinated rhythms of their aggregated activity (at alpha/beta and gamma frequencies), in two areas on the surface, or cortex, of the brains of two animals as they listened to sequences of tones. Sometimes the sequences would all be the same note (e.g., AAAAA). Sometimes there’d be a simple surprise that the researchers called a “local oddball” (e.g., AAAAB). But sometimes the surprise would be more complicated, or a “global oddball.” For example, after hearing a series of AAAABs, there’d all of a sudden be AAAAA, which violates the global but not the local pattern.

Prior work has suggested that a sensory region (in this case the temporoparietal area, or Tpt) can spot local oddballs on its own, Miller says. Detecting the more complicated global oddball requires the participation of a higher order region (in this case the frontal eye fields, or FEF).

The animals heard the tone sequences both while awake and while under propofol anesthesia. The waking-state results held no surprises: the researchers reaffirmed that top-down alpha/beta rhythms from FEF carried predictions to the Tpt, and that Tpt would increase gamma rhythms when an oddball came up, causing FEF (and the prefrontal cortex) to respond with upticks of gamma activity as well.

But by several measures and analyses, the scientists could see these dynamics break down after the animals lost consciousness.

Under propofol, for instance, spiking activity declined overall. When a local oddball came along, Tpt spiking still increased notably, but spiking in FEF no longer followed suit as it does during wakefulness.

Meanwhile, when a global oddball was presented during wakefulness, the researchers could use software to “decode” a representation of that oddball from the activity of neurons in FEF and the prefrontal cortex (another cognition-oriented region). They could also decode local oddballs in the Tpt. But under anesthesia the decoder could no longer reliably detect representation of local or global oddballs in FEF or the prefrontal cortex.
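The article does not detail the decoding software, but a common approach in such studies is a cross-validated classifier trained on trial-by-trial spike counts: above-chance accuracy means the neural population carries a readable representation of the oddball. The sketch below illustrates that general idea on synthetic data; the array sizes and the choice of logistic regression are assumptions, not the authors’ actual analysis pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic trials-by-neurons spike-count matrix, with labels marking
    # whether each trial contained an oddball (1) or only standards (0).
    n_trials, n_neurons = 200, 80
    labels = rng.integers(0, 2, size=n_trials)
    spike_counts = rng.poisson(5, size=(n_trials, n_neurons)).astype(float)
    spike_counts += labels[:, None] * rng.normal(1.0, 0.3, size=n_neurons)  # weak oddball signal

    # Five-fold cross-validated linear decoder; chance accuracy is ~0.5.
    decoder = LogisticRegression(max_iter=1000)
    scores = cross_val_score(decoder, spike_counts, labels, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")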

Moreover, when they compared rhythms in the regions across wakeful and unconscious states, they found stark differences. When the animals were awake, oddballs increased gamma activity in both Tpt and FEF, and alpha/beta rhythms decreased. Regular, non-oddball stimulation increased alpha/beta rhythms. But when the animals lost consciousness, the increase in gamma rhythms from a local oddball was even greater in Tpt than when the animal was awake.

“Under propofol-mediated loss of consciousness, the inhibitory function of alpha/beta became diminished and/or eliminated, leading to disinhibition of oddballs in sensory cortex,” the authors wrote.

Other analyses of inter-region connectivity and synchrony revealed that the regions lost the ability to communicate during anesthesia.

In all, the study’s evidence suggests that conscious thought requires coordination across the cortex, from front to back, the researchers wrote.

“Our results therefore suggest an important role for prefrontal cortex activation, in addition to sensory cortex activation, for conscious perception,” the researchers wrote.

In addition to Xiong, Miller, and Bastos, the paper’s other authors are Jacob Donoghue, Mikael Lundqvist, Meredith Mahnke, Alex Major, and Emery N. Brown.

The National Institutes of Health, The JPB Foundation, and The Picower Institute for Learning and Memory funded the study.


Mixing joy and resolve, event celebrates women in science and addresses persistent inequalities

The Kuggie Vallee Distinguished Lectures and Workshops presented inspiring examples of success, even as the event evoked frank discussions of the barriers that still hinder many women in science.


For two days at The Picower Institute for Learning and Memory at MIT, participants in the Kuggie Vallee Distinguished Lectures and Workshops celebrated the success of women in science and shared strategies to persist through, or better yet dissipate, the stiff headwinds women still face in the field.

“Everyone is here to celebrate and to inspire and advance the accomplishments of all women in science,” said host Li-Huei Tsai, Picower Professor in the Department of Brain and Cognitive Sciences and director of the Picower Institute, as she welcomed an audience that included scores of students, postdocs, and other research trainees. “It is a great feeling to have the opportunity to showcase examples of our successes and to help lift up the next generation.”

Tsai earned the honor of hosting the event after she was named a Vallee Visiting Professor in 2022 by the Vallee Foundation. Foundation president Peter Howley, a professor of pathological anatomy at Harvard University, said the global series of lectureships and workshops were created to honor Kuggie Vallee, a former Lesley College professor who worked to advance the careers of women.

During the program Sept. 24-25, speakers and audience members alike made it clear that helping women succeed requires both recognizing their achievements and resolving to change social structures in which they face marginalization.

Inspiring achievements

Lectures on the first day featured two brain scientists who have each led acclaimed discoveries that have been transforming their fields.

Michelle Monje, a pediatric neuro-oncologist at Stanford University whose recognitions include a MacArthur Fellowship, described her lab’s studies of brain cancers in children, which emerge at specific times in development as young brains adapt to their world by wiring up new circuits and insulating neurons with a fatty sheathing called myelin. Monje has discovered that when the precursors to myelinating cells, called oligodendrocyte precursor cells, harbor cancerous mutations, the tumors that arise — called gliomas — can hijack those cellular and molecular mechanisms. To promote their own growth, gliomas tap directly into the electrical activity of neural circuits by forging functional neuron-to-cancer connections, akin to the “synapse” junctions healthy neurons make with each other. Years of her lab’s studies, often led by female trainees, have not only revealed this insidious behavior (and linked aberrant myelination to many other diseases as well), but also revealed specific molecular factors involved. Those findings, Monje said, present completely novel potential avenues for therapeutic intervention.

“This cancer is an electrically active tissue and that is not how we have been approaching understanding it,” she said.

Erin Schuman, who directs the Max Planck Institute for Brain Research in Frankfurt, Germany, and has won honors including the Brain Prize, described her groundbreaking discoveries related to how neurons form and edit synapses along the very long branches — axons and dendrites — that give the cells their exotic shapes. Synapses form very far from the cell body where scientists had long thought all proteins, including those needed for synapse structure and activity, must be made. In the mid-1990s, Schuman showed that the protein-making process can occur at the synapse and that neurons stage the needed infrastructure — mRNA and ribosomes — near those sites. Her lab has continued to develop innovative tools to build on that insight, cataloging the stunning array of thousands of mRNAs involved, including about 800 that are primarily translated at the synapse, studying the diversity of synapses that arise from that collection, and imaging individual ribosomes such that her lab can detect when they are actively making proteins in synaptic neighborhoods.

Persistent headwinds

While the first day’s lectures showcased examples of women’s success, the second day’s workshops turned the spotlight on the social and systemic hindrances that continue to make such achievements an uphill climb. Speakers and audience members engaged in frank dialogues aimed at calling out those barriers, overcoming them, and dismantling them.

Susan Silbey, the Leon and Anne Goldberg Professor of Humanities, Sociology and Anthropology at MIT and professor of behavioral and policy sciences in the MIT Sloan School of Management, told the group that as bad as sexual harassment and assault in the workplace are, the more pervasive, damaging, and persistent headwinds for women across a variety of professions are “deeply sedimented cultural habits” that marginalize their expertise and contributions in workplaces, rendering them invisible to male counterparts, even when they are in powerful positions. High-ranking women in Silicon Valley who answered the “Elephant in the Valley” survey, for instance, reported high rates of demeaning comments and behavior, as well as exclusion from social circles. Even U.S. Supreme Court justices are not immune, she noted, citing research showing that for decades female justices have been interrupted with disproportionate frequency during oral arguments at the court. Silbey’s research has shown that young women entering the engineering workforce often become discouraged by a system that appears meritocratic, but in which they are often excluded from opportunities to demonstrate or be credited for that merit and are paid significantly less.

“Women’s occupational inequality is a consequence of being ignored, having contributions overlooked or appropriated, of being assigned to lower-status roles, while men are pushed ahead, honored and celebrated, often on the basis of women’s work,” Silbey said.

Often relatively small in numbers, women in such workplaces become tokens — visible as different, but still treated as outsiders, Silbey said. Women tend to internalize this status, becoming very cautious about their work while some men surge ahead in more cavalier fashion. Silbey and speakers who followed illustrated the effect this can have on women’s careers in science. Kara McKinley, an assistant professor of stem cell and regenerative biology at Harvard, noted that while the scientific career “pipeline” in some areas of science is full of female graduate students and postdocs, only about 20 percent of natural sciences faculty positions are held by women. Strikingly, women are already significantly depleted in the applicant pools for assistant professor positions, she said. Those who do apply tend to wait until they are more qualified than the men they are competing against. 

McKinley and Silbey each noted that women scientists submit fewer papers to prestigious journals, with Silbey explaining that it’s often because women are more likely to worry that their studies need to tie up every loose end. Yet, said Stacie Weninger, a venture capitalist and president of the F-Prime Biomedical Research Initiative and a former editor at Cell Press, women were also less likely than men to rebut rejections from journal editors, thereby accepting the rejection even though rebuttals sometimes work.

Several speakers, including Weninger and Silbey, said pedagogy must change to help women overcome a social tendency to couch their assertions in caveats when many men speak with confidence and are therefore perceived as more knowledgeable.

At lunch, trainees sat in small groups with the speakers. They shared sometimes harrowing personal stories of gender-related difficulties in their young careers and sought advice on how to persist and remain resilient. Schuman advised the trainees to report mistreatment, even if they aren’t confident that university officials will be able to effect change, to at least make sure patterns of mistreatment get on the record. Reflecting on discouraging comments she experienced early in her career, Monje advised students to build up and maintain an inner voice of confidence and draw upon it when criticism is unfair.

“It feels terrible in the moment, but cream rises,” Monje said. “Believe in yourself. It will be OK in the end.”

Lifting each other up

Speakers at the conference shared many ideas to help overcome inequalities. McKinley described a program she launched in 2020 to ensure that a diversity of well-qualified women and non-binary postdocs are recruited for, and apply for, life sciences faculty jobs: the Leading Edge Symposium. The program identifies and names fellows — 200 so far — and provides career mentoring advice, a supportive community, and a platform to ensure they are visible to recruiters. Since the program began, 99 of the fellows have gone on to accept faculty positions at various institutions.

In a talk tracing the arc of her career, Weninger, who trained as a neuroscientist at Harvard, said she left bench work for a job as an editor because she wanted to enjoy the breadth of science, but also noted that her postdoc salary didn’t even cover the cost of child care. She left Cell Press in 2005 to help lead a task force on women in science that Harvard formed in the wake of comments by then-president Lawrence Summers widely understood as suggesting that women lacked “natural ability” in science and engineering. Working feverishly for months, the task force recommended steps to increase the number of senior women in science, including providing financial support for researchers who were also caregivers at home so they’d have the money to hire a technician. That extra set of hands would afford them the flexibility to keep research running even as they also attended to their families. Notably, Monje said she does this for the postdocs in her lab.

A graduate student asked Silbey at the end of her talk how to change a culture in which traditionally male-oriented norms marginalize women. Silbey said it starts with calling out those norms and recognizing that they are the issue, rather than increasing women’s representation in, or asking them to adapt to, existing systems.

“To make change, it requires that you do recognize the differences of the experiences and not try to make women exactly like men, or continue the past practices and think, ‘Oh, we just have to add women into it’,” she said.

Silbey also praised the Kuggie Vallee event at MIT for assembling a new community around these issues. Women in science need more social networks where they can exchange information and resources, she said.

“This is where an organ, an event like this, is an example of making just that kind of change: women making new networks for women,” she said.


Study finds mercury pollution from human activities is declining

Models show that an unexpected reduction in human-driven emissions led to a 10 percent decline in atmospheric mercury concentrations.


MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.

In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020.

They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause.

Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.

“Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.

The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury.

However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.

“It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.

Mercury mismatch

The Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining.

The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from information from national inventories, had increased despite international efforts to reduce them.

This was puzzling news for environmental scientists like Selin. Data from monitoring stations showed atmospheric mercury concentrations declining during the same period.

Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants.
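In schematic form, a bottom-up inventory is just a sum of activity levels multiplied by emission factors, as the toy calculation below shows. Every sector name and number in it is a made-up placeholder, not a value from any real inventory.

    # Toy bottom-up inventory: total emissions = sum over activities of
    # (activity level) x (emission factor). All numbers are placeholders.
    activities = {                      # activity level, arbitrary units
        "coal_combustion": 5_000.0,
        "small_scale_gold_mining": 800.0,
        "mercury_containing_products": 300.0,
    }
    emission_factors = {                # mercury emitted per unit of activity
        "coal_combustion": 0.05,
        "small_scale_gold_mining": 0.90,
        "mercury_containing_products": 0.40,
    }

    total = sum(activities[k] * emission_factors[k] for k in activities)
    print(f"estimated total emissions: {total:.0f} (arbitrary units)")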

“The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.

Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources.

At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to other polluting gases like methane and nitrous oxide.

“One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.

Multifaceted models

The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends.

By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.
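As a rough illustration of what a 10 percent, 15-year decline looks like when estimated from station records, here is a minimal sketch that aggregates synthetic station data and fits a log-linear trend. The synthetic numbers and the simple least-squares fit are assumptions for illustration, not the statistical methods or data used in the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(2005, 2021)

    # Synthetic annual mean concentrations (ng/m^3) for eight stations,
    # drifting downward by ~0.7 percent per year with station-level noise.
    true_trend = -0.007
    stations = 1.6 * np.exp(true_trend * (years - 2005)) \
               + rng.normal(0, 0.02, size=(8, years.size))

    # Aggregate stations into a regional mean, then fit ln(C) = a + b*t.
    regional_mean = stations.mean(axis=0)
    b, a = np.polyfit(years - 2005, np.log(regional_mean), 1)
    total_decline = 1 - np.exp(b * (years.size - 1))
    print(f"fitted trend: {b * 100:.2f} percent/yr; "
          f"decline 2005-2020: {total_decline * 100:.1f} percent")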

Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline.  Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.

For instance, they tested one hypothesis that there may be an additional environmental sink that is removing more mercury from the atmosphere than previously thought. The models would indicate the feasibility of an unknown sink of that magnitude.
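The scenario-screening idea behind the box modeling can be illustrated with a toy one-box atmosphere: sample many candidate anthropogenic emission trajectories, integrate a simple mass balance, and keep the trajectories whose simulated concentrations fall by roughly the observed 10 percent. Every parameter below (lifetime, emission terms, initial concentration) is a placeholder chosen only to make the toy run, not a value from the study’s biogeochemical model.

    import numpy as np

    rng = np.random.default_rng(2)
    n_years = 16                       # 2005-2020
    tau = 0.5                          # placeholder atmospheric lifetime (years)
    E_background = 2.0                 # placeholder natural + legacy re-emission term
    C0 = tau * (E_background + 1.0)    # start near steady state

    def run_box_model(anthro_trend, dt=1.0 / 12.0):
        """Integrate dC/dt = E(t) - C/tau with monthly Euler steps; return yearly values."""
        C, yearly = C0, []
        for year in range(n_years):
            E = E_background + 1.0 * (1 + anthro_trend) ** year
            for _ in range(12):
                C += dt * (E - C / tau)
            yearly.append(C)
        return np.array(yearly)

    # Keep emission scenarios whose simulated 15-year decline is close to 10 percent.
    candidates = rng.uniform(-0.03, 0.03, size=20_000)
    consistent = []
    for tr in candidates:
        series = run_box_model(tr)
        if abs(series[-1] / series[0] - 0.90) < 0.01:
            consistent.append(tr)
    print(f"{len(consistent)} of {candidates.size} scenarios match a ~10% decline; "
          f"implied anthropogenic emission trend: {np.mean(consistent):+.2%}/yr")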

“As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.

Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many stations the researchers evaluated are no longer operational because of a lack of funding.

While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match up with actual data, they have a few hypotheses.

One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. But there was still a gap between observations and estimates.

They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and mercury-containing products.

Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made emissions.

In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.

“We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.

In the future, researchers from multiple countries, including MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says.

This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.


Cancer biologists discover a new mechanism for an old drug

Study reveals the drug, 5-fluorouracil, acts differently in different types of cancer — a finding that could help researchers design better drug combinations.


Since the 1950s, a chemotherapy drug known as 5-fluorouracil has been used to treat many types of cancer, including blood cancers and cancers of the digestive tract.

Doctors have long believed that this drug works by damaging the building blocks of DNA. However, a new study from MIT has found that in cancers of the colon and other gastrointestinal cancers, it actually kills cells by interfering with RNA synthesis.

The findings could have a significant effect on how doctors treat many cancer patients. Usually, 5-fluorouracil is given in combination with chemotherapy drugs that damage DNA, but the new study found that for colon cancer, this combination does not achieve the synergistic effects that were hoped for. Instead, combining 5-FU with drugs that affect RNA synthesis could make it more effective in patients with GI cancers, the researchers say.

“Our work is the most definitive study to date showing that RNA incorporation of the drug, leading to an RNA damage response, is responsible for how the drug works in GI cancers,” says Michael Yaffe, a David H. Koch Professor of Science at MIT, the director of the MIT Center for Precision Cancer Medicine, and a member of MIT’s Koch Institute for Integrative Cancer Research. “Textbooks implicate the DNA effects of the drug as the mechanism in all cancer types, but our data shows that RNA damage is what’s really important for the types of tumors, like GI cancers, where the drug is used clinically.”

Yaffe, the senior author of the new study, hopes to plan clinical trials of 5-fluorouracil with drugs that would enhance its RNA-damaging effects and kill cancer cells more effectively.

Jung-Kuei Chen, a Koch Institute research scientist, and Karl Merrick, a former MIT postdoc, are the lead authors of the paper, which appears today in Cell Reports Medicine.

An unexpected mechanism

Clinicians use 5-fluorouracil (5-FU) as a first-line drug for colon, rectal, and pancreatic cancers. It’s usually given in combination with oxaliplatin or irinotecan, which damage DNA in cancer cells. The combination was thought to be effective because 5-FU can disrupt the synthesis of DNA nucleotides. Without those building blocks, cells with damaged DNA wouldn’t be able to efficiently repair the damage and would undergo cell death.

Yaffe’s lab, which studies cell signaling pathways, wanted to further explore the underlying mechanisms of how these drug combinations preferentially kill cancer cells.

The researchers began by testing 5-FU in combination with oxaliplatin or irinotecan in colon cancer cells grown in the lab. To their surprise, they found that not only were the drugs not synergistic, but in many cases they were less effective at killing cancer cells than one would expect from simply adding together the effects of each drug given alone.

“One would have expected these combinations to cause synergistic cancer cell death because you are targeting two different aspects of a shared process: breaking DNA, and making nucleotides,” Yaffe says. “Karl looked at a dozen colon cancer cell lines, and not only were the drugs not synergistic, in most cases they were antagonistic. One drug seemed to be undoing what the other drug was doing.”

Yaffe’s lab then teamed up with Adam Palmer, an assistant professor of pharmacology at the University of North Carolina School of Medicine, who specializes in analyzing data from clinical trials. Palmer’s research group examined data from colon cancer patients who had been on one or more of these drugs and showed that the drugs did not show synergistic effects on survival in most patients.

“This confirmed that when you give these combinations to people, it’s not generally true that the drugs are actually working together in a beneficial way within an individual patient,” Yaffe says. “Instead, it appears that one drug in the combination works well for some patients while another drug in the combination works well in other patients. We just cannot yet predict which drug by itself is best for which patient, so everyone gets the combination.”

These results led the researchers to wonder just how 5-FU was working, if not by disrupting DNA repair. Studies in yeast and mammalian cells had shown that the drug also gets incorporated into RNA nucleotides, but there has been dispute over how much this RNA damage contributes to the drug’s toxic effects on cancer cells.

Inside cells, 5-FU is broken down into two different metabolites. One of these gets incorporated into DNA nucleotides, and the other into RNA nucleotides. In studies of colon cancer cells, the researchers found that the metabolite that interferes with RNA was much more effective at killing colon cancer cells than the one that disrupts DNA.

That RNA damage appears to primarily affect ribosomal RNA, a molecule that forms part of the ribosome — a cell organelle responsible for assembling new proteins. If cells can’t form new ribosomes, they can’t produce enough proteins to function. Additionally, the lack of undamaged ribosomal RNA causes cells to destroy a large set of proteins that normally bind up the RNA to make new functional ribosomes.

The researchers are now exploring how this ribosomal RNA damage leads cells to undergo programmed cell death, or apoptosis. They hypothesize that sensing of the damaged RNAs within cell structures called lysosomes somehow triggers an apoptotic signal.

“My lab is very interested in trying to understand the signaling events during disruption of ribosome biogenesis, particularly in GI cancers and even some ovarian cancers, that cause the cells to die. Somehow, they must be monitoring the quality control of new ribosome synthesis, which somehow is connected to the death pathway machinery,” Yaffe says.

New combinations

The findings suggest that drugs that stimulate ribosome production could work together with 5-FU to make a highly synergistic combination. In their study, the researchers showed that a molecule that inhibits KDM2A, a suppressor of ribosome production, helped to boost the rate of cell death in colon cancer cells treated with 5-FU.

The findings also suggest a possible explanation for why combining 5-FU with a DNA-damaging drug often makes both drugs less effective. Some DNA-damaging drugs send a signal to the cell to stop making new ribosomes, which would negate 5-FU’s effect on RNA. A better approach may be to give each drug a few days apart, which would give patients the potential benefits of each drug, without having them cancel each other out.

“Importantly, our data doesn’t say that these combination therapies are wrong. We know they’re effective clinically. It just says that if you adjust how you give these drugs, you could potentially make those therapies even better, with relatively minor changes in the timing of when the drugs are given,” Yaffe says.

He is now hoping to work with collaborators at other institutions to run a phase 2 or 3 clinical trial in which patients receive the drugs on an altered schedule.

“A trial is clearly needed to look for efficacy, but it should be straightforward to initiate because these are already clinically accepted drugs that form the standard of care for GI cancers. All we’re doing is changing the timing with which we give them,” he says.

The researchers also hope that their work could lead to the identification of biomarkers that predict which patients’ tumors will be more susceptible to drug combinations that include 5-FU. One such biomarker could be RNA polymerase I, which is active when cells are producing a lot of ribosomal RNA.

The research was funded by the Damon Runyon Cancer Research Foundation, a fellowship from the Ludwig Center at MIT, the National Institutes of Health, the Ovarian Cancer Research Fund, the Charles and Marjorie Holloway Foundation, and the STARR Cancer Consortium.


Victor Ambros ’75, PhD ’79 and Gary Ruvkun share Nobel Prize in Physiology or Medicine

The scientists, who worked together as postdocs at MIT, are honored for their discovery of microRNA — a class of molecules that are critical for gene regulation.


MIT alumnus Victor Ambros ’75, PhD ’79 and Gary Ruvkun, who did his postdoctoral training at MIT, will share the 2024 Nobel Prize in Physiology or Medicine, the Nobel Assembly at the Karolinska Institute announced this morning in Stockholm.

Ambros, a professor at the University of Massachusetts Chan Medical School, and Ruvkun, a professor at Harvard Medical School and Massachusetts General Hospital, were honored for their discovery of microRNA, a class of tiny RNA molecules that play a critical role in gene control.

“Their groundbreaking discovery revealed a completely new principle of gene regulation that turned out to be essential for multicellular organisms, including humans. It is now known that the human genome codes for over one thousand microRNAs. Their surprising discovery revealed an entirely new dimension to gene regulation. MicroRNAs are proving to be fundamentally important for how organisms develop and function,” the Nobel committee said in its announcement today.

During the late 1980s, Ambros and Ruvkun both worked as postdocs in the laboratory of H. Robert Horvitz, a David H. Koch Professor at MIT, who was awarded the Nobel Prize in 2002.

While in Horvitz’s lab, the pair began studying gene control in the roundworm C. elegans — an effort that laid the groundwork for their Nobel discoveries. They studied two mutant forms of the worm, known as lin-4 and lin-14, that showed defects in the timing of the activation of genetic programs that control development.

In the early 1990s, while Ambros was a faculty member at Harvard University, he made a surprising discovery. The lin-4 gene, instead of encoding a protein, produced a very short RNA molecule that appeared to inhibit the expression of lin-14.

At the same time, Ruvkun was continuing to study these C. elegans genes in his lab at MGH and Harvard. He showed that lin-4 did not inhibit lin-14 by preventing the lin-14 gene from being transcribed into messenger RNA; instead, it appeared to turn off the gene’s expression later on, by preventing production of the protein encoded by lin-14.

The two compared results and realized that the sequence of lin-4 was complementary to some short sequences of lin-14. Lin-4, they showed, was binding to messenger RNA encoding lin-14 and blocking it from being translated into protein — a mechanism for gene control that had never been seen before. Those results were published in two articles in the journal Cell in 1993.
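The core of that 1993 insight, antiparallel base-pairing between a small RNA and sites in a target messenger RNA, can be sketched in a few lines of Python. The sequences below are made-up placeholders, not the real lin-4 or lin-14 sequences, and real microRNA-target pairing is only partially complementary, unlike this exact-match check.

    # Watson-Crick pairing for RNA (G-U wobble pairs are ignored for simplicity).
    PAIRS = {"A": "U", "U": "A", "G": "C", "C": "G"}

    def reverse_complement(rna):
        """Return the antiparallel base-pairing partner of an RNA sequence."""
        return "".join(PAIRS[base] for base in reversed(rna))

    def is_complementary(small_rna, target_site):
        """True if the small RNA can pair perfectly with the target site, antiparallel."""
        return reverse_complement(small_rna) == target_site

    small_rna = "AGUCAGGUC"                       # placeholder small RNA
    target_site = reverse_complement(small_rna)   # a perfectly matching site in a 3' UTR
    print(is_complementary(small_rna, target_site))  # True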

In an interview with the Journal of Cell Biology, Ambros credited the contributions of his collaborators, including his wife, Rosalind “Candy” Lee ’76, and postdoc Rhonda Feinbaum, who both worked in his lab, cloned and characterized the lin-4 microRNA, and were co-authors on one of the 1993 Cell papers.

In 2000, Ruvkun published the discovery of another microRNA molecule, encoded by a gene called let-7, which is found throughout the animal kingdom. Since then, more than 1,000 microRNA genes have been found in humans.

“Ambros and Ruvkun’s seminal discovery in the small worm C. elegans was unexpected, and revealed a new dimension to gene regulation, essential for all complex life forms,” the Nobel citation declared.

Ambros, who was born in New Hampshire and grew up in Vermont, earned his PhD at MIT under the supervision of David Baltimore, then an MIT professor of biology, who received a Nobel Prize in 1975. Ambros was a longtime faculty member at Dartmouth College before joining the faculty at the University of Massachusetts Chan Medical School in 2008.

Ruvkun is a graduate of the University of California at Berkeley and earned his PhD at Harvard University before joining Horvitz’s lab at MIT.


Translating MIT research into real-world results

MIT’s innovation and entrepreneurship system helps launch water, food, and ag startups with social and economic benefits.


Inventive solutions to some of the world’s most critical problems are being discovered in labs, classrooms, and centers across MIT every day. Many of these solutions move from the lab to the commercial world with the help of over 85 Institute resources that comprise MIT’s robust innovation and entrepreneurship (I&E) ecosystem. The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) draws on MIT’s wealth of I&E knowledge and experience to help researchers commercialize their breakthrough technologies through the J-WAFS Solutions grant program. By collaborating with I&E programs on campus, J-WAFS prepares MIT researchers for the commercial world, where their novel innovations aim to improve productivity, accessibility, and sustainability of water and food systems, creating economic, environmental, and societal benefits along the way.

The J-WAFS Solutions program launched in 2015 with support from Community Jameel, an international organization that advances science and learning for communities to thrive. Since 2015, J-WAFS Solutions has supported 19 projects with one-year grants of up to $150,000, with some projects receiving renewal grants for a second year of support. Solutions projects all address challenges related to water or food. Modeled after the esteemed grant program of MIT’s Deshpande Center for Technological Innovation, and initially administered by Deshpande Center staff, the J-WAFS Solutions program follows a similar approach by supporting projects that have already completed the basic research and proof-of-concept phases. With technologies that are one to three years away from commercialization, grantees work on identifying their potential markets and learn to focus on how their technology can meet the needs of future customers.

“Ingenuity thrives at MIT, driving inventions that can be translated into real-world applications for widespread adoption, implementation, and use,” says J-WAFS Director Professor John H. Lienhard V. “But successful commercialization of MIT technology requires engineers to focus on many challenges beyond making the technology work. MIT’s I&E network offers a variety of programs that help researchers develop technology readiness, investigate markets, conduct customer discovery, and initiate product design and development,” Lienhard adds. “With this strong I&E framework, many J-WAFS Solutions teams have established startup companies by the completion of the grant. J-WAFS-supported technologies have had powerful, positive effects on human welfare. Together, the J-WAFS Solutions program and MIT’s I&E ecosystem demonstrate how academic research can evolve into business innovations that make a better world,” Lienhard says.

Creating I&E collaborations

In addition to support for furthering research, J-WAFS Solutions grants allow faculty, students, postdocs, and research staff to learn the fundamentals of how to transform their work into commercial products and companies. As part of the grant requirements, researchers must interact with mentors through MIT Venture Mentoring Service (VMS). VMS connects MIT entrepreneurs with teams of carefully selected professionals who provide free and confidential mentorship, guidance, and other services to help advance ideas into for-profit, for-benefit, or nonprofit ventures. Since 2000, VMS has mentored over 4,600 MIT entrepreneurs across all industries, through a dynamic and accomplished group of nearly 200 mentors who volunteer their time so that others may succeed. The mentors provide impartial and unbiased advice to members of the MIT community, including MIT alumni in the Boston area. J-WAFS Solutions teams have been guided by 21 mentors from numerous companies and nonprofits. Mentors often attend project events and progress meetings throughout the grant period.

“Working with VMS has provided me and my organization with a valuable sounding board for a range of topics, big and small,” says Eric Verploegen PhD ’08, former research engineer in the MIT D-Lab and founder of J-WAFS spinout CoolVeg. Along with professors Leon Glicksman and Daniel Frey, Verploegen received a J-WAFS Solutions grant in 2021 to commercialize cold-storage chambers that use evaporative cooling to help farmers preserve fruits and vegetables in rural off-grid communities. Verploegen started CoolVeg in 2022 to increase access and adoption of open-source, evaporative cooling technologies through collaborations with businesses, research institutions, nongovernmental organizations, and government agencies. “Working as a solo founder at my nonprofit venture, it is always great to have avenues to get feedback on communications approaches, overall strategy, and operational issues that my mentors have experience with,” Verploegen says. Three years after the initial Solutions grant, one of the VMS mentors assigned to the evaporative cooling team still acts as a mentor to Verploegen today.

Another Solutions grant requirement is for teams to participate in the Spark program — a free, three-week course that provides an entry point for researchers to explore the potential value of their innovation. Spark is part of the National Science Foundation’s (NSF) Innovation Corps (I-Corps), which is an “immersive, entrepreneurial training program that facilitates the transformation of invention to impact.” In 2018, MIT received an award from the NSF, establishing the New England Regional Innovation Corps Node (NE I-Corps) to deliver I-Corps training to participants across New England. Trainings are open to researchers, engineers, scientists, and others who want to engage in a customer discovery process for their technology. Offered regularly throughout the year, the Spark course helps participants identify markets and explore customer needs in order to understand how their technologies can be positioned competitively in their target markets. They learn to assess barriers to adoption, as well as potential regulatory issues or other challenges to commercialization. NE-I-Corps reports that since its start, over 1,200 researchers from MIT have completed the program and have gone on to launch 175 ventures, raising over $3.3 billion in funding from grants and investors, and creating over 1,800 jobs.

Constantinos Katsimpouras, a research scientist in the Department of Chemical Engineering, went through the NE I-Corps Spark program to better understand the customer base for a technology he developed with professors Gregory Stephanopoulos and Anthony Sinskey. The group received a J-WAFS Solutions grant in 2021 for their microbial platform that converts food waste from the dairy industry into valuable products. “As a scientist with no prior experience in entrepreneurship, the program introduced me to important concepts and tools for conducting customer interviews and adopting a new mindset,” notes Katsimpouras. “Most importantly, it encouraged me to get out of the building and engage in interviews with potential customers and stakeholders, providing me with invaluable insights and a deeper understanding of my industry,” he adds. These interviews also helped connect the team with companies willing to provide resources to test and improve their technology — a critical step to the scale-up of any lab invention.

In the case of Professor Cem Tasan’s research group in the Department of Materials Science and Engineering, the I-Corps program led them to the J-WAFS Solutions grant, instead of the other way around. Tasan is currently working with postdoc Onur Guvenc on a J-WAFS Solutions project to manufacture formable sheet metal by consolidating steel scrap without melting, thereby reducing water use compared to traditional steel processing. Before applying for the Solutions grant, Guvenc took part in NE I-Corps. Like Katsimpouras, Guvenc benefited from the interaction with industry. “This program required me to step out of the lab and engage with potential customers, allowing me to learn about their immediate challenges and test my initial assumptions about the market,” Guvenc recalls. “My interviews with industry professionals also made me aware of the connection between water consumption and steelmaking processes, which ultimately led to the J-WAFS 2023 Solutions Grant,” says Guvenc.

After completing the Spark program, participants may be eligible to apply for the Fusion program, which provides microgrants of up to $1,500 to conduct further customer discovery. The Fusion program is self-paced, requiring teams to conduct 12 additional customer interviews and craft a final presentation summarizing their key learnings. Professor Patrick Doyle’s J-WAFS Solutions team completed the Spark and Fusion programs at MIT. Most recently, their team was accepted to join the NSF I-Corps National program with a $50,000 award. The intensive program requires teams to complete an additional 100 customer discovery interviews over seven weeks. Located in the Department of Chemical Engineering, the Doyle lab is working on a sustainable microparticle hydrogel system to rapidly remove micropollutants from water. The team’s focus has expanded to higher value purifications in amino acid and biopharmaceutical manufacturing applications. Devashish Gokhale PhD ’24 worked with Doyle on much of the underlying science.

“Our platform technology could potentially be used for selective separations in very diverse market segments, ranging from individual consumers to large industries and government bodies with varied use-cases,” Gokhale explains. He goes on to say, “The I-Corps Spark program added significant value by providing me with an effective framework to approach this problem ... I was assigned a mentor who provided critical feedback, teaching me how to formulate effective questions and identify promising opportunities.” Gokhale says that by the end of Spark, the team was able to identify the best target markets for their products. He also says that the program provided valuable seminars on topics like intellectual property, which was helpful in subsequent discussions the team had with MIT’s Technology Licensing Office.

Another member of Doyle’s team, Arjav Shah, a recent PhD from MIT’s Department of Chemical Engineering and a current MBA candidate at the MIT Sloan School of Management, is spearheading the team’s commercialization plans. Shah attended Fusion last fall and hopes to lead efforts to incorporate a startup company called hydroGel. “I admire the hypothesis-driven approach of the I-Corps program,” says Shah. “It has enabled us to identify our customers’ biggest pain points, which will hopefully lead us to finding a product-market fit.” He adds, “Based on our learnings from the program, we have been able to pivot to impact-driven, higher-value applications in the food processing and biopharmaceutical industries.” Postdoc Luca Mazzaferro will lead the technical team at hydroGel alongside Shah.

In a different project, Qinmin Zheng, a postdoc in the Department of Civil and Environmental Engineering, is working with Professor Andrew Whittle and Lecturer Fábio Duarte. Zheng plans to take the Fusion course this fall to advance their J-WAFS Solutions project that aims to commercialize a novel sensor to quantify the relative abundance of major algal species and provide early detection of harmful algal blooms. After completing Spark, Zheng says he’s “excited to participate in the Fusion program, and potentially the National I-Corps program, to further explore market opportunities and minimize risks in our future product development.”

Economic and societal benefits

Commercializing technologies developed at MIT is one of the ways J-WAFS helps ensure that MIT research advances will have real-world impacts in water and food systems. Since its inception, the J-WAFS Solutions program has awarded 28 grants (including renewals), which have supported 19 projects that address a wide range of global water and food challenges. The program has distributed over $4 million to 24 professors, 11 research staff, 15 postdocs, and 30 students across MIT. Nearly half of all J-WAFS Solutions projects have resulted in spinout companies or commercialized products, including eight companies to date plus two open-source technologies.

Nona Technologies is an example of a J-WAFS spinout that is helping the world by developing new approaches to produce freshwater for drinking. Desalination — the process of removing salts from seawater — typically requires a large-scale technology called reverse osmosis. But Nona created a desalination device that can work in remote off-grid locations. By separating salt and bacteria from water using electric current through a process called ion concentration polarization (ICP), their technology also reduces overall energy consumption. The novel method was developed by Jongyoon Han, professor of electrical engineering and biological engineering, and research scientist Junghyo Yoon. Along with Bruce Crawford, a Sloan MBA alum, Han and Yoon created Nona Technologies to bring their lightweight, energy-efficient desalination technology to the market.

“My feeling early on was that once you have technology, commercialization will take care of itself,” admits Crawford. The team completed both the Spark and Fusion programs and quickly realized that much more work would be required. “Even in our first 24 interviews, we learned that the two first markets we envisioned would not be viable in the near term, and we also got our first hints at the beachhead we ultimately selected,” says Crawford. Nona Technologies has since won MIT’s $100K Entrepreneurship Competition, received media attention from outlets like Newsweek and Fortune, and hired a team that continues to further the technology for deployment in resource-limited areas where clean drinking water may be scarce. 

Food-borne diseases sicken millions of people worldwide each year, but J-WAFS researchers are addressing this issue by integrating molecular engineering, nanotechnology, and artificial intelligence to revolutionize food pathogen testing. Professors Tim Swager and Alexander Klibanov, of the Department of Chemistry, were awarded one of the first J-WAFS Solutions grants for their sensor that targets food safety pathogens. The sensor uses specialized droplets that behave like a dynamic lens, changing in the presence of target bacteria in order to detect dangerous bacterial contamination in food. In 2018, Swager launched Xibus Systems Inc. to bring the sensor to market and advance food safety for greater public health, sustainability, and economic security.

“Our involvement with the J-WAFS Solutions Program has been vital,” says Swager. “It has provided us with a bridge between the academic world and the business world and allowed us to perform more detailed work to create a usable application,” he adds. In 2022, Xibus developed a product called XiSafe, which enables the detection of contaminants like salmonella and listeria faster and with higher sensitivity than other food testing products. The innovation could save food processors billions of dollars worldwide and prevent thousands of food-borne fatalities annually.

J-WAFS Solutions companies have raised nearly $66 million in venture capital and other funding. Just this past June, J-WAFS spinout SiTration announced that it raised an $11.8 million seed round. Jeffrey Grossman, a professor in MIT’s Department of Materials Science and Engineering, was another early J-WAFS Solutions grantee for his work on low-cost energy-efficient filters for desalination. The project enabled the development of nanoporous membranes and resulted in two spinout companies, Via Separations and SiTration. SiTration was co-founded by Brendan Smith PhD ’18, who was a part of the original J-WAFS team. Smith is CEO of the company and has overseen the advancement of the membrane technology, which has gone on to reduce cost and resource consumption in industrial wastewater treatment, advanced manufacturing, and resource extraction of materials such as lithium, cobalt, and nickel from recycled electric vehicle batteries. The company also recently announced that it is working with the mining company Rio Tinto to handle harmful wastewater generated at mines.

But it's not just J-WAFS spinout companies that are producing real-world results. Products like the ECC Vial — a portable, low-cost method for E. coli detection in water — have been brought to the market and helped thousands of people. The test kit was developed by MIT D-Lab Lecturer Susan Murcott and Professor Jeffrey Ravel of the MIT History Section. The duo received a J-WAFS Solutions grant in 2018 to promote safely managed drinking water and improved public health in Nepal, where it is difficult to identify which wells are contaminated by E. coli. By the end of their grant period, the team had manufactured approximately 3,200 units, of which 2,350 were distributed — enough to help 12,000 people in Nepal. The researchers also trained local Nepalese on best manufacturing practices.

“It’s very important, in my life experience, to follow your dream and to serve others,” says Murcott. Economic success is important to the health of any venture, whether it’s a company or a product, but equally important is the social impact — a philosophy that J-WAFS research strives to uphold. “Do something because it’s worth doing and because it changes people’s lives and saves lives,” Murcott adds.

As J-WAFS prepares to celebrate its 10th anniversary this year, we look forward to continued collaboration with MIT’s many I&E programs to advance knowledge and develop solutions that will have tangible effects on the world’s water and food systems.

Learn more about the J-WAFS Solutions program and about innovation and entrepreneurship at MIT.


An interstellar instrument takes a final bow

The Plasma Science Experiment aboard NASA’s Voyager 2 spacecraft turns off after 47 years and 15 billion miles.


They planned to fly for four years and to get as far as Jupiter and Saturn. But nearly half a century and 15 billion miles later, NASA’s twin Voyager spacecraft have far exceeded their original mission, winging past the outer planets and busting out of our heliosphere, beyond the influence of the sun. The probes are currently making their way through interstellar space, traveling farther than any human-made object.

Along their improbable journey, the Voyagers made first-of-their-kind observations at all four giant outer planets and their moons using only a handful of instruments, including MIT’s Plasma Science Experiments — identical plasma sensors that were designed and built in the 1970s in Building 37 by MIT scientists and engineers.

The Plasma Science Experiment (also known as the Plasma Spectrometer, or PLS for short) measured charged particles in planetary magnetospheres, the solar wind, and the interstellar medium, the material between stars. Since launching on the Voyager 2 spacecraft in 1977, the PLS has revealed new phenomena near all the outer planets and in the solar wind across the solar system. The experiment played a crucial role in confirming the moment when Voyager 2 crossed the heliosphere and moved outside of the sun’s regime, into interstellar space.

Now, to conserve the little power left on Voyager 2 and prolong the mission’s life, the Voyager scientists and engineers have made the decision to shut off MIT’s Plasma Science Experiment. It’s the first in a line of science instruments that will progressively blink off over the coming years. On Sept. 26, the Voyager 2 PLS sent its last communication from 12.7 billion miles away, before it received the command to shut down.

MIT News spoke with John Belcher, the Class of 1922 Professor of Physics at MIT, who was a member of the original team that designed and built the plasma spectrometers, and John Richardson, principal research scientist at MIT’s Kavli Institute for Astrophysics and Space Research, who is the experiment’s principal investigator. Both Belcher and Richardson offered their reflections on the retirement of this interstellar piece of MIT history.

Q: Looking back at the experiment’s contributions, what are the greatest hits, in terms of what MIT’s Plasma Spectrometer has revealed about the solar system and interstellar space?

Richardson: A key PLS finding at Jupiter was the discovery of the Io torus, a plasma donut surrounding Jupiter, formed from sulphur and oxygen from Io’s volcanos (which were discovered in Voyager images). At Saturn, PLS found a magnetosphere full of water and oxygen that had been knocked off of Saturn’s icy moons. At Uranus and Neptune, the tilt of the magnetic fields led to PLS seeing smaller density features, with Uranus’ plasma disappearing near the planet. Another key PLS observation was of the termination shock, which was the first observation of the plasma at the largest shock in the solar system, where the solar wind stopped being supersonic. This boundary had a huge drop in speed and an increase in the density and temperature of the solar wind. And finally, PLS documented Voyager 2’s crossing of the heliopause by detecting a stopping of outward-flowing plasma. This signaled the end of the solar wind and the beginning of the local interstellar medium (LISM). Although not designed to measure the LISM, PLS constantly measured the interstellar plasma currents beyond the heliosphere. It is very sad to lose this instrument and data!

Belcher: It is important to emphasize that PLS was the result of decades of development by MIT Professor Herbert Bridge (1919-1995) and Alan Lazarus (1931-2014). The first version of the instrument they designed was flown on Explorer 10 in 1961. And the most recent version is flying on the Solar Probe, which is collecting measurements very close to the sun to understand the origins of solar wind. Bridge was the principal investigator for plasma probes on spacecraft which visited the sun and every major planetary body in the solar system.

Q: During their tenure aboard the Voyager probes, how did the plasma sensors do their job over the last 47 years?

Richardson: There were four Faraday cup detectors designed by Herb Bridge that measured currents from ions and electrons that entered the detectors. By measuring these particles at different energies, we could find the plasma velocity, density, and temperature in the solar wind and in the four planetary magnetospheres Voyager encountered. Voyager data were (and are still) sent to Earth every day and received by NASA’s deep space network of antennae. Keeping two 1970s-era spacecraft going for 47 years and counting has been an amazing feat of JPL engineering prowess — you can google the most recent rescue when Voyager 1 lost some memory in November of 2023 and stopped sending data. JPL figured out the problem and was able to reprogram the flight data system from 15 billion miles away, and all is back to normal now. Shutting down PLS involves sending a command which will get to Voyager 2 about 19 hours later, providing the rest of the spacecraft enough power to continue.
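As a quick plausibility check, the roughly 19 hours quoted above is simply the one-way light travel time over 12.7 billion miles, which a few lines of arithmetic confirm:

    # One-way light travel time from Earth to Voyager 2 at about 12.7 billion miles.
    MILES_TO_METERS = 1609.344
    SPEED_OF_LIGHT_M_S = 299_792_458.0

    distance_m = 12.7e9 * MILES_TO_METERS
    one_way_hours = distance_m / SPEED_OF_LIGHT_M_S / 3600
    print(f"one-way signal time: {one_way_hours:.1f} hours")  # ~18.9 hours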

Q: Once the plasma sensors have shut down, how much more could Voyager do, and how far might it still go?

Richardson: Voyager will still measure the galactic cosmic rays, magnetic fields, and plasma waves. The available power decreases about 4 watts per year as the plutonium which powers them decays. We hope to keep some of the instruments running until the mid-2030s, but that will be a challenge as power levels decrease.
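A back-of-the-envelope projection shows how a 4-watt-per-year decline maps onto a mid-2030s horizon. Only the decline rate comes from the interview; the current power level and the minimum power needed to keep the remaining instruments running are hypothetical placeholders.

    # Rough linear power-budget projection. Only the ~4 W/yr decline is from the
    # interview; the other two numbers are hypothetical placeholders.
    DECLINE_W_PER_YEAR = 4.0
    power_now_w = 220.0          # hypothetical current RTG output
    instrument_floor_w = 180.0   # hypothetical minimum to keep remaining instruments on

    years_left = (power_now_w - instrument_floor_w) / DECLINE_W_PER_YEAR
    print(f"roughly {years_left:.0f} more years of instrument operations")  # ~10 years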

Belcher: Nick Oberg at the Kapteyn Astronomical Institute in the Netherlands has made an exhaustive study of the future of the spacecraft, using data from the European Space Agency’s spacecraft Gaia. In about 30,000 years, the spacecraft will reach the distance to the nearest stars. Because space is so vast, there is zero chance that the spacecraft will collide directly with a star in the lifetime of the universe. However, the spacecraft surface will erode by microcollisions with vast clouds of interstellar dust, but this happens very slowly. 

In Oberg’s estimate, the Golden Records [identical records that were placed aboard each probe, that contain selected sounds and images to represent life on Earth] are likely to survive for a span of over 5 billion years. After those 5 billion years, things are difficult to predict, since at this point, the Milky Way will collide with its massive neighbor, the Andromeda galaxy. During this collision, there is a one in five chance that the spacecraft will be flung into the intergalactic medium, where there is little dust and little weathering. In that case, it is possible that the spacecraft will survive for trillions of years. A trillion years is about 100 times the current age of the universe. The Earth ceases to exist in about 6 billion years, when the sun enters its red giant phase and engulfs it.

In a “poor man’s” version of the Golden Record, Robert Butler, the chief engineer of the Plasma Instrument, inscribed the names of the MIT engineers and scientists who had worked on the spacecraft on the collector plate of the side-looking cup. Butler’s home state was New Hampshire, and he put the state motto, “Live Free or Die,” at the top of the list of names. Thanks to Butler, although New Hampshire will not survive for a trillion years, its state motto might. The flight spare of the PLS instrument is now displayed at the MIT Museum, where you can see the text of Butler’s message by peering into the side-looking sensor. 


AI pareidolia: Can machines spot faces in inanimate objects?

New dataset of “illusory” faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.


In 1994, Florida jewelry designer Diana Duyser discovered what she believed to be the Virgin Mary’s image in a grilled cheese sandwich, which she preserved and later auctioned for $28,000. But how much do we really understand about pareidolia, the phenomenon of seeing faces and patterns in objects when they aren’t really there? 

A new study from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) delves into this phenomenon, introducing an extensive, human-labeled dataset of 5,000 pareidolic images, far surpassing previous collections. Using this dataset, the team discovered several surprising results about the differences between human and machine perception, and how the ability to see faces in a slice of toast might have saved your distant relatives’ lives.

“Face pareidolia has long fascinated psychologists, but it’s been largely unexplored in the computer vision community,” says Mark Hamilton, MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and lead researcher on the work. “We wanted to create a resource that could help us understand how both humans and AI systems process these illusory faces.”

So what did all of these fake faces reveal? For one, AI models don’t seem to recognize pareidolic faces like we do. Surprisingly, the team found that it wasn’t until they trained algorithms to recognize animal faces that they became significantly better at detecting pareidolic faces. This unexpected connection hints at a possible evolutionary link between our ability to spot animal faces — crucial for survival — and our tendency to see faces in inanimate objects. “A result like this seems to suggest that pareidolia might not arise from human social behavior, but from something deeper: like quickly spotting a lurking tiger, or identifying which way a deer is looking so our primordial ancestors could hunt,” says Hamilton.

[Image: a row of five photos of animal faces above five photos of inanimate objects that look like faces]

Another intriguing discovery is what the researchers call the “Goldilocks Zone of Pareidolia,” a class of images where pareidolia is most likely to occur. “There’s a specific range of visual complexity where both humans and machines are most likely to perceive faces in non-face objects,” says William T. Freeman, MIT professor of electrical engineering and computer science and principal investigator of the project. “Too simple, and there’s not enough detail to form a face. Too complex, and it becomes visual noise.”

To uncover this, the team developed an equation that models how people and algorithms detect illusory faces.  When analyzing this equation, they found a clear “pareidolic peak” where the likelihood of seeing faces is highest, corresponding to images that have “just the right amount” of complexity. This predicted “Goldilocks zone” was then validated in tests with both real human subjects and AI face detection systems.
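The article does not reproduce the team’s equation, but the toy model below illustrates the idea of a “pareidolic peak”: detection likelihood rises and then falls as image complexity grows. The Gaussian form and its parameters are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def pareidolia_likelihood(complexity, peak=0.5, width=0.15):
    """Toy 'Goldilocks' curve: the probability of perceiving an illusory face
    is low for very simple and very complex images and highest at an
    intermediate complexity.  The Gaussian shape, peak location, and width
    are illustrative assumptions, not the paper's fitted model."""
    return np.exp(-((complexity - peak) ** 2) / (2.0 * width ** 2))

# Scan a normalized complexity axis (0 = blank image, 1 = visual noise)
# and report where the toy model places the "pareidolic peak".
c = np.linspace(0.0, 1.0, 101)
p = pareidolia_likelihood(c)
print(f"peak at complexity ~{c[p.argmax()]:.2f}, likelihood {p.max():.2f}")
```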

[Image: three photos of clouds above three photos of a fruit tart; the left photo of each is “Too Simple” to perceive a face, the middle photo is “Just Right,” and the last is “Too Complex”]

This new dataset, “Faces in Things,” dwarfs those of previous studies, which typically used only 20-30 stimuli. That scale allowed the researchers to explore how state-of-the-art face detection algorithms behave after fine-tuning on pareidolic faces, showing not only that these algorithms could be edited to detect such faces, but also that they could act as a silicon stand-in for our own brains, letting the team ask and answer questions about the origins of pareidolic face detection that are impossible to ask in humans.

To build this dataset, the team curated approximately 20,000 candidate images from the LAION-5B dataset, which were then meticulously labeled and judged by human annotators. This process involved drawing bounding boxes around perceived faces and answering detailed questions about each face, such as the perceived emotion, age, and whether the face was accidental or intentional. “Gathering and annotating thousands of images was a monumental task,” says Hamilton. “Much of the dataset owes its existence to my mom,” a retired banker, “who spent countless hours lovingly labeling images for our analysis.”
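For a sense of what one such label might contain, here is a minimal, hypothetical annotation record combining a bounding box with the attribute questions described above; the field names and example values are illustrative, not the released dataset’s actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IllusoryFaceAnnotation:
    """One human label for a candidate pareidolic image.

    The field names are illustrative; the released 'Faces in Things'
    dataset may use a different schema."""
    image_url: str          # source image (e.g., drawn from LAION-5B)
    bbox: List[float]       # [x_min, y_min, x_max, y_max] in pixels
    perceived_emotion: str  # e.g., "happy", "surprised", "neutral"
    perceived_age: str      # e.g., "child", "adult", "elderly"
    accidental: bool        # True if the face appears unintentional

annotations = [
    IllusoryFaceAnnotation(
        image_url="https://example.com/toaster.jpg",  # placeholder URL
        bbox=[34.0, 50.0, 210.0, 220.0],
        perceived_emotion="surprised",
        perceived_age="adult",
        accidental=True,
    )
]
print(len(annotations), "annotation(s) loaded")
```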

The study also has potential applications in improving face detection systems by reducing false positives, which could have implications for fields like self-driving cars, human-computer interaction, and robotics. The dataset and models could also help areas like product design, where understanding and controlling pareidolia could create better products. “Imagine being able to automatically tweak the design of a car or a child’s toy so it looks friendlier, or ensuring a medical device doesn’t inadvertently appear threatening,” says Hamilton.

“It’s fascinating how humans instinctively interpret inanimate objects with human-like traits. For instance, when you glance at an electrical socket, you might immediately envision it singing, and you can even imagine how it would ‘move its lips.’ Algorithms, however, don’t naturally recognize these cartoonish faces in the same way we do,” says Hamilton. “This raises intriguing questions: What accounts for this difference between human perception and algorithmic interpretation? Is pareidolia beneficial or detrimental? Why don’t algorithms experience this effect as we do? These questions sparked our investigation, as this classic psychological phenomenon in humans had not been thoroughly explored in algorithms.”

As the researchers prepare to share their dataset with the scientific community, they’re already looking ahead. Future work may involve training vision-language models to understand and describe pareidolic faces, potentially leading to AI systems that can engage with visual stimuli in more human-like ways.

“This is a delightful paper! It is fun to read and it makes me think. Hamilton et al. propose a tantalizing question: Why do we see faces in things?” says Pietro Perona, the Allen E. Puckett Professor of Electrical Engineering at Caltech, who was not involved in the work. “As they point out, learning from examples, including animal faces, goes only half-way to explaining the phenomenon. I bet that thinking about this question will teach us something important about how our visual system generalizes beyond the training it receives through life.”

Hamilton and Freeman’s co-authors include Simon Stent, staff research scientist at the Toyota Research Institute; Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences, NVIDIA research scientist, and former CSAIL member; and CSAIL affiliates postdoc Vasha DuTell, Anne Harrington MEng ’23, and Research Scientist Jennifer Corbett. Their work was supported, in part, by the National Science Foundation and the CSAIL MEnTorEd Opportunities in Research (METEOR) Fellowship, with sponsorship from the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator. The MIT SuperCloud and Lincoln Laboratory Supercomputing Center provided the high-performance computing resources used to produce the researchers’ results.

This work is being presented this week at the European Conference on Computer Vision.


Mars’ missing atmosphere could be hiding in plain sight

A new study shows Mars’ early thick atmosphere could be locked up in the planet’s clay surface.


Mars wasn’t always the cold desert we see today. There’s increasing evidence that water once flowed on the Red Planet’s surface, billions of years ago. And if there was water, there must also have been a thick atmosphere to keep that water from freezing. But sometime around 3.5 billion years ago, the water dried up, and the air, once heavy with carbon dioxide, dramatically thinned, leaving only the wisp of an atmosphere that clings to the planet today.

Where exactly did Mars’ atmosphere go? This question has been a central mystery of Mars’ 4.6-billion-year history.

For two MIT geologists, the answer may lie in the planet’s clay. In a paper appearing today in Science Advances, they propose that much of Mars’ missing atmosphere could be locked up in the planet’s clay-covered crust.

The team makes the case that, while water was present on Mars, the liquid could have trickled through certain rock types and set off a slow chain of reactions that progressively drew carbon dioxide out of the atmosphere and converted it into methane — a form of carbon that could be stored for eons in the planet’s clay surface.

Similar processes occur in some regions on Earth. The researchers used their knowledge of interactions between rocks and gases on Earth and applied that to how similar processes could play out on Mars. They found that, given how much clay is estimated to cover Mars’ surface, the planet’s clay could hold up to 1.7 bar of carbon dioxide, which would be equivalent to around 80 percent of the planet’s initial, early atmosphere.
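As a back-of-the-envelope check on what 1.7 bar of CO2 represents, the sketch below converts that surface pressure into a mass using standard textbook values for Mars’ radius and surface gravity; the calculation is purely illustrative and is not taken from the study.

```python
import math

# Rough, illustrative conversion of a 1.7-bar CO2 inventory into mass.
# Planetary numbers are standard textbook values, not from the study.
MARS_RADIUS_M = 3.39e6   # mean radius [m]
MARS_GRAVITY = 3.71      # surface gravity [m/s^2]
PRESSURE_PA = 1.7e5      # 1.7 bar expressed in pascals

surface_area = 4.0 * math.pi * MARS_RADIUS_M ** 2       # [m^2]
# Hydrostatic column: pressure = (mass * g) / area  =>  mass = P * A / g
co2_mass_kg = PRESSURE_PA * surface_area / MARS_GRAVITY

print(f"Surface area: {surface_area:.2e} m^2")
print(f"CO2 mass for 1.7 bar: {co2_mass_kg:.1e} kg")    # roughly 7e18 kg
```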

It’s possible that this sequestered Martian carbon could one day be recovered and converted into propellant to fuel future missions between Mars and Earth, the researchers propose.

“Based on our findings on Earth, we show that similar processes likely operated on Mars, and that copious amounts of atmospheric CO2 could have transformed to methane and been sequestered in clays,” says study author Oliver Jagoutz, professor of geology in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “This methane could still be present and maybe even used as an energy source on Mars in the future.”

The study’s lead author is recent EAPS graduate Joshua Murray PhD ’24.

In the folds

Jagoutz’ group at MIT seeks to identify the geologic processes and interactions that drive the evolution of Earth’s lithosphere — the hard and brittle outer layer that includes the crust and upper mantle, where tectonic plates lie.

In 2023, he and Murray focused on a type of surface clay mineral called smectite, which is known to be a highly effective trap for carbon. Within a single grain of smectite are a multitude of folds, within which carbon can sit undisturbed for billions of years. They showed that smectite on Earth was likely a product of tectonic activity, and that, once exposed at the surface, the clay minerals acted to draw down and store enough carbon dioxide from the atmosphere to cool the planet over millions of years.

Soon after the team reported their results, Jagoutz happened to look at a map of the surface of Mars and realized that much of that planet’s surface was covered in the same smectite clays. Could the clays have had a similar carbon-trapping effect on Mars, and if so, how much carbon could the clays hold?

“We know this process happens, and it is well-documented on Earth. And these rocks and clays exist on Mars,” Jagoutz says. “So, we wanted to try and connect the dots.”

“Every nook and cranny”

Unlike on Earth, where smectite is a consequence of continental plates shifting and uplifting to bring rocks from the mantle to the surface, there is no such tectonic activity on Mars. The team looked for ways in which the clays could have formed on Mars, based on what scientists know of the planet’s history and composition.

For instance, some remote measurements of Mars’ surface suggest that at least part of the planet’s crust contains ultramafic igneous rocks, similar to those that produce smectites through weathering on Earth. Other observations reveal geologic patterns similar to terrestrial rivers and tributaries, where water could have flowed and reacted with the underlying rock.

Jagoutz and Murray wondered whether water could have reacted with Mars’ deep ultramafic rocks in a way that would produce the clays that cover the surface today. They developed a simple model of rock chemistry, based on what is known of how igneous rocks interact with their environment on Earth.

They applied this model to Mars, where scientists believe the crust is mostly made up of igneous rock that is rich in the mineral olivine. The team used the model to estimate the changes that olivine-rich rock might undergo, assuming that water existed on the surface for at least a billion years, and the atmosphere was thick with carbon dioxide.

“At this time in Mars’ history, we think CO2 is everywhere, in every nook and cranny, and water percolating through the rocks is full of CO2 too,” Murray says.

Over about a billion years, water trickling through the crust would have slowly reacted with olivine — a mineral that is rich in a reduced form of iron. Oxygen from the water would have bound to that iron, releasing hydrogen and forming the red, oxidized iron that gives the planet its iconic color. This free hydrogen would then have combined with carbon dioxide in the water to form methane. As this reaction progressed over time, olivine would have slowly transformed into another iron-rich mineral known as serpentine, which then continued to react with water to form smectite.
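One commonly cited, idealized way to write this chemistry is the serpentinization of the iron-rich (fayalite) end-member of olivine, which releases hydrogen, followed by methanation of dissolved CO2. These textbook reactions are shown only to illustrate the mechanism described above; the actual Martian reaction network, including the magnesium-bearing olivine that yields serpentine and smectite, is more complex:

$$3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}$$

$$\mathrm{CO_2} + 4\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_4} + 2\,\mathrm{H_2O}$$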

“These smectite clays have so much capacity to store carbon,” Murray says. “So then we used existing knowledge of how these minerals are stored in clays on Earth, and extrapolated to say: If the Martian surface has this much clay in it, how much methane can you store in those clays?”

He and Jagoutz found that if Mars is covered in a layer of smectite that is 1,100 meters deep, this amount of clay could store a huge amount of methane, equivalent to most of the carbon dioxide in the atmosphere that is thought to have disappeared since the planet dried up.

“We find that estimates of global clay volumes on Mars are consistent with a significant fraction of Mars’ initial CO2 being sequestered as organic compounds within the clay-rich crust,” Murray says. “In some ways, Mars’ missing atmosphere could be hiding in plain sight.”

“Where the CO2 went from an early, thicker atmosphere is a fundamental question in the history of the Mars atmosphere, its climate, and the habitability by microbes,” says Bruce Jakosky, professor emeritus of geology at the University of Colorado and principal investigator on the Mars Atmosphere and Volatile Evolution (MAVEN) mission, which has been orbiting and studying Mars’ upper atmosphere since 2014. Jakosky was not involved with the current study. “Murray and Jagoutz examine the chemical interaction of rocks with the atmosphere as a means of removing CO2. At the high end of our estimates of how much weathering has occurred, this could be a major process in removing CO2 from Mars’ early atmosphere.”

This work was supported, in part, by the National Science Foundation.


Startup helps people fall asleep by aligning audio signals with brainwaves

Elemind, founded by researchers from MIT, has developed a headband that uses acoustic stimulation to move people into a sleep state.


Do you ever toss and turn in bed after a long day, wishing you could just program your brain to turn off and get some sleep?

That may sound like science fiction, but that’s the goal of the startup Elemind, which is using an electroencephalogram (EEG) headband that emits acoustic stimulation aligned with people’s brainwaves to move them into a sleep state more quickly.

In a small study of adults with sleep onset insomnia, 30 minutes of stimulation from the device decreased the time it took them to fall asleep by 10 to 15 minutes. This summer, Elemind began shipping its product to a small group of users as part of an early pilot program.

The company, which was founded by MIT Professor Ed Boyden ’99, MNG ’99; David Wang ’05, SM ’10, PhD ’15; former postdoc Nir Grossman; former Media Lab research affiliate Heather Read; and Meredith Perry, plans to collect feedback from early users before making the device more widely available.

Elemind’s team believes their device offers several advantages over sleeping pills that can cause side effects and addiction.

“We wanted to create a nonchemical option for people who wanted to get great sleep without side effects, so you could get all the benefits of natural sleep without the risks,” says Perry, Elemind’s CEO. “There’s a number of people that we think would benefit from this device, whether you’re a breastfeeding mom that might not want to take a sleep drug, somebody traveling across time zones that wants to fight jet lag, or someone that simply wants to improve your next-day performance and feel like you have more control over your sleep.”

From research to product

Wang’s academic journey at MIT spanned nearly 15 years, during which he earned four degrees, culminating in a PhD in artificial intelligence in 2015. In 2014, Wang was co-teaching a class with Grossman when they began working together to noninvasively measure real-time biological oscillations in the brain and body. Through that work, they became fascinated with a technique for modulating the brain known as phase-locked stimulation, which uses precisely timed visual, physical, or auditory stimulation that lines up with brain activity.

“You’re measuring some kind of changing variable, and then you want to change your stimulus in real time in response to that variable,” explains Boyden, who pointed Wang and Grossman to a set of mathematical techniques that became some of the core intellectual property of Elemind.

Phase-locked stimulation has been used in conjunction with electrodes implanted in the brain to disrupt seizures and tremors for years. But in 2021, Wang, Grossman, Boyden, and their collaborators published a paper showing they could use electrical stimulation from outside the skull to suppress essential tremor syndrome, the most common adult movement disorder.

The results were promising, but the founders decided to start by proving their approach worked in a less regulated space: sleep. They developed a system to deliver auditory pulses timed to promote or suppress alpha oscillations in the brain, which are elevated in insomnia.

That kicked off a years-long product development process that led to the headband device Elemind uses today. The headband measures brainwaves through EEG and feeds the results into Elemind's proprietary algorithms, which are used to dynamically generate audio through a bone conduction driver. The moment the device detects that someone is asleep, the audio is slowly tapered out.

“We have a theory that the sound that we play triggers an auditory-evoked response in the brain,” Wang says. “That means we get your auditory cortex to basically release this voltage burst that sweeps across your brain and interferes with other regions. Some people who have worn Elemind call it a brain jammer. For folks that ruminate a lot before they go to sleep, their brains are actively running. This encourages their brain to quiet down.”
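A minimal sketch of the general idea of phase-locked auditory stimulation is shown below: estimate the instantaneous phase of the alpha band from a short EEG window, and trigger a sound when the phase crosses a target value. The filter settings, target phase, and trigger logic are illustrative assumptions, not Elemind’s proprietary algorithm, and a real closed-loop device would need a causal, low-latency phase estimator.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250                    # EEG sampling rate [Hz] (illustrative)
ALPHA_BAND = (8.0, 12.0)    # alpha oscillations [Hz]
TARGET_PHASE = np.pi / 2    # phase at which to deliver a pulse (assumed)

def alpha_phase(eeg_window):
    """Instantaneous phase of the alpha band in a short EEG window.

    Minimal offline sketch: band-pass the signal, then take the angle of
    its analytic (Hilbert) representation."""
    b, a = butter(4, ALPHA_BAND, btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, eeg_window)
    return np.angle(hilbert(filtered))

def should_trigger_pulse(eeg_window, tolerance=0.2):
    """Return True when the most recent sample sits near the target phase."""
    phase_now = alpha_phase(eeg_window)[-1]
    return abs(np.angle(np.exp(1j * (phase_now - TARGET_PHASE)))) < tolerance

# Simulate two seconds of noisy 10 Hz "EEG" and check the trigger logic.
t = np.arange(0, 2, 1 / FS)
fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print("trigger audio pulse now?", should_trigger_pulse(fake_eeg))
```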

Beyond sleep

Elemind has established a collaboration with eight universities that allows researchers to explore the effectiveness of the company’s approach in a range of use cases, from tremors to memory formation, Alzheimer’s progression, and more.

“We’re not only developing this product, but also advancing the field of neuroscience by collecting high-resolution data to hopefully also help others conduct new research,” Wang says.

The collaborations have led to some exciting results. Researchers at McGill University found that using Elemind’s acoustic stimulation during sleep increased activity in areas of the cortex related to motor function and improved healthy adults’ performance in memory tasks. Other studies have shown the approach can be used to reduce essential tremors in patients and enhance sedation recovery.

Elemind is focused on its sleep application for now, but the company plans to develop other solutions, from medical interventions to memory and focus augmentation, as the science evolves.

“The vision is how do we move beyond sleep into what could ultimately become like an app store for the brain, where you can download a brain state like you download an app?” Perry says. “How can we make this a tool that can be applied to a bunch of different applications with a single piece of hardware that has a lot of different stimulation protocols?”


Research quantifying “nociception” could help improve management of surgical pain

New statistical models based on physiological data from more than 100 surgeries provide objective, accurate measures of the body’s subconscious perception of pain.


The degree to which a surgical patient’s subconscious processing of pain, or “nociception,” is properly managed by their anesthesiologist will directly affect the degree of post-operative drug side effects they’ll experience and the need for further pain management they’ll require. But pain is a subjective experience that is difficult to measure, even when patients are awake, much less when they are unconscious.

In a new study appearing in the Proceedings of the National Academy of Sciences, MIT and Massachusetts General Hospital (MGH) researchers describe a set of statistical models that objectively quantified nociception during surgery. Ultimately, they hope to help anesthesiologists optimize drug dose and minimize post-operative pain and side effects.

The new models integrate data meticulously logged over 18,582 minutes of 101 abdominal surgeries in men and women at MGH. Led by Sandya Subramanian PhD ’21, an assistant professor at the University of California at Berkeley and the University of California at San Francisco, the researchers collected and analyzed data from five physiological sensors as patients experienced a total of 49,878 distinct “nociceptive stimuli” (such as incisions or cautery). Moreover, the team recorded what drugs were administered, and how much and when, to factor in their effects on nociception or cardiovascular measures. They then used all the data to develop a set of statistical models that performed well in retrospectively indicating the body’s response to nociceptive stimuli.

The team’s goal is to furnish such accurate, objective, and physiologically principled information in real time to anesthesiologists who currently have to rely heavily on intuition and past experience in deciding how to administer pain-control drugs during surgery. If anesthesiologists give too much, patients can experience side effects ranging from nausea to delirium. If they give too little, patients may feel excessive pain after they awaken.

“Sandya’s work has helped us establish a principled way to understand and measure nociception (unconscious pain) during general anesthesia,” says study senior author Emery N. Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, and the Department of Brain and Cognitive Sciences at MIT. Brown is also an anesthesiologist at MGH and a professor at Harvard Medical School. “Our next objective is to make the insights that we have gained from Sandya’s studies reliable and practical for anesthesiologists to use during surgery.”

Surgery and statistics

The research began as Subramanian’s doctoral thesis project in Brown’s lab in 2017. The best prior attempts to objectively model nociception relied either solely on the electrocardiogram (ECG, an indirect indicator of heart-rate variability) or on systems that incorporate more than one measurement, but those were either based on lab experiments using pain stimuli that do not approach the intensity of surgical pain or were validated by statistically aggregating just a few time points across multiple patients’ surgeries, Subramanian says.

“There’s no other place to study surgical pain except for the operating room,” Subramanian says. “We wanted to not only develop the algorithms using data from surgery, but also actually validate it in the context in which we want someone to use it. If we are asking them to track moment-to-moment nociception during an individual surgery, we need to validate it in that same way.”

So she and Brown worked to advance the state of the art by collecting multi-sensor data during the whole course of actual surgeries and by accounting for the confounding effects of the drugs administered. In that way, they hoped to develop a model that could make accurate predictions that remained valid for the same patient all the way through their operation.

Part of the improvement the team achieved arose from tracking patterns of heart rate and also skin conductance. Changes in both of these physiological factors can be indications of the body’s primal “fight or flight” response to nociception or pain, but some drugs used during surgery directly affect cardiovascular state, while skin conductance (or “EDA,” electrodermal activity) remains unaffected. The study measured not only the ECG but also backed it up with PPG, an optical measure of heart rate (like the oxygen sensor on a smartwatch), because ECG signals can sometimes be made noisy by all the electrical equipment buzzing away in the operating room. Similarly, Subramanian backstopped the EDA measures with measures of skin temperature, to ensure that changes in skin conductance from sweat were due to nociception and not simply the patient being too warm. The study also tracked respiration.

Then the authors performed statistical analyses to develop physiologically relevant indices from each of the cardiovascular and skin conductance signals. And once each index was established, further statistical analysis enabled tracking the indices together to produce models that could make accurate, principled predictions of when nociception was occurring and the body’s response.

Nailing nociception

Subramanian “supervised” four versions of the model by feeding them information on when actual nociceptive stimuli occurred, so that they could learn the association between the physiological measurements and the incidence of pain-inducing events. In some of these trained versions she left out drug information, and in some she used different statistical approaches (either “linear regression” or “random forest”). In a fifth version of the model, based on a “state space” approach, she left it unsupervised, meaning it had to learn to infer moments of nociception purely from the physiological indices. She compared all five versions of her model to one of the current industry standards, an ECG-tracking model called ANI.
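A minimal sketch of the supervised, random-forest flavor of this setup appears below, using synthetic stand-in data: physiological indices and a drug-infusion feature as inputs, and a label marking whether a nociceptive stimulus occurred. Everything here, from the feature layout to the data itself, is illustrative rather than the study’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per minute of surgery.
# Columns (all illustrative): heart-rate index, EDA index, respiration
# index, and an opioid-infusion rate; the label marks whether a
# nociceptive stimulus (incision, cautery, ...) occurred that minute.
n_minutes = 2000
X = rng.normal(size=(n_minutes, 4))
stimulus = (X[:, 0] + 0.5 * X[:, 1] - 0.4 * X[:, 3]
            + rng.normal(scale=0.8, size=n_minutes)) > 0.5

X_train, X_test, y_train, y_test = train_test_split(
    X, stimulus, test_size=0.3, random_state=0)

# "Supervised" variant: the model learns the association between the
# physiological indices (plus drug information) and stimulus timing.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print(f"held-out AUC on synthetic data: {roc_auc_score(y_test, scores):.2f}")
```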

Each model’s output can be visualized as a graph plotting the predicted degree of nociception over time. ANI performed only slightly better than chance, though it does run in real time. The unsupervised model performed better than ANI, though not quite as well as the supervised models. The best performing of those was the one that incorporated drug information and used a “random forest” approach. Still, the authors note, the fact that the unsupervised model performed significantly better than chance suggests that there is indeed an objectively detectable signature of the body’s nociceptive state, even when looking across different patients.

“A state space framework using multisensory physiological observations is effective in uncovering this implicit nociceptive state with a consistent definition across multiple subjects,” wrote Subramanian, Brown, and their co-authors. “This is an important step toward defining a metric to track nociception without including nociceptive ‘ground truth’ information, most practical for scalability and implementation in clinical settings.”

Indeed, the next steps for the research are to increase the data sampling and to further refine the models so that they can eventually be put into practice in the operating room. That will require enabling them to predict nociception in real time, rather than in post-hoc analysis. Once that advance is made, anesthesiologists or intensivists will be able to use the models to inform their pain-drug dosing judgments. Further into the future, the models could inform closed-loop systems that automatically dose drugs under the anesthesiologist’s supervision.

“Our study is an important first step toward developing objective markers to track surgical nociception,” the authors concluded. “These markers will enable objective assessment of nociception in other complex clinical settings, such as the ICU [intensive care unit], as well as catalyze future development of closed-loop control systems for nociception.”

In addition to Subramanian and Brown, the paper’s other authors are Bryan Tseng, Marcela del Carmen, Annekathryn Goodman, Douglas Dahl, and Riccardo Barbieri.

Funding from The JPB Foundation; The Picower Institute; George J. Elbaum ’59, SM ’63, PhD ’67; Mimi Jensen; Diane B. Greene SM ’78; Mendel Rosenblum; Bill Swanson; Cathy and Lou Paglia; annual donors to the Anesthesia Initiative Fund; the National Science Foundation; and an MIT Office of Graduate Education Collamore-Rogers Fellowship supported the research.


AI model can reveal the structures of crystalline materials

By analyzing X-ray crystallography data, the model could help researchers develop new materials for many applications, including batteries and magnets.


For more than 100 years, scientists have been using X-ray crystallography to determine the structure of crystalline materials such as metals, rocks, and ceramics.

This technique works best when the crystal is intact, but in many cases, scientists have only a powdered version of the material, which contains random fragments of the crystal. This makes it more challenging to piece together the overall structure.

MIT chemists have now come up with a new generative AI model that can make it much easier to determine the structures of these powdered crystals. The prediction model could help researchers characterize materials for use in batteries, magnets, and many other applications.

“Structure is the first thing that you need to know for any material. It’s important for superconductivity, it’s important for magnets, it’s important for knowing what photovoltaic you created. It’s important for any application that you can think of which is materials-centric,” says Danna Freedman, the Frederick George Keyes Professor of Chemistry at MIT.

Freedman and Jure Leskovec, a professor of computer science at Stanford University, are the senior authors of the new study, which appears today in the Journal of the American Chemical Society. MIT graduate student Eric Riesel and Yale University undergraduate Tsach Mackey are the lead authors of the paper.

Distinctive patterns

Crystalline materials, which include metals and most other inorganic solid materials, are made of lattices that consist of many identical, repeating units. These units can be thought of as “boxes” with a distinctive shape and size, with atoms arranged precisely within them.

When X-rays are beamed at these lattices, the rays diffract off atoms at different angles and intensities, revealing information about the positions of the atoms and the bonds between them. Since the early 1900s, this technique has been used to analyze materials, including biological molecules that have a crystalline structure, such as DNA and some proteins.

For materials that exist only as a powdered crystal, solving these structures becomes much more difficult because the fragments don’t carry the full 3D structure of the original crystal.

“The precise lattice still exists, because what we call a powder is really a collection of microcrystals. So, you have the same lattice as a large crystal, but they’re in a fully randomized orientation,” Freedman says.

For thousands of these materials, X-ray diffraction patterns exist but remain unsolved. To try to crack the structures of these materials, Freedman and her colleagues trained a machine-learning model on data from a database called the Materials Project, which contains more than 150,000 materials. First, they fed tens of thousands of these materials into an existing model that can simulate what the X-ray diffraction patterns would look like. Then, they used those patterns to train their AI model, which they call Crystalyze, to predict structures based on the X-ray patterns.

The model breaks the process of predicting structures into several subtasks. First, it determines the size and shape of the lattice “box” and which atoms will go into it. Then, it predicts the arrangement of atoms within the box. For each diffraction pattern, the model generates several possible structures, which can be tested by feeding the structures into a model that determines diffraction patterns for a given structure.

“Our model is generative AI, meaning that it generates something that it hasn’t seen before, and that allows us to generate several different guesses,” Riesel says. “We can make a hundred guesses, and then we can predict what the powder pattern should look like for our guesses. And then if the input looks exactly like the output, then we know we got it right.”
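The generate-and-check loop Riesel describes can be sketched as follows; `generator` and `simulate_pattern` are placeholders for the actual Crystalyze components, which are not reproduced here, and the toy usage at the bottom exists only to show the control flow.

```python
import numpy as np

def solve_structure(measured_pattern, generator, simulate_pattern, n_guesses=100):
    """Generate-and-check loop sketched from the description above.

    `generator(pattern)` is assumed to propose a candidate crystal structure
    from a powder X-ray diffraction pattern, and `simulate_pattern(structure)`
    to compute the pattern that structure would produce.  Both are stand-ins
    for the real model components."""
    best_structure, best_error = None, np.inf
    for _ in range(n_guesses):
        candidate = generator(measured_pattern)
        predicted = simulate_pattern(candidate)
        # Compare simulated and measured patterns; if they agree closely,
        # the candidate is likely the right structure.
        error = np.mean((np.asarray(predicted) - np.asarray(measured_pattern)) ** 2)
        if error < best_error:
            best_structure, best_error = candidate, error
    return best_structure, best_error

# Toy usage with stand-in functions (random "structures" of three numbers).
rng = np.random.default_rng(1)
true_structure = np.array([1.0, 2.0, 3.0])
toy_simulate = lambda s: np.convolve(s, [0.25, 0.5, 0.25], mode="same")
toy_generate = lambda pattern: true_structure + rng.normal(scale=0.2, size=3)
target = toy_simulate(true_structure)
best, err = solve_structure(target, toy_generate, toy_simulate)
print("best candidate:", np.round(best, 2), "error:", round(float(err), 4))
```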

Solving unknown structures

The researchers tested the model on several thousand simulated diffraction patterns from the Materials Project. They also tested it on more than 100 experimental diffraction patterns from the RRUFF database, which contains powdered X-ray diffraction data for nearly 14,000 natural crystalline minerals; none of these patterns had been included in the training data. On these data, the model was accurate about 67 percent of the time. Then, they began testing the model on diffraction patterns that hadn’t been solved before. These data came from the Powder Diffraction File, which contains diffraction data for more than 400,000 solved and unsolved materials.

Using their model, the researchers came up with structures for more than 100 of these previously unsolved patterns. They also used their model to discover structures for three materials that Freedman’s lab created by forcing elements that do not react at atmospheric pressure to form compounds under high pressure. This approach can be used to generate new materials that have radically different crystal structures and physical properties, even though their chemical composition is the same.

Graphite and diamond — both made of pure carbon — are examples of such materials. The materials that Freedman has developed, which each contain bismuth and one other element, could be useful in the design of new materials for permanent magnets.

“We found a lot of new materials from existing data, and most importantly, solved three unknown structures from our lab that comprise the first new binary phases of those combinations of elements,” Freedman says.

Being able to determine the structures of powdered crystalline materials could help researchers working in nearly any materials-related field, according to the MIT team, which has posted a web interface for the model at crystalyze.org.

The research was funded by the U.S. Department of Energy and the National Science Foundation.


Improving biology education here, there, and everywhere

At the cutting edge of pedagogy, Mary Ellen Wiltrout has shaped blended and online learning at MIT and beyond.


When she was a child, Mary Ellen Wiltrout PhD ’09 didn’t want to follow in her mother’s footsteps as a K-12 teacher. Growing up in southwestern Pennsylvania, Wiltrout was studious with an early interest in science — and ended up pursuing biology as a career. 

But following her doctorate at MIT, she pivoted toward education after all. Now, as the director of blended and online initiatives and a lecturer with the Department of Biology, she’s shaping biology pedagogy at MIT and beyond.

Establishing MOOCs at MIT

To this day, E.C. Whitehead Professor of Biology and Howard Hughes Medical Institute (HHMI) investigator emeritus Tania Baker considers creating a permanent role for Wiltrout one of the most consequential decisions she made as department head.

Since launching the very first MITxBio massive open online course, 7.00x (Introduction to Biology – the Secret of Life), with professor of biology Eric Lander in 2013, Wiltrout’s team has worked with MIT Open Learning and biology faculty to build an award-winning repertoire of MITxBio courses.

MITxBio courses are currently hosted on the learning platform edX, established by MIT and Harvard University in 2012, which today connects 86 million people worldwide to online learning opportunities. Within MITxBio, Wiltrout leads a team of instructional staff and students to develop online learning experiences for MIT students and the public while researching effective methods for learner engagement and course design.

“Mary Ellen’s approach has an element of experimentation that embodies a very MIT ethos: applying rigorous science to creatively address challenges with far-reaching impact,” says Darcy Gordon, instructor of blended and online initiatives.

Mentee to motivator

Wiltrout was inspired to pursue both teaching and research by the late geneticist Elizabeth “Beth” Jones at Carnegie Mellon University, where Wiltrout earned a degree in biological sciences and served as a teaching assistant in lab courses.

“I thought it was a lot of fun to work with students, especially at the higher level of education, and especially with a focus on biology,” Wiltrout recalls, noting she developed her love of teaching in those early experiences.

Though her research advisor at the time discouraged her from teaching, Jones assured Wiltrout that it was possible to pursue both.

Jones, who received her postdoctoral training with late Professor Emeritus Boris Magasanik at MIT, encouraged Wiltrout to apply to the Institute and join American Cancer Society and HHMI Professor Graham Walker’s lab. In 2009, Wiltrout earned a PhD in biology for thesis work in the Walker lab, where she continued to learn from enthusiastic mentors.

“When I joined Graham’s lab, everyone was eager to teach and support a new student,” she reflects. After watching Walker aid a struggling student, Wiltrout was further affirmed in her choice. “I knew I could go to Graham if I ever needed to.”

After graduation, Wiltrout taught molecular biology at Harvard for a few years until Baker facilitated her move back to MIT. Now, she’s a resource for faculty, postdocs, and students.

“She is an incredibly rich source of knowledge for everything from how to implement the increasingly complex tools for running a class to the best practices for ensuring a rigorous and inclusive curriculum,” says Iain Cheeseman, the Herman and Margaret Sokol Professor of Biology and associate head of the biology department.

Stephen Bell, the Uncas and Helen Whitaker Professor of Biology and instructor of the Molecular Biology series of MITxBio courses, notes Wiltrout is known for staying on the “cutting edge of pedagogy.”

“She has a comprehensive knowledge of new online educational tools and is always ready to help any professor to implement them in any way they wish,” he says.

Gordon finds Wiltrout’s experiences as a biologist and learning engineer instrumental to her own professional development and a model for their colleagues in science education.

“Mary Ellen has been an incredibly supportive supervisor. She facilitates a team environment that centers on frequent feedback and iteration,” says Tyler Smith, instructor for pedagogy training and biology.

Prepared for the pandemic, and beyond

Wiltrout believes blended learning, combining in-person and online components, is the best path forward for education at MIT. Building personal relationships in the classroom is critical, but online material and supplemental instruction are also key to providing immediate feedback, formative assessments, and other evidence-based learning practices.

“A lot of people have realized that they can’t ignore online learning anymore,” Wiltrout noted during an interview on The Champions Coffee Podcast in 2023. That couldn’t have been truer than in 2020, when academic institutions were forced to suddenly shift to virtual learning.

“When Covid hit, we already had all the infrastructure in place,” Baker says. “Mary Ellen helped not just our department, but also contributed to MIT education’s survival through the pandemic.”

For her efforts, Wiltrout received a COVID-19 Hero Award, a recognition from the School of Science for staff members who went above and beyond during that extraordinarily difficult time.

“Mary Ellen thinks deeply about how to create the best learning opportunities possible,” says Cheeseman, one of almost a dozen faculty members who nominated her for the award.

Recently, Wiltrout expanded beyond higher education and into high schools, taking on several interns in collaboration with Empowr, a nonprofit organization that teaches software development skills to Black students to create a school-to-career pipeline. Wiltrout is proud to report that one of these interns is now a student at MIT in the class of 2028.

Looking forward, Wiltrout aims to stay ahead of the curve with the latest educational technology and is excited to see how modern tools can be incorporated into education.

“Everyone is pretty certain that generative AI is going to change education,” she says. “We need to be experimenting with how to take advantage of technology to improve learning.”

Ultimately, she is grateful to continue developing her career at MIT biology.

“It’s exciting to come back to the department after being a student and to work with people as colleagues to produce something that has an impact on what they’re teaching current MIT students and sharing with the world for further reach,” she says.

As for Wiltrout’s own daughter, she’s declared she would like to follow in her mother’s footsteps — a fitting symbol of Wiltrout’s impact on the future of education.


Liftoff: The Climate Project at MIT takes flight

The major effort to accelerate practical climate change solutions launches as its mission directors meet the Institute community.


The leaders of The Climate Project at MIT met with community members at a campus forum on Monday, helping to kick off the Institute’s major new effort to accelerate and scale up climate change solutions.

“The Climate Project is a whole-of-MIT mobilization,” MIT President Sally Kornbluth said in her opening remarks. “It’s designed to focus the Institute’s talent and resources so that we can achieve much more, faster, in terms of real-world impact, from mitigation to adaptation.”

The event, “Climate Project at MIT: Launching the Missions,” drew a capacity crowd to MIT’s Samberg Center.

While the Climate Project has a number of facets, a central component of the effort consists of its six “missions,” broad areas where MIT researchers will seek to identify gaps in the global climate response that MIT can help fill, and then launch and execute research and innovation projects aimed at those areas. Each mission is led by campus faculty, and Monday’s event represented the first public conversation between the mission directors and the larger campus community.

“Today’s event is an important milestone,” said Richard Lester, MIT’s interim vice president for climate and the Japan Steel Industry Professor of Nuclear Science and Engineering, who led the Climate Project’s formation. He praised Kornbluth’s sustained focus on climate change as a leading priority for MIT.

“The reason we’re all here is because of her leadership and vision for MIT,” Lester said. “We’re also here because the MIT community — our faculty, our staff, our students — has made it abundantly clear that it wants to do more, much more, to help solve this great problem.”

The mission directors themselves emphasized the need for deep community involvement in the project — and that the Climate Project is designed to facilitate researcher-driven enterprise across campus.

“There’s a tremendous amount of urgency,” said Elsa Olivetti PhD ’07, director of the Decarbonizing Energy and Industry mission, during an onstage discussion. “We all need to do everything we can, and roll up our sleeves and get it done.” Olivetti, the Jerry McAfee Professor in Engineering, has been a professor of materials science and engineering at the Institute since 2014.

“What’s exciting about this is the chance of MIT really meeting its potential,” said Jesse Kroll, co-director of the mission for Restoring the Atmosphere, Protecting the Land and Oceans. Kroll is the Peter de Florez Professor in MIT’s Department of Civil and Environmental Engineering, a professor of chemical engineering, and the director of the Ralph M. Parsons Laboratory.

MIT, Kroll noted, features “so much amazing work going on in all these different aspects of the problem. Science, engineering, social science … we put it all together and there is huge potential, a huge opportunity for us to make a difference.”

MIT has pledged an initial $75 million to the Climate Project, including $25 million from the MIT Sloan School of Management for a complementary effort, the MIT Climate Policy Center. However, the Institute is anticipating that it will also build new connections with outside partners, whose role in implementing and scaling Climate Project solutions will be critical.

Monday’s event included a keynote talk from Brian Deese, currently the MIT Innovation and Climate Impact Fellow and the former director of the White House National Economic Council in the Biden administration.

“The magnitude of the risks associated with climate change are extraordinary,” Deese said. However, he added, “these are solvable issues. In fact, the energy transition globally will be the greatest economic opportunity in human history. … It has the potential to actually lift people out of poverty, it has the potential to drive international cooperation, it has the potential to drive innovation and improve lives — if we get this right.”

Deese’s remarks centered on a call for the U.S. to develop a current-day climate equivalent of the Marshall Plan, the U.S. initiative to provide aid to Western Europe after World War II. He also suggested three characteristics of successful climate projects, noting that many would be interdisciplinary in nature and would “engage with policy early in the design process” to become feasible.

In addition to those features, Deese said, people need to “start and end with very high ambition” when working on climate solutions. He added: “The good thing about MIT and our community is that we, you, have done this before. We’ve got examples where MIT has taken something that seemed completely improbable and made it possible, and I believe that part of what is required of this collective effort is to keep that kind of audacious thinking at the top of our mind.”

The MIT mission directors all participated in an onstage discussion moderated by Somini Sengupta, the international climate reporter on the climate team of The New York Times. Sengupta asked the group about a wide range of topics, from their roles and motivations to the political constraints on global climate progress, and more.

Andrew Babbin, co-director of the mission for Restoring the Atmosphere, Protecting the Land and Oceans, defined part of the task of the MIT missions as “identifying where those gaps of knowledge are and filling them rapidly,” something he believes is “largely not doable in the conventional way,” based on small-scale research projects. Instead, suggested Babbin, who is the Cecil and Ida Green Career Development Professor in MIT’s Program in Atmospheres, Oceans, and Climate, the collective input of research and innovation communities could help zero in on undervalued approaches to climate action.

Some innovative concepts, the mission directors noted, can be tried out on the MIT campus, in an effort to demonstrate how a more sustainable infrastructure and systems can operate at scale.

“That is absolutely crucial,” said Christoph Reinhart, director of the Building and Adapting Healthy, Resilient Cities mission, expressing the need to have the campus reach net-zero emissions. Reinhart is the Alan and Terri Spoon Professor of Architecture and Climate and director of MIT’s Building Technology Program in the School of Architecture and Planning.

In response to queries from Sengupta, the mission directors affirmed that the Climate Project needs to develop solutions that can work in different societies around the world, while acknowledging that there are many political hurdles to worldwide climate action.

“Any kind of quality engaged projects that we’ve done with communities, it’s taken years to build trust. … How you scale that without compromising is the challenge I’m faced with,” said Miho Mazereeuw, director of the Empowering Frontline Communities mission, an associate professor of architecture and urbanism, and director of MIT’s Urban Risk Lab.

“I think we will impact different communities in different parts of the world in different ways,” said Benedetto Marelli, an associate professor in MIT’s Department of Civil and Environmental Engineering, adding that it would be important to “work with local communities [and] engage stakeholders, and at the same time, use local brains to solve the problem.” The mission he directs, Wild Cards, is centered on identifying unconventional solutions that are high risk and also high reward.

Any climate program “has to be politically feasible, it has to be in separate nations’ self-interest,” said Christopher Knittel, mission director for Inventing New Policy Approaches. In an ever-shifting political world, he added, that means people must “think about not just the policy but the resiliency of the policy.” Knittel is the George P. Shultz Professor and professor of applied economics at the MIT Sloan School of Management, director of the MIT Climate Policy Center, and associate dean for Climate and Sustainability.

In all, MIT has more than 300 faculty and senior researchers who, along with their students and staff, are already working on climate issues.

Kornbluth, for her part, referred to MIT’s first-year students while discussing the larger motivations for taking concerted action to address the challenges of climate change. It might be easy for younger people to despair over the world’s climate trajectory, she noted, but the best response to that includes seeking new avenues for climate progress.

“I understand their anxiety and concern,” Kornbluth said. “But I have no doubt at all that together, we can make a difference. I believe that we have a special obligation to the new students and their entire generation to do everything we can to create a positive change. The most powerful antidote to defeat and despair is collective action.”


Bridging the heavens and Earth

EAPS PhD student Jared Bryan found a way to use his research on earthquakes to help understand exoplanet migration.


When Jared Bryan talks about his seismology research, it’s with a natural finesse. He’s a fifth-year PhD student working with MIT Assistant Professor William Frank on seismology research, drawn in by the lab’s combination of GPS observations, satellite data, and seismic station measurements to understand the underlying physics of earthquakes. He has no trouble talking about seismic velocity in fault zones or how he first became interested in the field after summer internships with the Southern California Earthquake Center as an undergraduate student.

“It’s definitely like a more down-to-earth kind of seismology,” he says, jokingly. It’s an odd comment. Where else could earthquakes be but on Earth? But the quip makes sense: Bryan has just finished a research project that has culminated in a new paper — published today in Nature Astronomy — involving seismic activity not on Earth, but on stars.

Building curiosity

PhD students in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) are required to complete two research projects as part of their general exam. The first is often in their main focus of research and the foundations of what will become their thesis work.

But the second project has a special requirement: It must be in a different specialty.

“Having that built into the structure of the PhD is really, really nice,” says Bryan, who hadn’t known about the special requirement when he decided to come to EAPS. “I think it helps you build curiosity and find what's interesting about what other people are doing.”

Having so many different, yet still related, fields of study housed in one department makes it easier for students with a strong sense of curiosity to explore the interconnected interactions of Earth science.

“I think everyone here is excited about a lot of different stuff, but we can’t do everything,” says Frank, the Victor P. Starr Career Development Professor of Geophysics. “This is a great way to get students to try something else that they maybe would have wanted to do in a parallel dimension, interact with other advisors, and see that science can be done in different ways.”

At first, Bryan was worried that the nature of the second project would be a restrictive diversion from his main PhD research. But Associate Professor Julien de Wit was looking for someone with a seismology background to look at some stellar observations he’d collected back in 2016. A star’s brightness was pulsating at a very specific frequency that had to be caused by changes in the star itself, so Bryan decided to help.

“I was surprised by how the kind of seismology that he was looking for was similar to the seismology that we were first doing in the ’60s and ’70s, like large-scale global Earth seismology,” says Bryan. “I thought it would be a way to rethink the foundations of the field that I had been studying applied to a new region.”

Going from earthquakes to starquakes is not a one-to-one comparison. While the foundational knowledge carries over, the movement of stars arises from a variety of sources, like magnetism or the Coriolis effect, and takes a variety of forms. In addition to the sound and pressure waves familiar from earthquakes, stars also host gravity waves, and all of it happens on a far more massive scale.

“You have to stretch your mind a bit, because you can’t actually visit these places,” Bryan says. “It’s an unbelievable luxury that we have in Earth seismology that the things that we study are on Google Maps.”

But there are benefits to bringing in scientists from outside an area of expertise. De Wit, who served as Bryan’s supervisor for the project and is also an author on the paper, points out that they bring a fresh perspective and approach by asking unique questions.

“Things that people in the field would just take for granted are challenged by their questions,” he says, adding that Bryan was transparent about what he did and didn’t know, allowing for a rich exchange of information.

Tidal resonance locking

Bryan eventually found that the changes in the star’s brightness were caused by tidal resonance. Resonance is a physical occurrence where waves interact and amplify each other. The most common analogy is pushing someone on a swing set; when the person pushing does it at just the right time, it helps the person on the swing go higher.

“Tidal resonance is where you’re pushing at exactly the same frequency as they’re swinging, and the locking happens when both of those frequencies are changing,” Bryan explains. The person pushing the swing gets tired and pushes less often, while the chains of the swing change length. (Bryan jokes that here the analogy starts to break down.)

As a star changes over the course of its lifetime, tidal resonance locking can cause hot Jupiters, which are massive exoplanets that orbit very close to their host stars, to change orbital distances. This wandering migration, as they call it, explains how some hot Jupiters get so close to their host stars. The researchers also found that the path these planets take to get there is not always smooth. It can speed up, slow down, or even regress.

An important implication from the paper is that tidal resonance locking could be used as an exoplanet detection tool, confirming de Wit’s hypothesis from the original 2016 observation that the pulsations had the potential to be used in such a way. If changes in the star’s brightness can be linked to this resonance locking, it may indicate planets that can’t be detected using current methods.

As below, so above

Most EAPS PhD students don’t advance their project beyond the requirements for the general exam, let alone get a paper out of it. At first, Bryan worried that continuing with it would end up being a distraction from his main work, but ultimately was glad that he committed to it and was able to contribute something meaningful to the emerging field of asteroseismology.

“I think it’s evidence that Jared is excited about what he does and has the drive and scientific skepticism to have done the extra steps to make sure that what he was doing was a real contribution to the scientific literature,” says Frank. “He’s a great example of success and what we hope for our students.”

While de Wit didn’t manage to convince Bryan to switch to exoplanet research permanently, he is “excited that there is the opportunity to keep on working together.”

Once he finishes his PhD, Bryan plans on continuing in academia as a professor running a research lab, shifting his focus onto volcano seismology and improving instrumentation for the field. He’s open to the possibility of taking his findings on Earth and applying them to volcanoes on other planetary bodies, such as those found on Venus and Jupiter’s moon Io.

“I’d like to be the bridge between those two things,” he says.


MIT OpenCourseWare sparks the joy of deep understanding

With the help of MIT’s online resources, Doğa Kürkçüoğlu, now a staff scientist at Fermilab, was able to pursue his passion for physics.


From a young age, Doğa Kürkçüoğlu heard his father, a math teacher, say that learning should be about understanding and real-world applications rather than memorization. But it wasn’t until he began exploring MIT OpenCourseWare in 2004 that Kürkçüoğlu experienced what it means to truly understand complex subject matter.

“MIT professors showed me how to look at a concept from different angles that I hadn’t before, and that helped me internalize information,” says Kürkçüoğlu, who turned to MIT OpenCourseWare to supplement what he was learning as an undergraduate studying physics. “Once I understood techniques and concepts, I was able to apply them in different disciplines. Even now, there are many equations I don’t have memorized exactly, but because I understand the underlying ideas, I can derive them myself in just a few minutes.”

Though there was a point in his life when friends and classmates thought he might pursue music, Kürkçüoğlu — a skilled violinist who currently plays in a jazz band on the side — always had a passion for math and physics and was determined to learn everything he could to pursue the career he imagined for himself.

“Even when I was 4 or 5 years old, if someone asked me, ‘what do you want to be when you grow up?’ I would say a scientist or mathematician,” says Kürkçüoğlu, who is now a staff scientist at Fermilab in the Superconducting Quantum Materials and Systems Center. Fermilab is the U.S. Department of Energy laboratory for particle physics and accelerator research. “I feel lucky that I actually get to do the job I imagined as a little kid,” Kürkçüoğlu says.

OpenCourseWare and other resources from MIT Open Learning — including courses, lectures, written guides, and problem sets — played an important role in Kürkçüoğlu’s learning journey and career. He turned to these open educational resources throughout his undergraduate studies at Marmara University in Turkey. When he completed his degree in 2008, Kürkçüoğlu set his sights on a PhD. He says he felt ready to dive right into doctoral-level research thanks to so many MIT OpenCourseWare lectures, courses, and study guides. He started a PhD program at Georgia Tech, where his research focused on theoretical condensed matter physics with ultra-cold atoms.

“Without OpenCourseWare, I could not have done that,” he says, adding that he considers himself “an honorary MIT graduate.”

Memorable courses include particle physics with Iain W. Stewart, who holds the Otto (1939) and Jane Morningstar Professorship in Science and is a professor of physics and director of the Center for Theoretical Physics; and Statistical Mechanics of Fields with Mehran Kardar, professor of physics. Learning from Kardar felt especially apt, because Kürkçüoğlu’s undergraduate advisor, Nihat Berker, was Kardar’s PhD advisor. Berker is also an emeritus professor of physics at MIT.

Once he completed his PhD in 2015, Kürkçüoğlu spent time as an assistant professor at Georgia Southern University and a postdoc at Los Alamos National Laboratory. He joined Fermilab in 2020. There, he works on quantum theory and quantum algorithms. He enjoys the research-focused atmosphere of a national laboratory, where teams of scientists are working toward tangible goals.

When he was teaching, though, he encouraged his students to check out Open Learning resources.

“I would tell them, first of all, to have fun. Learning should be fun — another idea that my father always encouraged as a math teacher. With OpenCourseWare, you can get a new perspective on something you already know about, or open a course that can expand your horizons,” Kürkçüoğlu says. “Depending on where you start, it might take you an hour, a week, or a month to fully understand something. Once you understand, it’s yours. It is a different kind of joy to actually, truly understand.”


A wobble from Mars could be sign of dark matter, MIT study finds

Watching for changes in the Red Planet’s orbit over time could be a new way to detect passing dark matter.


In a new study, MIT physicists propose that if most of the dark matter in the universe is made up of microscopic primordial black holes — an idea first proposed in the 1970s — then these gravitational dwarfs should zoom through our solar system at least once per decade. A flyby like this, the researchers predict, would introduce a wobble into Mars’ orbit, to a degree that today’s technology could actually detect.

Such a detection could lend support to the idea that primordial black holes are a primary source of dark matter throughout the universe.

“Given decades of precision telemetry, scientists know the distance between Earth and Mars to an accuracy of about 10 centimeters,” says study author David Kaiser, professor of physics and the Germeshausen Professor of the History of Science at MIT. “We’re taking advantage of this highly instrumented region of space to try and look for a small effect. If we see it, that would count as a real reason to keep pursuing this delightful idea that all of dark matter consists of black holes that were spawned in less than a second after the Big Bang and have been streaming around the universe for 14 billion years.”

Kaiser and his colleagues report their findings today in the journal Physical Review D. The study’s co-authors are lead author Tung Tran ’24, who is now a graduate student at Stanford University; Sarah Geller ’12, SM ’17, PhD ’23, who is now a postdoc at the University of California at Santa Cruz; and MIT Pappalardo Fellow Benjamin Lehmann.

Beyond particles

Less than 20 percent of all physical matter is made from visible stuff, from stars and planets to the kitchen sink. The rest is composed of dark matter, a hypothetical form of matter that is invisible across the entire electromagnetic spectrum yet is thought to pervade the universe and exert a gravitational force large enough to affect the motion of stars and galaxies.

Physicists have erected detectors on Earth to try and spot dark matter and pin down its properties. For the most part, these experiments assume that dark matter exists as a form of exotic particle that might scatter and decay into observable particles as it passes through a given experiment. But so far, such particle-based searches have come up empty.

In recent years, another possibility, first introduced in the 1970s, has regained traction: Rather than taking on a particle form, dark matter could exist as microscopic, primordial black holes that formed in the first moments following the Big Bang. Unlike the astrophysical black holes that form from the collapse of old stars, primordial black holes would have formed from the collapse of dense pockets of gas in the very early universe and would have scattered across the cosmos as the universe expanded and cooled.

These primordial black holes would have collapsed an enormous amount of mass into a tiny space. The majority of these primordial black holes could be as small as a single atom and as heavy as the largest asteroids. It would be conceivable, then, that such tiny giants could exert a gravitational force that could explain at least a portion of dark matter. For the MIT team, this possibility raised an initially frivolous question.

“I think someone asked me what would happen if a primordial black hole passed through a human body,” recalls Tung, who did a quick pencil-and-paper calculation to find that if such a black hole zinged within 1 meter of a person, the force of the black hole would push the person 6 meters, or about 20 feet, in a single second. Tung also found that the odds were astronomically unlikely that a primordial black hole would pass anywhere near a person on Earth.
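Tung's estimate can be reproduced with a standard impulse approximation, in which a compact mass M passing at distance b with speed v imparts a velocity kick of roughly 2GM/(bv). The numbers below are assumptions chosen for illustration (a black hole mass of about 10^16 kilograms, in the asteroid-mass range the team considers, and the roughly 150-mile-per-second speed quoted later in the article); they are not taken from the paper.

```python
# Illustrative order-of-magnitude sketch (not the authors' calculation):
# velocity kick on a nearby person from a primordial black hole flyby,
# using the impulse approximation dv ~ 2*G*M / (b*v).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1e16             # assumed black hole mass in kg (asteroid-mass range)
b = 1.0              # assumed closest approach to the person, in meters
v = 150 * 1609.34    # assumed flyby speed, about 150 miles per second, in m/s

dv = 2 * G * M / (b * v)   # velocity imparted to the person, m/s
print(f"velocity kick of roughly {dv:.1f} m/s")
# Comes out to roughly 5-6 m/s, i.e. the person would be carried about
# 6 meters (about 20 feet) within a second, in line with the article's figure.
```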

Their interest piqued, the researchers took Tung’s calculations a step further, to estimate how a black hole flyby might affect much larger bodies such as the Earth and the moon.

“We extrapolated to see what would happen if a black hole flew by Earth and caused the moon to wobble by a little bit,” Tung says. “The numbers we got were not very clear. There are many other dynamics in the solar system that could act as some sort of friction to cause the wobble to dampen out.”

Close encounters

To get a clearer picture, the team generated a relatively simple simulation of the solar system that incorporates the orbits and gravitational interactions between all the planets, and some of the largest moons.

“State-of-the-art simulations of the solar system include more than a million objects, each of which has a tiny residual effect,” Lehmann notes. “But even modeling two dozen objects in a careful simulation, we could see there was a real effect that we could dig into.”

The team worked out the rate at which a primordial black hole should pass through the solar system, based on the amount of dark matter that is estimated to reside in a given region of space and the mass of a passing black hole, which, in this case, they assumed to be as massive as the largest asteroids in the solar system, consistent with other astrophysical constraints.

“Primordial black holes do not live in the solar system. Rather, they’re streaming through the universe, doing their own thing,” says co-author Sarah Geller. “And the probability is, they’re going through the inner solar system at some angle once every 10 years or so.”
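That once-per-decade figure can be sanity-checked with a simple rate estimate, rate ≈ n·σ·v, where n is the number density of primordial black holes, σ is the cross-sectional area of the inner solar system, and v is a typical dark matter speed. The sketch below uses assumed round numbers (a local dark matter density of about 0.4 GeV per cubic centimeter, a black hole mass of about 10^16 kilograms, and a 2-astronomical-unit radius); these are illustrative choices, not values from the paper.

```python
# Rough consistency check with assumed round numbers (not the paper's
# calculation): how often should an asteroid-mass primordial black hole
# cross the inner solar system?  rate ~ n * sigma * v.
import math

GEV_PER_CM3_TO_KG_PER_M3 = 1.783e-27 * 1e6  # convert GeV/cm^3 to kg/m^3

rho_dm = 0.4 * GEV_PER_CM3_TO_KG_PER_M3     # assumed local dark matter density
m_pbh  = 1e16                               # assumed black hole mass, kg
n_pbh  = rho_dm / m_pbh                     # black hole number density, m^-3

au    = 1.496e11                            # astronomical unit, m
sigma = math.pi * (2 * au) ** 2             # inner solar system cross-section (2 AU radius)
v     = 150 * 1609.34                       # typical dark matter speed, m/s

rate_per_second = n_pbh * sigma * v
years_between_crossings = 1 / (rate_per_second * 3.156e7)
print(f"about one crossing every {years_between_crossings:.0f} years")
# Order of a decade, consistent with the estimate quoted above.
```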

Given this rate, the researchers simulated various asteroid-mass black holes flying through the solar system, from various angles, and at velocities of about 150 miles per second. (The directions and speeds come from other studies of the distribution of dark matter throughout our galaxy.) They zeroed in on those flybys that appeared to be “close encounters,” or instances that caused some sort of effect in surrounding objects. They quickly found that any effect in the Earth or the moon was too uncertain to pin to a particular black hole. But Mars seemed to offer a clearer picture.

The researchers found that if a primordial black hole were to pass within a few hundred million miles of Mars, the encounter would set off a “wobble,” or a slight deviation in Mars’ orbit. Within a few years of such an encounter, Mars’ orbit should shift by about a meter — an incredibly small wobble, given the planet is more than 140 million miles from Earth. And yet, this wobble could be detected by the various high-precision instruments that are monitoring Mars today.

If such a wobble were detected in the next couple of decades, the researchers acknowledge there would still be much work needed to confirm that the push came from a passing black hole rather than a run-of-the-mill asteroid.

“We need as much clarity as we can of the expected backgrounds, such as the typical speeds and distributions of boring space rocks, versus these primordial black holes,” Kaiser notes. “Luckily for us, astronomers have been tracking ordinary space rocks for decades as they have flown through our solar system, so we could calculate typical properties of their trajectories and begin to compare them with the very different types of paths and speeds that primordial black holes should follow.”

To help with this, the researchers are exploring the possibility of a new collaboration with a group that has extensive expertise simulating many more objects in the solar system.

“We are now working to simulate a huge number of objects, from planets to moons and rocks, and how they’re all moving over long time scales,” Geller says. “We want to inject close encounter scenarios, and look at their effects with higher precision.”

“It’s a very neat test they’ve proposed, and it could tell us if the closest black hole is closer than we realize,” says Matt Caplan, associate professor of physics at Illinois State University, who was not involved in the study. “I should emphasize there’s a little bit of luck involved too. Whether or not a search finds a loud and clear signal depends on the exact path a wandering black hole takes through the solar system. Now that they’ve checked this idea with simulations, they have to do the hard part — checking the real data.”

This work was supported in part by the U.S. Department of Energy and the U.S. National Science Foundation, which includes an NSF Mathematical and Physical Sciences postdoctoral fellowship.


Finding some stability in adaptable brains

New research suggests neurons protect and preserve certain information through a dedicated zone of stable synapses.


One of the brain’s most celebrated qualities is its adaptability. Changes to neural circuits, whose connections are continually adjusted as we experience and interact with the world, are key to how we learn. But to keep knowledge and memories intact, some parts of the circuitry must be resistant to this constant change.

“Brains have figured out how to navigate this landscape of balancing between stability and flexibility, so that you can have new learning and you can have lifelong memory,” says neuroscientist Mark Harnett, an investigator at MIT’s McGovern Institute for Brain Research. In the Aug. 27 issue of the journal Cell Reports, Harnett and his team show how individual neurons can contribute to both parts of this vital duality. By studying the synapses through which pyramidal neurons in the brain’s sensory cortex communicate, they have learned how the cells preserve their understanding of some of the world’s most fundamental features, while also maintaining the flexibility they need to adapt to a changing world.

Visual connections

Pyramidal neurons receive input from other neurons via thousands of connection points. Early in life, these synapses are extremely malleable; their strength can shift as a young animal takes in visual information and learns to interpret it. Most remain adaptable into adulthood, but Harnett’s team discovered that some of the cells’ synapses lose their flexibility when the animals are less than a month old. Having both stable and flexible synapses means these neurons can combine input from different sources to use visual information in flexible ways.

Postdoc Courtney Yaeger took a close look at these unusually stable synapses, which cluster together along a narrow region of the elaborately branched pyramidal cells. She was interested in the connections through which the cells receive primary visual information, so she traced their connections with neurons in a vision-processing center of the brain’s thalamus called the dorsal lateral geniculate nucleus (dLGN).

The long extensions through which a neuron receives signals from other cells are called dendrites, and they branch off from the main body of the cell into a tree-like structure. Spiny protrusions along the dendrites form the synapses that connect pyramidal neurons to other cells. Yaeger’s experiments showed that connections from the dLGN all led to a defined region of the pyramidal cells — a tight band within what she describes as the trunk of the dendritic tree.

Yaeger found several ways in which synapses in this region — formally known as the apical oblique dendrite domain — differ from other synapses on the same cells. “They’re not actually that far away from each other, but they have completely different properties,” she says.

Stable synapses

In one set of experiments, Yaeger activated synapses on the pyramidal neurons and measured the effect on the cells’ electrical potential. Changes to a neuron’s electrical potential generate the impulses the cells use to communicate with one another. It is common for a synapse’s electrical effects to amplify when synapses nearby are also activated. But when signals were delivered to the apical oblique dendrite domain, each one had the same effect, no matter how many synapses were stimulated. Synapses there don’t interact with one another at all, Harnett says. “They just do what they do. No matter what their neighbors are doing, they all just do kind of the same thing.”

The team was also able to visualize the molecular contents of individual synapses. This revealed a surprising lack of a certain kind of neurotransmitter receptor, called NMDA receptors, in the apical oblique dendrites. That was notable because of NMDA receptors’ role in mediating changes in the brain. “Generally when we think about any kind of learning and memory and plasticity, it’s NMDA receptors that do it,” Harnett says. “That is by far the most common substrate of learning and memory in all brains.”

When Yaeger stimulated the apical oblique synapses with electricity, generating patterns of activity that would strengthen most synapses, the team discovered a consequence of the limited presence of NMDA receptors. The synapses’ strength did not change. “There’s no activity-dependent plasticity going on there, as far as we have tested,” Yaeger says.

That makes sense, the researchers say, because the cells’ connections from the thalamus relay primary visual information detected by the eyes. It is through these connections that the brain learns to recognize basic visual features like shapes and lines.

“These synapses are basically a robust, high-fidelity readout of this visual information,” Harnett explains. “That’s what they’re conveying, and it’s not context-sensitive. So it doesn’t matter how many other synapses are active, they just do exactly what they’re going to do, and you can’t modify them up and down based on activity. So they’re very, very stable.”

“You actually don’t want those to be plastic,” adds Yaeger. “Can you imagine going to sleep and then forgetting what a vertical line looks like? That would be disastrous.”

By conducting the same experiments in mice of different ages, the researchers determined that the synapses that connect pyramidal neurons to the thalamus become stable a few weeks after young mice first open their eyes. By that point, Harnett says, they have learned everything they need to learn. On the other hand, if mice spend the first weeks of their lives in the dark, the synapses never stabilize — further evidence that the transition depends on visual experience.

The team’s findings not only help explain how the brain balances flexibility and stability; they could help researchers teach artificial intelligence how to do the same thing. Harnett says artificial neural networks are notoriously bad at this: when an artificial neural network that does something well is trained to do something new, it almost always experiences “catastrophic forgetting” and can no longer perform its original task. Harnett’s team is exploring how they can use what they’ve learned about real brains to overcome this problem in artificial networks.


A new way to reprogram immune cells and direct them toward anti-tumor immunity

MIT scientists’ discovery yields a potent immune response and could be used to develop a potential tumor vaccine.


A collaboration between four MIT groups, led by principal investigators Laura L. Kiessling, Jeremiah A. Johnson, Alex K. Shalek, and Darrell J. Irvine, in conjunction with a group at Georgia Tech led by M.G. Finn, has revealed a new strategy for enabling immune system mobilization against cancer cells. The work, which appears today in ACS Nano, produces exactly the type of anti-tumor immunity needed to function as a tumor vaccine — both prophylactically and therapeutically.

Cancer cells can look very similar to the human cells from which they are derived. In contrast, viruses, bacteria, and fungi carry carbohydrates on their surfaces that are markedly different from those found on human cells. Dendritic cells — the immune system’s best antigen-presenting cells — carry proteins on their surfaces that help them recognize these atypical carbohydrates and bring the associated antigens inside. The antigens are then processed into smaller peptides and presented to the immune system for a response. Intriguingly, some of these carbohydrate-binding proteins can also collaborate to direct immune responses. This work presents a strategy for targeting those antigens to the dendritic cells in a way that results in a more activated, stronger immune response.

Tackling tumors’ tenacity

The researchers’ new strategy shrouds the tumor antigens with foreign carbohydrates and co-delivers them with single-stranded RNA so that the dendritic cells can be programmed to recognize the tumor antigens as a potential threat. The researchers targeted the lectin (carbohydrate-binding protein) DC-SIGN because of its ability to serve as an activator of dendritic cell immunity. They decorated a virus-like particle (a particle composed of virus proteins assembled onto a piece of RNA that is noninfectious because its internal RNA is not from the virus) with DC-binding carbohydrate derivatives. The resulting glycan-costumed virus-like particles display unique sugars; therefore, the dendritic cells recognize them as something they need to attack.

“On the surface of the dendritic cells are carbohydrate binding proteins called lectins that combine to the sugars on the surface of bacteria or viruses, and when they do that they penetrate the membrane,” explains Kiessling, the paper’s senior author. “On the cell, the DC-SIGN gets clustered upon binding the virus or bacteria and that promotes internalization. When a virus-like particle gets internalized, it starts to fall apart and releases its RNA.” The toll-like receptor (bound to RNA) and DC-SIGN (bound to the sugar decoration) can both signal to activate the immune response.

Once the dendritic cells have sounded the alarm of a foreign invasion, a robust immune response is triggered, one significantly stronger than would be expected with a typical untargeted vaccine. When dendritic cells encounter an antigen, they send signals to T cells, the next cell type in the immune response, prompting different responses depending on which pathways have been activated in the dendritic cells.

Advancing cancer vaccine development

The activity of a potential vaccine developed in line with this new research is twofold. First, the vaccine glycan coat binds to lectins, providing a primary signal. Then, binding to toll-like receptors elicits potent immune activation.

The Kiessling, Finn, and Johnson groups had previously identified a synthetic DC-SIGN binding group that directed cellular immune responses when used to decorate virus-like particles. But it was unclear whether this method could be utilized as an anticancer vaccine. Collaboration between researchers in the labs at MIT and Georgia Tech demonstrated that in fact, it could.

Valerie Lensch, a chemistry PhD student from MIT’s Program in Polymers and Soft Matter and a joint member of the Kiessling and Johnson labs, took the preexisting strategy and tested it as an anticancer vaccine, learning a great deal about immunology in order to do so.

“We have developed a modular vaccine platform designed to drive antigen-specific cellular immune responses,” says Lensch. “This platform is not only pivotal in the fight against cancer, but also offers significant potential for combating challenging intracellular pathogens, including malaria parasites, HIV, and Mycobacterium tuberculosis. This technology holds promise for tackling a range of diseases where vaccine development has been particularly challenging.”

Lensch and her fellow researchers conducted in vitro experiments with extensive iterations of these glycan-costumed virus-like particles before identifying a design that demonstrated potential for success. Once that was achieved, the researchers were able to move on to an in vivo model, an exciting milestone for their research.

Adele Gabba, a postdoc in the Kiessling Lab, conducted the in vivo experiments with Lensch, and Robert Hincapie, who conducted his PhD studies with Professor M.G. Finn at Georgia Tech, built and decorated the virus-like particles with a series of glycans that were sent to him from the researchers at MIT.

“We are discovering that carbohydrates act like a language that cells use to communicate and direct the immune system,” says Gabba. “It's thrilling that we have begun to decode this language and can now harness it to reshape immune responses.”

“The design principles behind this vaccine are rooted in extensive fundamental research conducted by previous graduate student and postdoctoral researchers over many years, focusing on optimizing lectin engagement and understanding the roles of lectins in immunity,” says Lensch. “It has been exciting to witness the translation of these concepts into therapeutic platforms across various applications.”


Study: Early dark energy could resolve cosmology’s two biggest puzzles

In the universe’s first billion years, this brief and mysterious force could have produced more bright galaxies than theory predicts.


A new study by MIT physicists proposes that a mysterious force known as early dark energy could solve two of the biggest puzzles in cosmology and fill in some major gaps in our understanding of how the early universe evolved.

One puzzle in question is the “Hubble tension,” which refers to a mismatch in measurements of how fast the universe is expanding. The other involves observations of numerous early, bright galaxies that existed at a time when the early universe should have been much less populated.

Now, the MIT team has found that both puzzles could be resolved if the early universe had one extra, fleeting ingredient: early dark energy. Dark energy is an unknown form of energy that physicists suspect is driving the expansion of the universe today. Early dark energy is a similar, hypothetical phenomenon that may have made only a brief appearance, influencing the expansion of the universe in its first moments before disappearing entirely.

Some physicists have suspected that early dark energy could be the key to solving the Hubble tension, as the mysterious force could accelerate the early expansion of the universe by an amount that would resolve the measurement mismatch.

The MIT researchers have now found that early dark energy could also explain the baffling number of bright galaxies that astronomers have observed in the early universe. In their new study, reported today in the Monthly Notices of the Royal Astronomical Society, the team modeled the formation of galaxies in the universe’s first few hundred million years. When they incorporated a dark energy component only in that earliest sliver of time, they found the number of galaxies that arose from the primordial environment bloomed to fit astronomers’ observations.

“You have these two looming open-ended puzzles,” says study co-author Rohan Naidu, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research. “We find that in fact, early dark energy is a very elegant and sparse solution to two of the most pressing problems in cosmology.”

The study’s co-authors include lead author and Kavli postdoc Xuejian (Jacob) Shen, and MIT professor of physics Mark Vogelsberger, along with Michael Boylan-Kolchin at the University of Texas at Austin, and Sandro Tacchella at the University of Cambridge.

Big city lights

Based on standard cosmological and galaxy formation models, the universe should have taken its time spinning up the first galaxies. It would have taken billions of years for primordial gas to coalesce into galaxies as large and bright as the Milky Way.

But in 2023, NASA’s James Webb Space Telescope (JWST) made a startling observation. With an ability to peer farther back in time than any observatory to date, the telescope uncovered a surprising number of bright galaxies as large as the modern Milky Way within the first 500 million years, when the universe was just 3 percent of its current age.

“The bright galaxies that JWST saw would be like seeing a clustering of lights around big cities, whereas theory predicts something like the light around more rural settings like Yellowstone National Park,” Shen says. “And we don’t expect that clustering of light so early on.”

For physicists, the observations imply that there is either something fundamentally wrong with the physics underlying the models or a missing ingredient in the early universe that scientists have not accounted for. The MIT team explored the possibility of the latter, and whether the missing ingredient might be early dark energy.

Physicists have proposed that early dark energy is a sort of antigravitational force that is turned on only at very early times. This force would counteract gravity’s inward pull and accelerate the early expansion of the universe, in a way that would resolve the mismatch in measurements. Early dark energy, therefore, is considered the most likely solution to the Hubble tension.

Galaxy skeleton

The MIT team explored whether early dark energy could also be the key to explaining the unexpected population of large, bright galaxies detected by JWST. In their new study, the physicists considered how early dark energy might affect the early structure of the universe that gave rise to the first galaxies. They focused on the formation of dark matter halos — regions of space where gravity happens to be stronger, and where matter begins to accumulate.

“We believe that dark matter halos are the invisible skeleton of the universe,” Shen explains. “Dark matter structures form first, and then galaxies form within these structures. So, we expect the number of bright galaxies should be proportional to the number of big dark matter halos.”

The team developed an empirical framework for early galaxy formation, which predicts the number, luminosity, and size of galaxies that should form in the early universe, given some measures of “cosmological parameters.” Cosmological parameters are the basic ingredients, or mathematical terms, that describe the evolution of the universe.

Physicists have determined that there are at least six main cosmological parameters, one of which is the Hubble constant — a term that describes the universe’s rate of expansion. Other parameters describe density fluctuations in the primordial soup, immediately after the Big Bang, from which dark matter halos eventually form.

The MIT team reasoned that if early dark energy affects the universe’s early expansion rate, in a way that resolves the Hubble tension, then it could affect the balance of the other cosmological parameters, in a way that might increase the number of bright galaxies that appear at early times. To test their theory, they incorporated a model of early dark energy (the same one that happens to resolve the Hubble tension) into an empirical galaxy formation framework to see how the earliest dark matter structures evolve and give rise to the first galaxies.

“What we show is, the skeletal structure of the early universe is altered in a subtle way where the amplitude of fluctuations goes up, and you get bigger halos, and brighter galaxies that are in place at earlier times, more so than in our more vanilla models,” Naidu says. “It means things were more abundant, and more clustered in the early universe.”

“A priori, I would not have expected the abundance of JWST’s early bright galaxies to have anything to do with early dark energy, but their observation that EDE pushes cosmological parameters in a direction that boosts the early-galaxy abundance is interesting,” says Marc Kamionkowski, professor of theoretical physics at Johns Hopkins University, who was not involved with the study. “I think more work will need to be done to establish a link between early galaxies and EDE, but regardless of how things turn out, it’s a clever — and hopefully ultimately fruitful — thing to try.”

“We demonstrated the potential of early dark energy as a unified solution to the two major issues faced by cosmology. This might be evidence for its existence if the observational findings of JWST get further consolidated,” Vogelsberger concludes. “In the future, we can incorporate this into large cosmological simulations to see what detailed predictions we get.”

This research was supported, in part, by NASA and the National Science Foundation.


Harnessing the power of placebo for pain relief

MIT researchers investigate the neural circuits that underlie placebos’ ability to relieve chronic and acute pain.


Placebos are inert treatments, generally not expected to impact biological pathways or improve a person’s physical health. But time and again, some patients report that they feel better after taking a placebo. Increasingly, doctors and scientists are recognizing that rather than dismissing placebos as mere trickery, they may be able to help patients by harnessing their power.

To maximize the impact of the placebo effect and design reliable therapeutic strategies, researchers need a better understanding of how it works. Now, with a new animal model developed by scientists at the McGovern Institute at MIT, they will be able to investigate the neural circuits that underlie placebos’ ability to elicit pain relief.

“The brain and body interaction has a lot of potential, in a way that we don't fully understand,” says Fan Wang, an MIT professor of brain and cognitive sciences and investigator at the McGovern Institute. “I really think there needs to be more of a push to understand placebo effect, in pain and probably in many other conditions. Now we have a strong model to probe the circuit mechanism.”

Context-dependent placebo effect

In the Sept. 5, 2024, issue of the journal Current Biology, Wang and her team report that they have elicited strong placebo pain relief in mice by activating pain-suppressing neurons in the brain while the mice are in a specific environment, thereby teaching the animals that they feel better when they are in that context. Following their training, placing the mice in that environment alone is enough to suppress pain. The team’s experiments — which were funded by the National Institutes of Health, the K. Lisa Yang Brain-Body Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics within MIT’s Yang Tan Collective — show that this context-dependent placebo effect relieves both acute and chronic pain.

Context is critical for the placebo effect. While a pill can help a patient feel better when they expect it to, even if it is made only of sugar or starch, it seems to be not just the pill that sets up those expectations, but the entire scenario in which the pill is taken. For example, being in a hospital and interacting with doctors can contribute to a patient’s perception of care, and these social and environmental factors can make a placebo effect more probable.

MIT postdocs Bin Chen and Nitsan Goldstein used visual and textural cues to define a specific place. Then they activated pain-suppressing neurons in the brain while the animals were in this “pain-relief box.” Those pain-suppressing neurons, which Wang’s lab discovered a few years ago, are located in an emotion-processing center of the brain called the central amygdala. By expressing light-sensitive channels in these neurons, the researchers were able to suppress pain with light in the pain-relief box and leave the neurons inactive when mice were in a control box.

Animals learned to prefer the pain-relief box to other environments. And when the researchers tested their response to potentially painful stimuli after they had made that association, they found the mice were less sensitive while they were there. “Just by being in the context that they had associated with pain suppression, we saw that reduced pain—even though we weren't actually activating those [pain-suppressing] neurons,” Goldstein explains.

Acute and chronic pain relief

Some scientists have been able to elicit placebo pain relief in rodents by treating the animals with morphine, linking environmental cues to the pain suppression caused by the drugs, similar to the way Wang’s team did by directly activating pain-suppressing neurons. This drug-based approach works best for setting up expectations of relief for acute pain; its placebo effect is short-lived and mostly ineffective against chronic pain. So Wang, Chen, and Goldstein were particularly pleased to find that their engineered placebo effect was effective for relieving both acute and chronic pain.

In their experiments, animals experiencing chemotherapy-induced hypersensitivity to touch showed as strong a preference for the pain-relief box as animals exposed to a chemical that induces acute pain, days after their initial conditioning. Once there, their chemotherapy-induced pain sensitivity was eliminated; they exhibited no more sensitivity to painful stimuli than they had prior to receiving chemotherapy.

One of the biggest surprises came when the researchers turned their attention back to the pain-suppressing neurons in the central amygdala that they had used to trigger pain relief. They suspected that those neurons might be reactivated when mice returned to the pain-relief box. Instead, they found that after the initial conditioning period, those neurons remained quiet. “These neurons are not reactivated, yet the mice appear to be no longer in pain,” Wang says. “So it suggests this memory of feeling well is transferred somewhere else.”

Goldstein adds that there must be a pain-suppressing neural circuit somewhere that is activated by pain-relief-associated contexts — and the team’s new placebo model sets researchers up to investigate those pathways. A deeper understanding of that circuitry could enable clinicians to deploy the placebo effect — alone or in combination with active treatments — to better manage patients’ pain in the future.


Tools for making imagination blossom at MIT.nano

New STUDIO.nano supports artistic research and encounters within MIT.nano’s facilities.


The MIT community and visitors have a new reason to drop by MIT.nano: six artworks by Brazilian artist and sculptor Denise Milan. Located in the open-air stairway connecting the first- and second-floor galleries within the nanoscience and engineering facility, the works center around the stone as a microcosm of nature. From Milan’s “Mist of the Earth” series, evocative of mandalas, the project asks viewers to reflect on the environmental changes that result from human-made development.

Milan is the inaugural artist in “Encounters,” a series presented by STUDIO.nano, a new initiative from MIT.nano that encourages the exploration of platforms and pathways at the intersection of technology, science, and art. Encounters welcomes proposals from artists, scientists, engineers, and designers from outside of the MIT community looking to collaborate with MIT.nano researchers, facilities, ongoing projects, and unique spaces.

“Life is in the art of the encounter,” remarked Milan, quoting Brazilian poet Vinicius de Moraes, during a reception at MIT.nano. “And for an artist to be in a place like this, MIT.nano, what could be better? I love the curiosity of scientists. They are very much like artists ... art and science are both tools for making imagination blossom.” What followed was a freewheeling conversation among attendees that spanned topics ranging from the cyclical nature of birth, death, and survival in the cosmos, to musings on the elemental sources of creativity and the similarities between artistic and scientific practice, to a brief lesson on time crystals by Nobel Prize laureate Frank Wilczek, the Herman Feshbach Professor of Physics at MIT.

Milan was joined in her conversation by MIT.nano Director Vladimir Bulović, the Fariborz Maseeh Professor of Emerging Technologies; Ardalan SadeghiKivi MArch ’22, who moderated the discussion; Samantha Farrell, manager of STUDIO.nano programming; and Naomi Moniz, professor emeritus at Georgetown University, who connected Milan and her work with MIT.nano.

“In addition to the technical community, we [at MIT.nano] have been approached by countless artists and thinkers in the humanities who, to our delight, are eager to learn about the wonders of the nanoscale and how to use the tools of MIT.nano to explore and expand their own artistic practice,” said Bulović.

These interactions have spurred collaborative projects across disciplines, art exhibitions, and even MIT classes. For the past four years, MIT.nano has hosted 4.373/4.374 (Creating Art, Thinking Science), an undergraduate and graduate class offered by the Art, Culture, and Technology (ACT) Program. To date, the class has brought 35 students into MIT.nano’s labs and resulted in 40 distinct projects and 60 pieces of art, many of which are on display in MIT.nano’s galleries.

With the launch of STUDIO.nano, MIT.nano will look to expand its exhibition programs, including supporting additional digital media and augmented/virtual reality projects; providing tools and spaces for development of new classes envisioned by MIT academic departments; and introducing programming such as lectures related to the studio's activities.

Milan’s work will be a permanent installation at MIT.nano, where she hopes it will encourage individuals to pursue their creative inspiration, regardless of discipline. “To exist or to disappear?” Milan asked. “If it’s us, an idea, or a dream — the question is how much of an assignment you have with your own imagination.”


No detail too small

For Sarah Sterling, the new director of the Cryo-Electron Microscopy facility at MIT.nano, better planning and more communication lead to better science.


Sarah Sterling, director of the Cryo-Electron Microscopy, or Cryo-EM, core facility, often compares her job to running a small business. Each day brings a unique set of jobs ranging from administrative duties and managing facility users to balancing budgets and maintaining equipment.

Although one could easily be overwhelmed by the seemingly never-ending to-do list, Sterling finds a great deal of joy in wearing so many different hats. One of her most essential tasks involves clear communication with users when the delicate instruments in the facility are unusable because of routine maintenance and repairs.

“Better planning allows for better science,” Sterling says. “Luckily, I’m very comfortable with building and fixing things. Let’s troubleshoot. Let’s take it apart. Let’s put it back together.”

Out of all her duties as a core facility director, she most looks forward to the opportunities to teach, especially helping students develop research projects.

“Undergraduate or early-stage graduate students ask the best questions,” she says. “They’re so curious about the tiny details, and they’re always ready to hit the ground running on their projects.”

A non-linear scientific journey

When Sterling enrolled in Russell Sage College, a women’s college in New York, she was planning to pursue a career as a physical therapist. However, she quickly realized she loved her chemistry classes more than her other subjects. She graduated with a bachelor of science degree in chemistry and immediately enrolled in a master’s degree program in chemical engineering at the University of Maine.

Sterling was convinced to continue her studies at the University of Maine with a dual PhD in chemical engineering and biomedical sciences. That decision required the daunting process of taking two sets of core courses and completing a qualifying exam in each field. 

“I wouldn’t recommend doing that,” she says with a laugh. “To celebrate after finishing that intense experience, I took a year off to figure out what came next.”

Sterling chose to do a postdoc in the lab of Eva Nogales, a structural biology professor at the University of California at Berkeley. Nogales was looking for a scientist with experience working with lipids, a class of molecules that Sterling had studied extensively in graduate school.

At the time Sterling joined, the Nogales Lab was at the forefront of implementing an exciting structural biology approach: cryo-EM.

“When I was interviewing, I’d never even seen the type of microscope required for cryo-EM, let alone performed any experiments,” Sterling says. “But I remember thinking ‘I’m sure I can figure this out.’”

Cryo-EM is a technique that allows researchers to determine the three-dimensional shape, or structure, of the macromolecules that make up cells. A researcher can take a sample of their macromolecule of choice, suspend it in a liquid solution, and rapidly freeze it onto a grid to capture the macromolecules in random positions — the “cryo” part of the name. Powerful electron microscopes then collect images of the macromolecule — the EM part of cryo-EM. 

The two-dimensional images of the macromolecules from different angles can be combined to produce a three-dimensional structure. Structural information like this can reveal the macromolecule’s function inside cells or inform how it differs in a disease state. The rapidly expanding use of cryo-EM has unlocked so many mechanistic insights that the researchers who developed the technology were awarded the 2017 Nobel Prize in Chemistry. 

The MIT.nano facility opened its doors in 2018. The open-access, state-of-the-art facility now has more than 160 tools and more than 1,500 users representing nearly every department at MIT. The Cryo-EM facility lives in the basement of the MIT.nano building and houses multiple electron microscopes and laboratory space for cryo-specimen preparation.

Thanks to her work at UC Berkeley, Sterling’s career trajectory has long been intertwined with the expanding use of cryo-EM in research. Sterling anticipated the need for experienced scientists to run core facilities in order to maintain the electron microscopes needed for cryo-EM, which range in cost from a staggering $1 million to $10 million each.

After completing her postdoc, Sterling worked at the Harvard University cryo-EM core facility for five years. When the director position for the MIT.nano Cryo-EM facility opened, she decided to apply.

“I like that the core facility at MIT was smaller and more frequently used by students,” Sterling says. “There’s a lot more teaching, which is a challenge sometimes, but it’s rewarding to impact someone’s career at such an early stage.”

A focus on users

When Sterling arrived at MIT, her first initiative was to meet directly with all the students in research labs that use the core facility to learn what would make using the facility a better experience. She also implemented clear and standard operating procedures for cryo-EM beginners.

“I think being consistent and available has really improved users’ experiences,” Sterling says.

The users themselves report that her initiatives have proven highly successful — and have helped them grow as scientists.

“Sterling cultivates an environment where I can freely ask questions about anything to support my learning,” says Bonnie Su, a frequent Cryo-EM facility user and graduate student from the Vos lab.

But Sterling does not want to stop there. Looking ahead, she hopes to expand the facility by acquiring an additional electron microscope to allow more users to utilize this powerful technology in their research. She also plans to build a more collaborative community of cryo-EM scientists at MIT with additional symposia and casual interactions such as coffee hours.

Under her management, cryo-EM research has flourished. In the last year, the Cryo-EM core facility has supported research resulting in 12 new publications across five different departments at MIT. The facility has also provided access to 16 industry and non-MIT academic entities. These studies have revealed important insights into various biological processes, from visualizing how large protein machinery reads our DNA to the protein aggregates found in neurodegenerative disorders.

Sterling encourages anyone in the MIT community who wants to conduct cryo-EM experiments, or simply learn more about the technique, to reach out.

“Come visit us!” she says. “We give lots of tours, and you can stop by to say hi anytime.”


Study assesses seizure risk from stimulating the thalamus

In animal models, even low stimulation currents can sometimes still cause electrographic seizures, researchers found.


The idea of electrically stimulating a brain region called the central thalamus has gained traction among researchers and clinicians because it can help arouse subjects from unconscious states induced by traumatic brain injury or anesthesia, and can boost cognition and performance in awake animals. But the method, called CT-DBS, can have a side effect: seizures. A new study by researchers at MIT and Massachusetts General Hospital (MGH) who were testing the method in awake mice quantifies the probability of seizures at different stimulation currents and cautions that they sometimes occurred even at low levels.

“Understanding production and prevalence of this type of seizure activity is important because brain stimulation-based therapies are becoming more widely used,” says co-senior author Emery N. Brown, Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience in The Picower Institute for Learning and Memory, the Institute for Medical Engineering and Science, the Department of Brain and Cognitive Sciences, and the Center for Brains Minds and Machines (CBMM) at MIT.

In the brain, the seizures associated with CT-DBS occur as “electrographic seizures,” which are bursts of voltage among neurons across a broad spectrum of frequencies. Behaviorally, they manifest as “absence seizures” in which the subject appears to take on a blank stare and freezes for about 10-20 seconds.

In their study, the researchers were hoping to determine a CT-DBS stimulation current — in a clinically relevant range of under 200 microamps — below which seizures could be reliably avoided.

In search of that ideal current, they developed a protocol of starting brief bouts of CT-DBS at 1 microamp and then incrementally ramping the current up to 200 microamps until they found a threshold where an electrographic seizure occurred. Once they found that threshold, they tested a longer bout of stimulation at the next-lowest current level in hopes that an electrographic seizure wouldn’t occur. They did this for a variety of different stimulation frequencies. To their surprise, electrographic seizures still occurred 2.2 percent of the time during those longer stimulation trials (22 times out of 996 tests) and in 10 out of 12 mice. At just 20 microamps, mice still experienced seizures in three out of 244 tests, a 1.2 percent rate.

“This is something that we needed to report because this was really surprising,” says co-lead author Francisco Flores, a research affiliate in The Picower Institute and CBMM, and an instructor in anesthesiology at MGH, where Brown is also an anesthesiologist. Isabella Dalla Betta, a technical associate in The Picower Institute, co-led the study published in Brain Stimulation.

Stimulation frequency didn’t matter for seizure risk, but the rate of electrographic seizures increased as the current level increased. For instance, seizures occurred in five out of 190 tests at 50 microamps and in two out of 65 tests at 100 microamps. The researchers also found that when an electrographic seizure occurred, it did so more quickly at higher currents than at lower ones. Finally, they saw that seizures happened more quickly when they stimulated the thalamus on both sides of the brain rather than just one side. Some mice exhibited behaviors similar to absence seizures, while others became hyperactive.
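For readers who want the percentages made explicit, the short sketch below simply recomputes the seizure rates from the counts reported above; the grouping labels are descriptive paraphrases, not terms from the study.

```python
# Recompute the electrographic-seizure rates from the counts reported above.
counts = {
    "longer trials, all currents": (22, 996),
    "20 microamps":                (3, 244),
    "50 microamps":                (5, 190),
    "100 microamps":               (2, 65),
}

for condition, (seizures, tests) in counts.items():
    rate = 100 * seizures / tests
    print(f"{condition}: {seizures}/{tests} tests, {rate:.1f}% seizure rate")
# 22/996 -> 2.2% and 3/244 -> 1.2%, matching the percentages quoted above;
# the rate climbs with current (about 2.6% at 50 uA and 3.1% at 100 uA).
```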

It is not clear why some mice experienced electrographic seizures at just 20 microamps while two mice did not experience the seizures even at 200. Flores speculated that there may be different brain states that change the predisposition to seizures amid stimulation of the thalamus. Notably, seizures are not typically observed in humans who receive CT-DBS while in a minimally conscious state after a traumatic brain injury or in animals who are under anesthesia. Flores said the next stage of the research would aim to discern what the relevant brain states may be.

In the meantime, the study authors wrote, “EEG should be closely monitored for electrographic seizures when performing CT-DBS, especially in awake subjects.”

The paper’s co-senior author is Matt Wilson, Sherman Fairchild Professor in The Picower Institute, CBMM, and the departments of Biology and Brain and Cognitive Sciences. In addition to Dalla Betta, Flores, Brown and Wilson, the study’s other authors are John Tauber, David Schreier, and Emily Stephen.

Support for the research came from The JPB Foundation; The Picower Institute for Learning and Memory; George J. Elbaum ’59, SM ’63, PhD ’67, Mimi Jensen, Diane B. Greene SM ’78, Mendel Rosenblum, Bill Swanson, and annual donors to the Anesthesia Initiative Fund; and the National Institutes of Health.


Atoms on the edge

Physicists capture images of ultracold atoms flowing freely, without friction, in an exotic “edge state.”


Typically, electrons are free agents that can move through most metals in any direction. When they encounter an obstacle, the charged particles experience friction and scatter randomly like colliding billiard balls.

But in certain exotic materials, electrons can appear to flow with single-minded purpose. In these materials, electrons may become locked to the material’s edge and flow in one direction, like ants marching single-file along a blanket’s boundary. In this rare “edge state,” electrons can flow without friction, gliding effortlessly around obstacles as they stick to their perimeter-focused flow. Unlike in a superconductor, where all electrons in a material flow without resistance, the current carried by edge modes occurs only at a material’s boundary.

Now MIT physicists have directly observed edge states in a cloud of ultracold atoms. For the first time, the team has captured images of atoms flowing along a boundary without resistance, even as obstacles are placed in their path. The results, which appear today in Nature Physics, could help physicists manipulate electrons to flow without friction in materials that could enable super-efficient, lossless transmission of energy and data.

“You could imagine making little pieces of a suitable material and putting it inside future devices, so electrons could shuttle along the edges and between different parts of your circuit without any loss,” says study co-author Richard Fletcher, assistant professor of physics at MIT. “I would stress though that, for us, the beauty is seeing with your own eyes physics which is absolutely incredible but usually hidden away in materials and unable to be viewed directly.”

The study’s co-authors at MIT include graduate students Ruixiao Yao and Sungjae Chi, former graduate students Biswaroop Mukherjee PhD ’20 and Airlia Shaffer PhD ’23, along with Martin Zwierlein, the Thomas A. Frank Professor of Physics. The co-authors are all members of MIT’s Research Laboratory of Electronics and the MIT-Harvard Center for Ultracold Atoms.

Forever on the edge

Physicists first invoked the idea of edge states to explain a curious phenomenon, known today as the quantum Hall effect, that scientists first observed in 1980 in experiments with layered materials in which electrons were confined to two dimensions. These experiments were performed at ultracold temperatures and under a magnetic field. When scientists tried to send a current through these materials, they observed that the electrons did not flow straight through, but instead accumulated on one side of the material, in precise quantum portions.
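
Those “precise quantum portions” correspond to the quantization of the Hall conductance. In the standard textbook statement (the article does not spell out the formula), the transverse conductance takes the values

\sigma_{xy} = \nu \, \frac{e^{2}}{h}, \qquad \nu = 1, 2, 3, \ldots

where e is the electron charge and h is Planck’s constant.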

To try and explain this strange phenomenon, physicists came up with the idea that these Hall currents are carried by edge states. They proposed that, under a magnetic field, electrons in an applied current could be deflected to the edges of a material, where they would flow and accumulate in a way that might explain the initial observations.

“The way charge flows under a magnetic field suggests there must be edge modes,” Fletcher says. “But to actually see them is quite a special thing because these states occur over femtoseconds, and across fractions of a nanometer, which is incredibly difficult to capture.”

Rather than try and catch electrons in an edge state, Fletcher and his colleagues realized they might be able to recreate the same physics in a larger and more observable system. The team has been studying the behavior of ultracold atoms in a carefully designed setup that mimics the physics of electrons under a magnetic field.

“In our setup, the same physics occurs in atoms, but over milliseconds and microns,” Zwierlein explains. “That means that we can take images and watch the atoms crawl essentially forever along the edge of the system.”

A spinning world

In their new study, the team worked with a cloud of about 1 million sodium atoms, which they corralled in a laser-controlled trap, and cooled to nanokelvin temperatures. They then manipulated the trap to spin the atoms around, much like riders on an amusement park Gravitron.

“The trap is trying to pull the atoms inward, but there’s centrifugal force that tries to pull them outward,” Fletcher explains. “The two forces balance each other, so if you’re an atom, you think you’re living in a flat space, even though your world is spinning. There’s also a third force, the Coriolis effect, such that if they try to move in a line, they get deflected. So these massive atoms now behave as if they were electrons living in a magnetic field.”
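
The analogy can be made quantitative. In a frame rotating at angular velocity Ω, a neutral atom of mass m feels a Coriolis force with the same mathematical form as the Lorentz force on a charge q in a magnetic field B (a standard textbook correspondence; the article itself keeps the analogy qualitative):

\vec{F}_{\mathrm{Coriolis}} = 2m \, \vec{v} \times \vec{\Omega} \quad \longleftrightarrow \quad \vec{F}_{\mathrm{Lorentz}} = q \, \vec{v} \times \vec{B}

Matching the two expressions gives an effective field q\vec{B}_{\mathrm{eff}} = 2m\vec{\Omega}, which is why the spinning, neutral atoms can mimic electrons in a magnetic field.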

Into this manufactured reality, the researchers then introduced an “edge,” in the form of a ring of laser light, which formed a circular wall around the spinning atoms. As the team took images of the system, they observed that when the atoms encountered the ring of light, they flowed along its edge, in just one direction.

“You can imagine these are like marbles that you’ve spun up really fast in a bowl, and they just keep going around and around the rim of the bowl,” Zwierlein offers. “There is no friction. There is no slowing down, and no atoms leaking or scattering into the rest of the system. There is just beautiful, coherent flow.”

“These atoms are flowing, free of friction, for hundreds of microns,” Fletcher adds. “To flow that long, without any scattering, is a type of physics you don’t normally see in ultracold atom systems.”

This effortless flow held up even when the researchers placed an obstacle in the atoms’ path: a point of light shone along the edge of the original laser ring, like a speed bump. Rather than slowing or scattering, the atoms glided right past the obstacle without the friction they would normally feel.

“We intentionally send in this big, repulsive green blob, and the atoms should bounce off it,” Fletcher says. “But instead what you see is that they magically find their way around it, go back to the wall, and continue on their merry way.”

The team’s observations in atoms document the same behavior that has been predicted to occur in electrons. Their results show that the setup of atoms is a reliable stand-in for studying how electrons would behave in edge states.

“It’s a very clean realization of a very beautiful piece of physics, and we can directly demonstrate the importance and reality of this edge,” Fletcher says. “A natural direction is to now introduce more obstacles and interactions into the system, where things become more unclear as to what to expect.”

This research was supported, in part, by the National Science Foundation.


3 Questions: Evidence for planetary formation through gravitational instability

Assistant Professor Richard Teague describes how movement of unstable gas in a protoplanetary disk lends credibility to a secondary theory of planetary formation.


Exoplanets form in protoplanetary disks, collections of space dust and gas orbiting a star. In the leading theory of planetary formation, called core accretion, grains of dust in the disk collect and grow to form a planetary core, like a snowball rolling downhill. Once the core has a strong enough gravitational pull, other material collapses around it to form the atmosphere.

A secondary theory of planetary formation is gravitational collapse. In this scenario, the disk itself becomes gravitationally unstable and collapses to form the planet, like snow being plowed into a pile. This process requires the disk to be massive, and until recently there were no known viable candidates to observe; previous research had detected the snow pile, but not what made it.

But in a new paper published today in Nature, MIT Kerr-McGee Career Development Professor Richard Teague and his colleagues report evidence that the movement of the gas surrounding the star AB Aurigae behaves as one would expect in a gravitationally unstable disk, matching numerical predictions. Their finding is akin to detecting the snowplow that made the pile. This indicates that gravitational collapse is a viable method of planetary formation. Here, Teague, who studies the formation of planetary systems in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), answers a few questions about the new work.

Q: What made the AB Aurigae system a good candidate for observation?

A: There have been plenty of observations that have suggested some interesting dynamics going on in the system. Groups have seen spiral arms within the disk; people have found hot spots, which some groups have interpreted as a planet and others have explained as some other instability. But it was really a disk where we knew there were lots of interesting motions going on. The data that we had previously was enough to see that it was interesting, but not really good enough to detail what was going on.

Q: What is gravitational instability when it comes to protoplanetary disks?

A: Gravitational instabilities are where the gravity from the disk itself is strong enough to perturb motions within the disk. Usually, we assume that the gravitational potential is dominated by the central star, which is the case when the mass of the disk is less than 10 percent of the stellar mass (which is most of the time). When the disk mass gets too large, gravitational potential will affect it in different ways and drive these very large spiral arms in the disk. These can have lots of different effects: They can trap the gas, they can heat it up, they can allow for angular momentum to be transported very rapidly within the disk. If it’s unstable, the disk can fragment and collapse directly to form a planet in an incredibly short period of time. Rather than the tens of thousands of years that it would take for core accretion to happen, this would happen in a fraction of that time.
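
For readers who want a quantitative handle on “too large,” the standard yardstick in disk dynamics is the Toomre parameter (a textbook criterion, not one Teague invokes explicitly in this interview):

Q = \frac{c_s \, \kappa}{\pi G \Sigma}

where c_s is the sound speed of the gas, \kappa is the epicyclic frequency, \Sigma is the disk’s surface density, and G is the gravitational constant. A disk becomes gravitationally unstable roughly when Q \lesssim 1, which for typical disks corresponds to a disk mass of around a tenth of the stellar mass, consistent with the threshold described above.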

Q: How does this discovery challenge conventional wisdom around planetary formation?

A: It shows that this alternative path of forming planets via direct collapse is a way that we can form planets. This is particularly important because we’re finding more and more evidence of very large planets — say, Jupiter mass or larger — that are sitting very far away from their star. Those sorts of planets are incredibly hard to form with core accretion, because you typically need them close to the star where things happen quickly. So to form something so massive, so far away from the star is a real challenge. If we're able to show that there are sources that are massive enough that they're gravitationally unstable, this solves that problem. It's a way that perhaps newer systems can be formed, because they've always been a bit of a challenge to understand how they came about with core accretion.


MIT chemists explain why dinosaur collagen may have survived for millions of years

The researchers identified an atomic-level interaction that prevents peptide bonds from being broken down by water.


Collagen, a protein found in bones and connective tissue, has been found in dinosaur fossils as old as 195 million years. That far exceeds the normal half-life of the peptide bonds that hold proteins together, which is about 500 years.
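
As a back-of-the-envelope illustration (not a calculation from the paper), assume ordinary, unprotected peptide bonds hydrolyzing with first-order kinetics and a 500-year half-life. The surviving fraction after time t is

N(t)/N_{0} = (1/2)^{t / t_{1/2}}

and 195 million years corresponds to about 390,000 half-lives, for a surviving fraction of roughly 2^{-390,000}, or about 10^{-117,000}: effectively zero.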

A new study from MIT offers an explanation for how collagen can survive for so much longer than expected. The research team found that a special atomic-level interaction defends collagen from attack by water molecules. This barricade prevents water from breaking the peptide bonds through a process called hydrolysis.

“We provide evidence that that interaction prevents water from attacking the peptide bonds and cleaving them. That just flies in the face of what happens with a normal peptide bond, which has a half-life of only 500 years,” says Ron Raines, the Firmenich Professor of Chemistry at MIT.

Raines is the senior author of the new study, which appears today in ACS Central Science. MIT postdoc Jinyi Yang PhD ’24 is the lead author of the paper. MIT postdoc Volga Kojasoy and graduate student Gerard Porter are also authors of the study.

Water-resistant

Collagen is the most abundant protein in animals, and it is found not only in bones but also in skin, muscles, and ligaments. It’s made from long strands of protein that intertwine to form a tough triple helix.

“Collagen is the scaffold that holds us together,” Raines says. “What makes the collagen protein so stable, and such a good choice for this scaffold, is that unlike most proteins, it’s fibrous.”

In the past decade, paleobiologists have found evidence of collagen preserved in dinosaur fossils, including an 80-million-year-old Tyrannosaurus rex fossil, and a sauropodomorph fossil that is nearly 200 million years old.

Over the past 25 years, Raines’ lab has been studying collagen and how its structure enables its function. In the new study, they revealed why the peptide bonds that hold collagen together are so resistant to being broken down by water.

Peptide bonds are formed between a carbon atom from one amino acid and a nitrogen atom of the adjacent amino acid. The carbon atom also forms a double bond with an oxygen atom, forming a molecular structure called a carbonyl group. This carbonyl oxygen has a pair of electrons that don’t form bonds with any other atoms. Those electrons, the researchers found, can be shared with the carbonyl group of a neighboring peptide bond.

Because this pair of electrons is donated into the neighboring peptide bond, water molecules can’t also get into the structure to attack and disrupt the bond.

To demonstrate this, Raines and his colleagues created two interconverting mimics of collagen — one in the form that usually makes up the triple helix, known as trans, and another in which the angles of the peptide bonds are rotated into a different form, known as cis. They found that the trans form of collagen did not allow water to attack and hydrolyze the bond. In the cis form, water got in and the bonds were broken.

“A peptide bond is either cis or trans, and we can change the cis to trans ratio. By doing that, we can mimic the natural state of collagen or create an unprotected peptide bond. And we saw that when it was unprotected, it was not long for the world,” Raines says.

“This work builds on a long-term effort in the Raines Group to classify the role of a long-overlooked fundamental interaction in protein structure,” says Paramjit Arora, a professor of chemistry at New York University, who was not involved in the research. “The paper directly addresses the remarkable finding of intact collagen in the ribs of a 195-million-year-old dinosaur fossil, and shows that overlap of filled and empty orbitals controls the conformational and hydrolytic stability of collagen.”

“No weak link”

This sharing of electrons has also been seen in protein structures known as alpha helices, which are found in many proteins. These helices may also be protected from water, but the helices are always connected by protein sequences that are more exposed, which are still susceptible to hydrolysis.

“Collagen is all triple helices, from one end to the other,” Raines says. “There’s no weak link, and that’s why I think it has survived.”

Previously, some scientists have suggested other explanations for why collagen might be preserved for millions of years, including the possibility that the bones were so dehydrated that no water could reach the peptide bonds.

“I can’t discount the contributions from other factors, but 200 million years is a long time, and I think you need something at the molecular level, at the atomic level in order to explain it,” Raines says.

The research was funded by the National Institutes of Health and the National Science Foundation.


Engineering proteins to treat cancer

PhD student Oscar Molina seeks new ways to assemble proteins into targeted cancer therapies, while also encouraging his fellow first-generation graduate students.


Like many children of first-generation immigrants, Oscar Molina grew up feeling like he had two career choices: doctor or lawyer. He seemed destined for the former as he excelled in high school and planned to major in biochemistry at the University of California at Los Angeles, but as an undergraduate, he fell in love with research.

“I was fascinated by discovery. As I did it more in college, I realized I didn’t want to be a doctor,” he says. “Once I saw that I could make an impact and be at the forefront of therapy with biotech, I knew I wanted to do that.”

If the next couple of years go as planned, his parents will indeed see their son become a doctor — just not exactly the way they might have guessed. He’s entering the fifth year of his PhD program in biology at MIT and is currently working in the lab of Professor Ronald Raines, researching the potential of proteins to kill cancer cells.

Molina, who is the first in his family to attend college, also works to support his fellow students through outreach and community-building efforts. In various roles, including as a Graduate Community Fellow in MIT’s Office of Graduate Education, he sought to connect and encourage students from underrepresented backgrounds as they pursued their own graduate studies.

“I had a lot of opportunities presented to me that made me ask, ‘Why me?’” he says. “I recognize that they were super valuable, and that’s why I should deliver that back to other people.”

Unlocking protein construction chemically

The spirit of giving back isn’t just limited to Molina’s work outside of the lab. He chose chemical biology and the pursuit of new cancer therapies as his research focus partly because his grandfather has been dealing with the disease for the last 10 years. The ultimate goal guiding his research is to make all protein-based cancer therapies more effective.

He and other collaborators in the Raines Lab published a paper in June that takes an important step in that direction, suggesting a way to make fusion proteins with greater customization and improved performance. They discovered that a chemical called 3-bromo-5-methylene pyrrolone can be used to combine three proteins efficiently and with a high level of control and modularity, a significant advance given that most techniques for protein conjugation can combine only two proteins at a time at a single site.

“Now, we can have chemical control of where we include different things, where we can kind of plug-and-play,” he says.

Researchers can now adjust multiple characteristics at the same time — for example, increasing a protein’s half-life or improving its ability to target cancer cells — while still achieving a homogeneous end product. The resulting multi-protein constructs are also relevant to immune cell redirection therapies, which require multimeric protein chimeras to activate immune clearance of cancer cells.

“That’s the most interesting thing to me,” he says. “How do we give a biologic therapy the best opportunity to be active and efficacious?”

His upcoming thesis will center on that question as it relates to chemotherapies based on ribonuclease 1, an enzyme best known for cleaving RNA.

Paying it back and paying it forward

While that thesis will likely demand more of Molina than any other project he’s worked on in the past, he’s no stranger to hard work. After his mother and father left their respective homes of Guatemala and El Salvador in the 1990s, they dedicated their lives to giving their children futures that they themselves didn’t have access to.

Witnessing their efforts impressed two beliefs into Molina’s worldview: the value of education and the importance of support. Among his family, he is the first to graduate from a U.S. high school, the first to attend a four-year college, and the first to attend graduate school. These “firsts” can weigh heavily, and as he began his studies at MIT, he knew how difficult it can be to carry that burden alone.

“I saw the need and wanted to help other people be the first in their family to do things like go to college,” he says. “I also wanted to help people with similar backgrounds to mine, like being an underrepresented minority or a first-generation college student.”

That desire led Molina to join MIT’s Office of Graduate Education as a Graduate Community Fellow in January 2022, where he worked to support various affinity groups across the Institute. This included helping groups with logistics, funding applications, community outreach, and cross-group collaborations. He also spent part of last summer as a pod leader for the MIT Summer Research Program, which works to prepare underrepresented students for graduate education and research.

He’s also leveraged his personal interests to volunteer with various community organizations in Cambridge and Boston. Despite his numerous commitments, he’s an avid marathon runner, and he ran the 2022 Boston Marathon while raising nearly $8,000 for Boston Scores, a program that provides educational and athletic opportunities for students in the Boston Public Schools system.

After graduation, Molina plans on joining a startup in Boston’s biotech scene while learning more about the venture capital firms that fund their research. Wherever he ends up, he plans on continuing to apply the core truths that brought him where he is now.

“I want to be at the forefront of creating therapies. I really like science. I really like helping others. I really like the ability to create things that are impactful,” he says. “Now it’s time to take that and find my way to what’s next.”


Nurturing success

Professors Mariya Grinberg and Nuh Gedik are honored as “Committed to Caring.”


The start and finish of a degree program are pivotal moments in the lives of MIT's graduate students. In her first three years in MIT’s Department of Political Science, Professor Mariya Grinberg has mentored numerous students, helping them start their graduate journeys with confidence and direction. Nuh Gedik, who joined the Department of Physics in 2008, looks to the finish line: he finds joy in seeing his students reach personal and professional success at the end of their PhDs. Both were recently honored as “Committed to Caring” for their support of graduate students.

Mariya Grinberg: Commitment to intellectual growth

When Mariya Grinberg joined the MIT Security Studies Program as a faculty member in 2021, the department was in a state of flux. The Covid-19 pandemic was in full swing, several core faculty members were nearing retirement, and the program had welcomed the largest cohort of PhD students in its history. As Grinberg entered the community, she embraced these challenges, meeting and exceeding her expected duties as an advisor.

In her role as assistant professor of political science, Grinberg’s research interests center on the question of how time and uncertainty shape the strategic decisions of states, focusing on economic statecraft, military planning, and questions of state sovereignty.

As a junior faculty member, Grinberg shoulders one of the largest advising loads in the department. Despite this, multiple nominators praised Grinberg for her prompt and discerning feedback. Students note her efforts in reading through and commenting on many rounds of paper drafts, supplemented by hour-long brainstorming sessions at her whiteboard. “It's rare that someone can become both your most incisive critic and staunchest advocate,” a nominator noted. “I never took it for granted.”

Throughout these sessions, Grinberg delivers her advice with both confidence and empathy. One nominator shared how meetings put them at ease: “Normally, I am quite anxious about meeting with faculty, but I never felt that way during my meetings with Mariya.”

Grinberg believes that failure is an integral part of the learning process and encourages her students to embrace and learn from setbacks. She acknowledges that the pressure to accomplish tasks within time constraints often leaves little room for failure, which can lead to decision paralysis. Grinberg reassures her students that investing time in a dissertation idea, even if it turns out to be non-viable, is not time wasted.

When asked about her philosophy on mentorship, Grinberg emphasizes that the advice of mentors is just that — advice. It represents their best effort to steer students in what they perceive to be a fruitful direction, but it does not mean the advice is invariably correct. Grinberg encourages students to critically evaluate any feedback and make their own judgments that may not align with their advisor's thoughts.

Grinberg shares a concept she first learned from a creative writing professor: “When someone tells you there is something wrong with your work, 90 percent of the time they are right. When someone tells you how to fix it, 90 percent of the time they are wrong.”

Nuh Gedik: Mentoring the next generation of scientists

Gedik is the Donner Professor of Physics at MIT. His group investigates quantum materials by using advanced optical and electron-based spectroscopies. Gedik employs these techniques to study topological insulators, high-temperature superconductors, and atomically layered materials.

When asked what keeps him motivated, Gedik says that he is driven by the professional development of his students. Gedik prioritizes the growth of his students above all else, and believes that academic output follows naturally from personal and professional growth. One nominator shared one of Gedik’s favorite sayings: “Finding a job for you is my job.”

As a result of this mindset, the alumni of Gedik’s group have achieved spectacular professional success, including members who are now faculty at top universities such as Stanford, Harvard, and Columbia. Several group members are also in leadership roles at companies such as Intel, Meta, and ASML.

Alongside his academic pursuits, Gedik is deeply committed to promoting diversity, equity, and inclusion within his research group and the broader academic community. He dedicates regular portions of the weekly group meetings to discussing literature and practices related to these topics. Not only do these discussions educate the group on important issues, but they also help lab members integrate inclusive practices into their day-to-day endeavors.

By integrating inclusive principles into his teaching and mentoring, Gedik creates a culture where students are supported personally and academically. In fact, a nominator shared that many of these practices stem from the professional development courses that Gedik voluntarily attends. His proactive approach not only benefits his current students, but also sets a standard that influences others as well.

In addition to his efforts within the lab, Gedik is proactive in scientific outreach and mentorship within the broader community. He attends annual science fairs in educationally under-resourced communities, aiming to inspire the younger generation to pursue careers in STEM. One nominator praises these fairs for “igniting interest in science and technology among diverse audiences.”


Scientists find neurons that process language on different timescales

In language-processing areas of the brain, some cell populations respond to one word, while others respond to strings of words.


Using functional magnetic resonance imaging (fMRI), neuroscientists have identified several regions of the brain that are responsible for processing language. However, discovering the specific functions of neurons in those regions has proven difficult because fMRI, which measures changes in blood flow, doesn’t have high enough resolution to reveal what small populations of neurons are doing.

Now, using a more precise technique that involves recording electrical activity directly from the brain, MIT neuroscientists have identified different clusters of neurons that appear to process different amounts of linguistic context. These “temporal windows” range from just one word up to about six words.

The temporal windows may reflect different functions for each population, the researchers say. Populations with shorter windows may analyze the meanings of individual words, while those with longer windows may interpret more complex meanings created when words are strung together.

“This is the first time we see clear heterogeneity within the language network,” says Evelina Fedorenko, an associate professor of neuroscience at MIT. “Across dozens of fMRI experiments, these brain areas all seem to do the same thing, but it’s a large, distributed network, so there’s got to be some structure there. This is the first clear demonstration that there is structure, but the different neural populations are spatially interleaved so we can’t see these distinctions with fMRI.”

Fedorenko, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Human Behaviour. MIT postdoc Tamar Regev and Harvard University graduate student Colton Casto are the lead authors of the paper.

Temporal windows

Functional MRI, which has helped scientists learn a great deal about the roles of different parts of the brain, works by measuring changes in blood flow in the brain. These measurements act as a proxy for neural activity during a particular task. However, each “voxel,” or three-dimensional chunk, of an fMRI image represents hundreds of thousands to millions of neurons and sums up activity across about two seconds, so it can’t reveal fine-grained detail about what those neurons are doing.

One way to get more detailed information about neural function is to record electrical activity using electrodes implanted in the brain. These data are hard to come by because this procedure is done only in patients who are already undergoing surgery for a neurological condition such as severe epilepsy.

“It can take a few years to get enough data for a task because these patients are relatively rare, and in a given patient electrodes are implanted in idiosyncratic locations based on clinical needs, so it takes a while to assemble a dataset with sufficient coverage of some target part of the cortex. But these data, of course, are the best kind of data we can get from human brains: You know exactly where you are spatially and you have very fine-grained temporal information,” Fedorenko says.

In a 2016 study, Fedorenko reported using this approach to study the language processing regions of six people. Electrical activity was recorded while the participants read four different types of language stimuli: complete sentences, lists of words, lists of non-words, and “jabberwocky” sentences — sentences that have grammatical structure but are made of nonsense words.

Those data showed that in some neural populations in language processing regions, activity would gradually build up over a period of several words when the participants were reading sentences. However, this did not happen when they read lists of words, lists of non-words, or jabberwocky sentences.

In the new study, Regev and Casto went back to those data and analyzed the temporal response profiles in greater detail. In their original dataset, they had recordings of electrical activity from 177 language-responsive electrodes across the six patients. Conservative estimates suggest that each electrode represents the average activity of about 200,000 neurons. They also obtained new data from a second set of 16 patients, which included recordings from another 362 language-responsive electrodes.

When the researchers analyzed these data, they found that in some of the neural populations, activity would fluctuate up and down with each word. In others, however, activity would build up over multiple words before falling again, and yet others would show a steady buildup of neural activity over longer spans of words.

By comparing their data with predictions made by a computational model that the researchers designed to process stimuli with different temporal windows, the researchers found that neural populations from language processing areas could be divided into three clusters. These clusters represent temporal windows of either one, four, or six words.
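
To make the logic of that model comparison concrete, here is a minimal sketch in Python. It is not the authors’ model: the boxcar-style integration, the function names, and the simulated data are all invented for illustration. The idea is simply to predict what an electrode’s response should look like if it integrated activity over the last one, four, or six words, and then assign each electrode to the window whose prediction fits best.

import numpy as np

def window_prediction(word_drive, k):
    """Predicted response if the signal integrates over the last k words (a running sum, z-scored)."""
    pred = np.convolve(word_drive, np.ones(k), mode="full")[: len(word_drive)]
    return (pred - pred.mean()) / pred.std()

def assign_window(response, word_drive, windows=(1, 4, 6)):
    """Pick the window size whose prediction correlates best with the measured response."""
    scores = {k: np.corrcoef(response, window_prediction(word_drive, k))[0, 1]
              for k in windows}
    return max(scores, key=scores.get), scores

# Hypothetical usage with simulated data for a "six-word" electrode.
rng = np.random.default_rng(0)
word_drive = rng.random(60)  # per-word input strength
response = window_prediction(word_drive, 6) + 0.3 * rng.standard_normal(60)
best, scores = assign_window(response, word_drive)
print(best, {k: round(float(v), 2) for k, v in scores.items()})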

“It really looks like these neural populations integrate information across different timescales along the sentence,” Regev says.

Processing words and meaning

These differences in temporal window size would have been impossible to see using fMRI, the researchers say.

“At the resolution of fMRI, we don’t see much heterogeneity within language-responsive regions. If you localize in individual participants the voxels in their brain that are most responsive to language, you find that their responses to sentences, word lists, jabberwocky sentences and non-word lists are highly similar,” Casto says.

The researchers were also able to determine the anatomical locations where these clusters were found. Neural populations with the shortest temporal window were found predominantly in the posterior temporal lobe, though some were also found in the frontal or anterior temporal lobes. Neural populations from the two other clusters, with longer temporal windows, were spread more evenly throughout the temporal and frontal lobes.

Fedorenko’s lab now plans to study whether these timescales correspond to different functions. One possibility is that the shortest timescale populations may be processing the meanings of a single word, while those with longer timescales interpret the meanings represented by multiple words.

“We already know that in the language network, there is sensitivity to how words go together and to the meanings of individual words,” Regev says. “So that could potentially map to what we’re finding, where the longest timescale is sensitive to things like syntax or relationships between words, and maybe the shortest timescale is more sensitive to features of single words or parts of them.”

The research was funded by the Zuckerman-CHE STEM Leadership Program, the Poitras Center for Psychiatric Disorders Research, the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, the U.S. National Institutes of Health, an American Epilepsy Society Research and Training Fellowship, the McDonnell Center for Systems Neuroscience, Fondazione Neurone, the McGovern Institute, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.


Pursuing the secrets of a stealthy parasite

By unraveling the genetic pathways that help Toxoplasma gondii persist in human cells, Sebastian Lourido hopes to find new ways to treat toxoplasmosis.


Toxoplasma gondii, the parasite that causes toxoplasmosis, is believed to infect as much as one-third of the world’s population. Many of those people have no symptoms, but the parasite can remain dormant for years and later reawaken to cause disease in anyone who becomes immunocompromised.

Why this single-celled parasite is so widespread, and what triggers it to reemerge, are questions that intrigue Sebastian Lourido, an associate professor of biology at MIT and member of the Whitehead Institute for Biomedical Research. His lab is unraveling the genetic pathways that help to keep the parasite in a dormant state, as well as the factors that lead it to burst free from that state.

“One of the missions of my lab is to improve our ability to manipulate the parasite genome, and to do that at a scale that allows us to ask questions about the functions of many genes, or even the entire genome, in a variety of contexts,” Lourido says.

There are drugs that can treat the acute symptoms of Toxoplasma infection, which include headache, fever, and inflammation of the heart and lungs. However, once the parasite enters the dormant stage, those drugs don’t affect it. Lourido hopes that his lab’s work will lead to potential new treatments for this stage, as well as drugs that could combat similar parasites, such as the tickborne parasite Babesia, which is becoming more common in New England.

“There are a lot of people who are affected by these parasites, and parasitology often doesn’t get the attention that it deserves at the highest levels of research. It’s really important to bring the latest scientific advances, the latest tools, and the latest concepts to the field of parasitology,” Lourido says.

A fascination with microbiology

As a child in Cali, Colombia, Lourido was enthralled by what he could see through the microscopes at his mother’s medical genetics lab at the University of Valle del Cauca. His father ran the family’s farm and also worked in government, at one point serving as interim governor of the state.

“From my mom, I was exposed to the ideas of gene expression and the influence of genetics on biology, and I think that really sparked an early interest in understanding biology at a fundamental level,” Lourido says. “On the other hand, my dad was in agriculture, and so there were other influences there around how the environment shapes biology.”

Lourido decided to go to college in the United States, in part because at the time, in the early 2000s, Colombia was experiencing a surge in violence. He was also drawn to the idea of attending a liberal arts college, where he could study both science and art. He ended up going to Tulane University, where he double-majored in fine arts and cell and molecular biology.

As an artist, Lourido focused on printmaking and painting. One area he especially enjoyed was stone lithography, which involves etching images on large blocks of limestone with oil-based inks, treating the images with chemicals, and then transferring the images onto paper using a large press.

“I ended up doing a lot of printmaking, which I think attracted me because it felt like a mode of expression that leveraged different techniques and technical elements,” he says.

At the same time, he worked in a biology lab that studied Daphnia, tiny crustaceans found in fresh water that have helped scientists learn about how organisms can develop new traits in response to changes to their environment. As an undergraduate, he helped develop ways to use viruses to introduce new genes into Daphnia. By the time he graduated from Tulane, Lourido had decided to go into science rather than art.

“I had really fallen in love with lab science as an undergrad. I loved the freedom and the creativity that came from it, the ability to work in teams and to build on ideas, to not have to completely reinvent the entire system, but really be able to develop it over a longer period of time,” he says.

After graduating from college, Lourido spent two years in Germany, working at the Max Planck Institute for Infection Biology. In Arturo Zychlinsky’s lab, Lourido studied two bacteria known as Shigella and Salmonella, which can cause severe illnesses, including diarrhea. His studies there helped to reveal how these bacteria get into cells and how they modify the host cells’ own pathways to help them replicate inside cells.

As a graduate student at Washington University in St. Louis, Lourido worked in several labs focusing on different aspects of microbiology, including virology and bacteriology, but eventually ended up working with David Sibley, a prominent researcher specializing in Toxoplasma.

“I had not thought much about Toxoplasma before going to graduate school,” Lourido recalls. “I was pretty unaware of parasitology in general, despite some undergrad courses, which honestly very superficially treated the subject. What I liked about it was here was a system where we knew so little — organisms that are so different from the textbook models of eukaryotic cells.”

Toxoplasma gondii belongs to a group of parasites known as apicomplexans — a group of protozoans that can cause a variety of diseases. After infecting a human host, Toxoplasma gondii can hide from the immune system for decades, usually in cysts found in the brain or muscles. Lourido found the organism especially intriguing because, as a 17-year-old, he had been diagnosed with toxoplasmosis. His only symptom was swollen glands, but doctors found that his blood contained antibodies against Toxoplasma.

“It is really fascinating that in all of these people, about a quarter to a third of the world’s population, the parasite persists. Chances are I still have live parasites somewhere in my body, and if I became immunocompromised, it would become a big problem. They would start replicating in an uncontrolled fashion,” he says.

A transformative approach

One of the challenges in studying Toxoplasma is that the organism’s genetics are very different from those of either bacteria or other eukaryotes such as yeast and mammals. That makes it harder to study parasitic gene functions by mutating or knocking out the genes.

Because of that difficulty, it took Lourido his entire graduate career to study the functions of just a couple of Toxoplasma genes. After finishing his PhD, he started his own lab as a fellow at the Whitehead Institute and began working on ways to study the Toxoplasma genome at a larger scale, using the CRISPR genome-editing technique.

With CRISPR, scientists can systematically knock out every gene in the genome and then study how each missing gene affects parasite function and survival.
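
In a typical pooled knockout screen, the readout is how the guide RNAs targeting each gene change in abundance between the start and end of the experiment: guides against genes the parasite needs tend to drop out of the population. The Python sketch below shows that generic scoring step with made-up counts and gene names; it is a simplified illustration, not the Lourido lab’s actual analysis pipeline.

import numpy as np
import pandas as pd

def gene_fitness_scores(counts: pd.DataFrame) -> pd.Series:
    """counts has one row per guide, with columns 'gene', 'initial', and 'final'."""
    # Normalize each sample to counts per million so time points are comparable.
    cpm = counts[["initial", "final"]].div(counts[["initial", "final"]].sum()) * 1e6
    # Per-guide log2 fold change, with a pseudocount to avoid division by zero.
    lfc = np.log2((cpm["final"] + 1) / (cpm["initial"] + 1))
    # Summarize each gene by the median of its guides; strongly negative scores suggest essential genes.
    return lfc.groupby(counts["gene"]).median().sort_values()

# Hypothetical toy data: guides against "geneX" drop out during growth.
toy = pd.DataFrame({
    "gene":    ["geneA", "geneA", "geneX", "geneX"],
    "initial": [500, 450, 480, 520],
    "final":   [490, 470, 40, 35],
})
print(gene_fitness_scores(toy))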

“Through the adaptation of CRISPR to Toxoplasma, we’ve been able to survey the entire parasite genome. That has been transformative,” says Lourido, who became a Whitehead member and MIT faculty member in 2017. “Since its original application in 2016, we’ve been able to uncover mechanisms of drug resistance and susceptibility, trace metabolic pathways, and explore many other aspects of parasite biology.”

Using CRISPR-based screens, Lourido’s lab has identified a regulatory gene called BFD1 that appears to drive the expression of genes that the parasite needs for long-term survival within a host. His lab has also revealed many of the molecular steps required for the parasite to shift between active and dormant states.

“We’re actively working to understand how environmental inputs end up guiding the parasite in one direction or another,” Lourido says. “They seem to preferentially go into those chronic stages in certain cells like neurons or muscle cells, and they proliferate more exuberantly in the acute phase when nutrient conditions are appropriate or when there are low levels of immunity in the host.”


Uphill battles: Across the country in 75 days

Amulya Aluru ’23, MEng ’24 and the MIT Spokes have spent the summer spreading science, over 3,000 miles on two wheels.


Amulya Aluru ’23, MEng ’24, will head to the University of California at Berkeley for a PhD in molecular and cell biology this fall. Aluru knows that her undergraduate Course 6-7 major and her MEng program, where she worked on a computational project in a biology lab, have prepared her for the next step of her academic journey.

“I’m a lot more comfortable with the unknown in terms of research — and also life,” she says. “While I’ve enjoyed what I’ve done so far, I think it’s equally valuable to try and explore new topics. I feel like there’s still a lot more for me to learn in biology.”

Unlike many of her peers, however, Aluru won’t reach the San Francisco Bay Area by car, plane, or train. She will arrive by bike — a journey she began in Washington just a few days after receiving her master’s degree.

Showing that science is accessible

Spokes is an MIT-based nonprofit that each year sends students on a transcontinental bike ride. Aluru worked for months with seven fellow MIT students on logistics and planning. Since setting out, the team has bonded over their love of memes and cycling-themed nicknames: Hank “Handlebar Hank” Stennes, Clelia “Climbing Cleo” Lacarriere, Varsha “Vroom Vroom Varsha” Sandadi, Rebecca “Railtrail Rebecca” Lizarde, JD “JDerailleur Hanger” Hagood, Sophia “Speedy Sophia” Wang, Amulya “Aero Amulya” Aluru, and Jessica “Joyride Jess” Xu. The support minivan, carrying food, luggage, and occasionally injured or sick cyclists, even earned its own nickname: “Chrissy”, short for Chrysler Pacifica.

“I really wanted to do something to challenge myself, but not in a strictly academic sense,” Aluru says of her decision to join the team and bike more than 3,000 miles this summer.

The Spokes team is not biking across the country solely to accomplish such a feat. Throughout their journey, they’ll be offering a variety of science demonstrations, including making concrete with Rice Krispies, demonstrating the physics of sound, using 3D printers, and, in Aluru’s case, extracting DNA from strawberries.   

“We’re going to be in a lot of really different learning environments,” she says. “I hope to demonstrate that science can be accessible, even if you don’t have a lab at your disposal.”

These demonstrations have been held in venues such as a D.C. jail, a space camp, and libraries and youth centers across the country; their learning festivals were even featured on a local news channel in Kentucky.

Some derailments

The team was beset with challenges from the first day they started their journey. Aluru’s first day on the road involved driving to every bike shop and REI store in the D.C. metro area to purchase bike computers for navigation because the ones the team had already purchased would only display maps of Europe.

Four days in and four Chrysler Pacificas later — the first was unsafe due to bald tires, the second made a weird sound as they pulled out of the rental lot, and the third’s gas pedal stopped working over 50 miles away from the nearest rental agency — the team was back together again in Waynesboro, Virginia, for the first time since they’d set out.

Since then, they’ve had run-ins with local fauna — including mean dogs and a meaner turtle — attempted to repair a tubeless bike that was not, in fact, tubeless, and slept in Chrissy the minivan after their tents got soaked and blew away.

Although it hasn’t all been smooth riding, the team has made time for fun. They’ve perfected the art of eating a Clif bar while on two wheels, played around on monkey bars in Colorado, met up with Stanford Spokes, enjoyed pounds of ice cream, and downed gallons of lattes.

The team prioritized routes on bike trails, rather than highways, as much as possible. Their teaching activities are scheduled between visits to national parks like Tahoe, Zion, Bryce Canyon, and Arches, and touring and hiking places like Breaks Interstate Park, Mammoth Cave, and the Collegiate Peaks.

Aluru says she’s excited to see parts of the country she’s never visited before, and experience the terrain under her own power — except for breaks when it’s her turn to drive Chrissy.

Rolling with the ups and downs

Aluru was only a few weeks into her first Undergraduate Research Opportunities Program project in the late professor Angelika Amon’s lab when the Covid-19 pandemic hit, quickly transforming her wet lab project into a computational one. David Waterman, her postdoc mentor in the Amon Lab, was trained as a biologist, not a computational scientist. Luckily, Aluru had just taken two computer science classes.

“I was able to have a big hand in formulating my project and bouncing ideas off of him,” she recalls. “That helped me think about scientific questions, which I was able to apply when I came back to campus and started doing wet lab research again.”

When Aluru returned to campus, she began work in the Page Lab at the Whitehead Institute for Biomedical Research. She continued working there for the rest of her time at MIT, first as an undergraduate student and then as an MEng student.

The Page Lab’s work primarily concerns sex differences and how those differences play out in genetics, development, and disease — and the Department of Electrical Engineering and Computer Science, which oversees the MEng program, allows students to pursue computational projects across disciplines, no matter the department.

For her MEng work, Aluru looked at sex differences in human height, a continuation of a paper that the Page Lab published in 2019. Height is an easily observable human trait and, from previous research, is known to be sex-biased across at least five species. Genes that have sex-biased expression patterns, or expression patterns that are higher or lower in males compared to females, may play a role in establishing or maintaining these sex differences. Through statistical genetics, Aluru replicated the findings of the earlier paper and expanded them using newly published datasets.

“Amulya has had an amazing journey in our department,” says David Page, professor of biology and core member of the Whitehead Institute. “There is simply no stopping her insatiable curiosity and zest for life.”

Working with the lab as a graduate student came with more day-to-day responsibility and independence than when she was an undergrad.

“It was a shift I quite appreciated,” Aluru says. “At times it was challenging, but I think it was a good challenge: learning how to structure my research on my own, while still getting a lot of support from lab members and my PI [principal investigator].”

Gearing up for the future

Since departing MIT, Aluru and the rest of the Spokes team have spent their nights camping, sleeping in churches, and staying with hosts. They enjoyed the longest day of the year in a surprisingly “Brooklyn chic” house, spent a lazy afternoon on a river, and pinky-promised to be in each other’s weddings. The team has also been hosted by, met up with, and run into MIT alums as they’ve crossed the country.

As Aluru looks to the future, she admits she’s not exactly sure what she’ll study — but when she reaches the West Coast, she knows she’s not leaving what she’s built through MIT far behind.

“There’s going to be a small MIT community even there — a lot of my friends are in San Francisco, and a few people I know are also going to be at Berkeley,” she says. “I have formed a community at MIT that I know will support me in all my future endeavors.”


Study reveals the benefits and downside of fasting

Fasting helps intestinal stem cells regenerate and heal injuries but also leads to a higher risk of cancer in mice, MIT researchers report.


Low-calorie diets and intermittent fasting have been shown to have numerous health benefits: They can delay the onset of some age-related diseases and lengthen lifespan, not only in humans but many other organisms.

Many complex mechanisms underlie this phenomenon. Previous work from MIT has shown that one way fasting exerts its beneficial effects is by boosting the regenerative abilities of intestinal stem cells, which helps the intestine recover from injuries or inflammation.

In a study of mice, MIT researchers have now identified the pathway that enables this enhanced regeneration, which is activated once the mice begin “refeeding” after the fast. They also found a downside to this regeneration: When cancerous mutations occurred during the regenerative period, the mice were more likely to develop early-stage intestinal tumors.

“Having more stem cell activity is good for regeneration, but too much of a good thing over time can have less favorable consequences,” says Omer Yilmaz, an MIT associate professor of biology, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the new study.

Yilmaz adds that further studies are needed before forming any conclusion as to whether fasting has a similar effect in humans.

“We still have a lot to learn, but it is interesting that being in either the state of fasting or refeeding when exposure to mutagen occurs can have a profound impact on the likelihood of developing a cancer in these well-defined mouse models,” he says.

MIT postdocs Shinya Imada and Saleh Khawaled are the lead authors of the paper, which appears today in Nature.

Driving regeneration

For several years, Yilmaz’s lab has been investigating how fasting and low-calorie diets affect intestinal health. In a 2018 study, his team reported that during a fast, intestinal stem cells begin to use lipids as an energy source, instead of carbohydrates. They also showed that fasting led to a significant boost in stem cells’ regenerative ability.

However, unanswered questions remained: How does fasting trigger this boost in regenerative ability, and when does the regeneration begin?

“Since that paper, we’ve really been focused on understanding what is it about fasting that drives regeneration,” Yilmaz says. “Is it fasting itself that’s driving regeneration, or eating after the fast?”

In their new study, the researchers found that stem cell regeneration is suppressed during fasting but then surges during the refeeding period. The researchers followed three groups of mice — one that fasted for 24 hours, another that fasted for 24 hours and then was allowed to eat freely during a 24-hour refeeding period, and a control group that ate freely throughout the experiment.

The researchers analyzed intestinal stem cells’ ability to proliferate at different time points and found that the stem cells showed the highest levels of proliferation at the end of the 24-hour refeeding period. These cells were also more proliferative than intestinal stem cells from mice that had not fasted at all.

“We think that fasting and refeeding represent two distinct states,” Imada says. “In the fasted state, the ability of cells to use lipids and fatty acids as an energy source enables them to survive when nutrients are low. And then it’s the postfast refeeding state that really drives the regeneration. When nutrients become available, these stem cells and progenitor cells activate programs that enable them to build cellular mass and repopulate the intestinal lining.”

Further studies revealed that these cells activate a cellular signaling pathway known as mTOR, which is involved in cell growth and metabolism. One of mTOR’s roles is to regulate the translation of messenger RNA into protein, so when it’s activated, cells produce more protein. This protein synthesis is essential for stem cells to proliferate.

The researchers showed that mTOR activation in these stem cells also led to production of large quantities of polyamines — small molecules that help cells to grow and divide.

“In the refed state, you’ve got more proliferation, and you need to build cellular mass. That requires more protein, to build new cells, and those stem cells go on to build more differentiated cells or specialized intestinal cell types that line the intestine,” Khawaled says.

Too much of a good thing

The researchers also found that when stem cells are in this highly regenerative state, they are more prone to become cancerous. Intestinal stem cells are among the most actively dividing cells in the body, as they help the lining of the intestine completely turn over every five to 10 days. Because they divide so frequently, these stem cells are the most common source of precancerous cells in the intestine.

In this study, the researchers discovered that if they turned on a cancer-causing gene in the mice during the refeeding stage, they were much more likely to develop precancerous polyps than if the gene was turned on during the fasting state. Cancer-linked mutations that occurred during the refeeding state were also much more likely to produce polyps than mutations that occurred in mice that did not undergo the cycle of fasting and refeeding.

“I want to emphasize that this was all done in mice, using very well-defined cancer mutations. In humans it’s going to be a much more complex state,” Yilmaz says. “But it does lead us to the following notion: Fasting is very healthy, but if you’re unlucky and you’re refeeding after a fasting, and you get exposed to a mutagen, like a charred steak or something, you might actually be increasing your chances of developing a lesion that can go on to give rise to cancer.”

Yilmaz also noted that the regenerative benefits of fasting could be significant for people who undergo radiation treatment, which can damage the intestinal lining, or other types of intestinal injury. His lab is now studying whether polyamine supplements could help to stimulate this kind of regeneration, without the need to fast.

“This fascinating study provides insights into the complex interplay between food consumption, stem cell biology, and cancer risk,” says Ophir Klein, a professor of medicine at the University of California at San Francisco and Cedars-Sinai Medical Center, who was not involved in the study. “Their work lays a foundation for testing polyamines as compounds that may augment intestinal repair after injuries, and it suggests that careful consideration is needed when planning diet-based strategies for regeneration to avoid increasing cancer risk.”

The research was funded, in part, by a Pew-Stewart Scholars Program for Cancer Research award, the MIT Stem Cell Initiative, the Koch Institute Frontier Research Program via the Kathy and Curt Marble Cancer Research Fund, and the Bridge Project, a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center.


MIT study explains why laws are written in an incomprehensible style

The convoluted “legalese” used in legal documents conveys a special sense of authority, and even non-lawyers have learned to wield it.


Legal documents are notoriously difficult to understand, even for lawyers. This raises the question: Why are these documents written in a style that makes them so impenetrable?

MIT cognitive scientists believe they have uncovered the answer to that question. Just as “magic spells” use special rhymes and archaic terms to signal their power, the convoluted language of legalese acts to convey a sense of authority, they conclude.

In a study appearing this week in the Proceedings of the National Academy of Sciences, the researchers found that even non-lawyers use this type of language when asked to write laws.

“People seem to understand that there’s an implicit rule that this is how laws should sound, and they write them that way,” says Edward Gibson, an MIT professor of brain and cognitive sciences and the senior author of the study.

Eric Martinez PhD ’24 is the lead author of the study. Francis Mollica, a lecturer at the University of Melbourne, is also an author of the paper.

Casting a legal spell

Gibson’s research group has been studying the unique characteristics of legalese since 2020, when Martinez came to MIT after earning a law degree from Harvard Law School. In a 2022 study, Gibson, Martinez, and Mollica analyzed legal contracts totaling about 3.5 million words, comparing them with other types of writing, including movie scripts, newspaper articles, and academic papers.

That analysis revealed that legal documents frequently have long definitions inserted in the middle of sentences — a feature known as “center-embedding.” Linguists have previously found that this kind of structure can make text much more difficult to understand.

“Legalese somehow has developed this tendency to put structures inside other structures, in a way which is not typical of human languages,” Gibson says.

In a follow-up study published in 2023, the researchers found that legalese also makes documents more difficult for lawyers to understand. Lawyers tended to prefer plain English versions of documents, and they rated those versions to be just as enforceable as traditional legal documents.

“Lawyers also find legalese to be unwieldy and complicated,” Gibson says. “Lawyers don’t like it, laypeople don’t like it, so the point of this current paper was to try and figure out why they write documents this way.”

The researchers had a couple of hypotheses for why legalese is so prevalent. One was the “copy and edit hypothesis,” which suggests that legal documents begin with a simple premise, and then additional information and definitions are inserted into already existing sentences, creating complex center-embedded clauses.

“We thought it was plausible that what happens is you start with an initial draft that’s simple, and then later you think of all these other conditions that you want to include. And the idea is that once you’ve started, it’s much easier to center-embed that into the existing provision,” says Martinez, who is now a fellow and instructor at the University of Chicago Law School.

However, the findings ended up pointing toward a different hypothesis, the so-called “magic spell hypothesis.” Just as magic spells are written with a distinctive style that sets them apart from everyday language, the convoluted style of legal language appears to signal a special kind of authority, the researchers say.

“In English culture, if you want to write something that’s a magic spell, people know that the way to do that is you put a lot of old-fashioned rhymes in there. We think maybe center-embedding is signaling legalese in the same way,” Gibson says.

In this study, the researchers asked about 200 non-lawyers (native speakers of English living in the United States, who were recruited through a crowdsourcing site called Prolific) to write two types of texts. In the first task, people were told to write laws prohibiting crimes such as drunk driving, burglary, arson, and drug trafficking. In the second task, they were asked to write stories about those crimes.

To test the copy and edit hypothesis, half of the participants were asked to add additional information after they wrote their initial law or story. The researchers found that all of the subjects wrote laws with center-embedded clauses, regardless of whether they wrote the law all at once or were told to write a draft and then add to it later. And, when they wrote stories related to those laws, they wrote in much plainer English, regardless of whether they had to add information later.

“When writing laws, they did a lot of center-embedding regardless of whether or not they had to edit it or write it from scratch. And in that narrative text, they did not use center-embedding in either case,” Martinez says.

In another set of experiments, about 80 participants were asked to write laws, as well as descriptions that would explain those laws to visitors from another country. In these experiments, participants again used center-embedding for their laws, but not for the descriptions of those laws.

The origins of legalese

Gibson’s lab is now investigating the origins of center-embedding in legal documents. Early American laws were based on British law, so the researchers plan to analyze British laws to see if they feature the same kind of grammatical construction. And going back much farther, they plan to analyze whether center-embedding is found in the Hammurabi Code, one of the earliest known sets of laws, which dates to around 1750 BC.

“There may be just a stylistic way of writing from back then, and if it was seen as successful, people would use that style in other languages,” Gibson says. “I would guess that it’s an accidental property of how the laws were written the first time, but we don’t know that yet.”

The researchers hope that their work, which has identified specific aspects of legal language that make it more difficult to understand, will motivate lawmakers to try to make laws more comprehensible. Efforts to write legal documents in plainer language date to at least the 1970s, when President Richard Nixon declared that federal regulations should be written in “layman’s terms.” However, legal language has changed very little since that time.

“We have learned only very recently what it is that makes legal language so complicated, and therefore I am optimistic about being able to change it,” Gibson says. 


When the lights turned on in the universe

By studying ancient, supermassive black holes called quasars, Dominika Ďurovčíková is illuminating an early moment when galaxies could first be observed.


Watching crowds of people hustle along Massachusetts Avenue from her window seat in MIT’s student center, Dominika Ďurovčíková has just one wish.

“What I would really like to do is convince a city to shut down their lights completely, apart from hospitals or whatever else needs them, just for an hour,” she says. “Let people see the Milky Way, or the stars. It influences you. You realize there’s something more than your everyday struggles.”

Even with a lifetime of gazing into the cosmos under her belt — with the last few years spent pursuing a PhD with professors Anna-Christina Eilers and Robert Simcoe at MIT’s Kavli Institute for Astrophysics and Space Research — she still believes in the power of looking up at the night sky with the naked eye.

Most of the time, however, she’s using tools a lot more powerful than that. The James Webb Space Telescope has begun providing rich data from bodies at the very edge of the universe, exactly where she wants to be looking. With data from the JWST and the ground-based Magellan telescopes in Chile, Ďurovčíková is on the hunt for distant quasars — ancient, supermassive black holes that emit intense amounts of light — and the farther away they are, the more information they provide about the very early universe.

“These objects are really, really bright, and that means that they’re really useful for studying the universe from very far away,” she says. “They’re like beacons from the past that you can still see, and they can tell you something about the universe at that stage. It’s almost like archaeology.”

Her recent research has focused on what’s known as the Epoch of Reionization. It’s the period when the radiation from quasars, stars, galaxies, and other light-emitting bodies was able to penetrate the dark clouds of hydrogen atoms left over from the Big Bang and shine through space.

“Reionization was a phase transition where all the stuff around galaxies suddenly became transparent,” she says. “Finally, we could see light that was otherwise absorbed by neutral hydrogen.”

One of her goals is to help discover what caused the reionization process to start in the first place. While the astrophysical community has determined a loose time frame, there are many unanswered questions surrounding the Epoch of Reionization, and she hopes her quasar research can help solve some of them.

“The grand hope is that if you know the timing of reionization, that can inform you about the sources that caused it in the first place,” she says. “We’re not quite there, but looking at quasars could be a way to do it.”

Time and distance on a cosmic scale

The quasars that Ďurovčíková has been most interested in are classified as “high-redshift.” Redshift is a measure of how much a wave’s frequency has decreased, and in an astrophysical context, it can be used to determine how long a wave of light has been traveling and how far away its source is, while accounting for the expansion of the universe.

“The higher the redshift, the closer to the beginning of the universe you get,” Ďurovčíková explains.
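As general background (not a calculation from this article), redshift can be expressed as the fractional decrease in a light wave's frequency between emission and observation. The short sketch below illustrates that standard definition; the observed frequency used in the example is a hypothetical value chosen to land near redshift 6.

# Illustrative sketch only: the standard definition of redshift, z, as the
# fractional decrease in a light wave's frequency. The observed frequency is a
# hypothetical example, not a measurement from this work.
def redshift(freq_emitted_hz, freq_observed_hz):
    """z = f_emitted / f_observed - 1; larger z means the light left its source
    earlier in the universe's history."""
    return freq_emitted_hz / freq_observed_hz - 1.0

# Lyman-alpha light is emitted at about 2.47e15 Hz; if it arrives at about
# 3.53e14 Hz, the source sits near redshift 6, around the end of reionization.
print(round(redshift(2.47e15, 3.53e14), 1))  # prints 6.0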

Research has shown that reionization began roughly 150 million years after the Big Bang, and approximately 850 million years after that, the dark hydrogen clouds that made up the “intergalactic medium,” or IGM, were fully ionized.

For her most recent paper, Ďurovčíková examined a set of 18 quasars whose light began traveling between approximately 770 million and 950 million years after the Big Bang. She and her collaborators, including scientists from four different countries, sorted the quasars into three “bins” based on distance, to compare the amount of neutral hydrogen in the IGM at different epochs. These amounts helped refine the timing of reionization and confirmed that data from quasars are consistent with data from other types of bodies.
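To make the binning step concrete, the sketch below sorts a handful of quasars into three redshift bins. It is only an illustration of the sorting described above; the redshift values and bin edges are hypothetical placeholders, not the sample used in the paper.

import numpy as np

# Illustrative sketch only: grouping quasars into three distance (redshift) bins
# so different epochs can be compared. All values here are hypothetical.
quasar_redshifts = np.array([5.8, 6.0, 6.3, 6.1, 5.9, 6.5, 6.4, 6.2])
bin_edges = np.array([6.0, 6.3])                 # two edges split the sample into three bins
bin_index = np.digitize(quasar_redshifts, bin_edges)
for i in range(3):
    members = quasar_redshifts[bin_index == i]
    print(f"Bin {i + 1}: {members.tolist()}")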

“The story we have so far,” Ďurovčíková says, “is that at some point by redshift 5 or 6, the stuff in between galaxies was overall ionized. However, it’s not clear what type of star or what type of galaxy is more responsible for this global phase transition, which affected the whole universe.”

A closely related facet of her research — and one she’s planning to explore further as she composes her thesis — is how these quasars came to be in the first place. They’re so old, and so massive, that they challenge the current conceptions of how old the universe is. The light they generate comes from the immense gravitational force they exert on the plasma they absorb, and if they were already large enough to do that billions of years ago, just how long ago did they start forming?

“These black holes seem to be too massive to be grown in the time that their spectra seem to indicate,” she says. “Is there something in our way that’s obscuring the rest of the growth? We’re looking at different methods to measure their lifetime.”

Eyes towards the stars, feet grounded on Earth

In the meantime, Ďurovčíková is also working to encourage the next generation of astrophysicists. She says she was fortunate to have encouraging parents and mentors who showed her academic and career paths she hadn’t even considered, and she co-founded a nonprofit organization called Encouraging Women Across All Borders to do the same for students across the globe.

“In your life, you will see a lot of doors,” she says. “There’s doors that you’ll see are open, and there’s doors you’ll see are closed. The biggest tragedy, though, is that there are so many doors that you don’t even know exist.”

She knows the feeling all too well. Growing up in Slovakia meant the primary options were attending university in either Bratislava, the capital, or Prague, in the neighboring Czech Republic. Her love of math and physics inspired her to enroll in the International Baccalaureate program, however, and it was in that program that she met a teacher named Eva Žitná, who “planted the seeds” that eventually sent her to Oxford for a four-year master’s program.

“Just being in the IB program environment started to open up these possibilities I had not considered before,” she says. “Both my parents and I started talking to Žitná about how this could be an interesting possibility, and somehow one thing led to another.”

While she takes great pleasure in guiding students along the same path she once took, equally as rewarding for her are the moments when she can see people realizing just how big the universe is. As a co-director of the MIT Astrogazers, she has witnessed many such moments. She remembers handing out eclipse glasses at the Cambridge Science Festival in preparation for last October’s partial solar eclipse, and recalls kids and adults alike with their necks craned upward, sharing the same look of wonder on their faces.

“The reason I care is because we all get caught up in small things in life very easily,” she says. “The traffic sucks. The T isn’t working. Then, you look up at the sky and you realize there’s something much more beautiful and much bigger than all these little things.”


Building bidirectional bridges

MIT’s Office of Graduate Education hosts Summit on Creating Inclusive Pathways to the PhD.


In June 2023, after the U.S. Supreme Court ruled that colleges and universities could no longer use race as a factor in their admission decisions, many higher education institutions across the United States faced the same challenge: how to maintain diversity in their student bodies. So Noelle Wakefield, director of MIT’s Summer Research Program (MSRP) and assistant dean for diversity initiatives in MIT’s Office of Graduate Education (OGE), started planning.

On July 31, a little more than a year after the decision was released, the OGE hosted the inaugural Inclusive Pathways to the PhD Summit, which brought representatives from nearly 20 minority-serving institutions (MSIs), including several historically Black colleges and universities (HBCUs), to Cambridge, Massachusetts, to meet with MIT administrators, faculty, and doctoral students. The admission question — how to continue attracting a diverse cohort of graduate students with the new legal restrictions? — was only the first of many that framed a broader and more complex picture.

“What are fresh ways for us to find talent in places that aren’t typically represented at MIT?” Wakefield asks. “How can we form partnerships with institutions that aren’t already part of our ecosystem? What is the formula for partnerships where both institutions benefit and feel good about the work that is happening?”

These aren’t new outreach questions for MIT, Wakefield says, but the changing admissions landscape sparked a need for the Institute to “be more thoughtful.”

And a need to clear up misperceptions, adds Denzil Streete, senior associate dean and director of the OGE. “MIT faculty may have outdated perspectives about HBCUs and MSIs,” he says. “And our visitors may be relying on historical knowledge of MIT that is largely negative” when it comes to attracting graduate applications from smaller, lesser-known colleges and universities. The summit was designed to be a first step in demystifying these assumptions and in establishing “a common platform and a shared understanding for moving forward,” Streete says.

For decades, the OGE has focused its HBCU and MSI outreach efforts on student recruitment, but the summit signals a broadening of that approach to include faculty and staff mentors — the people Wakefield describes as “levers for decision-making” among prospective graduate students. Streete says, “If we at MIT say we attract the best and brightest in the world and we don’t include these institutions, then our supposition comes into question.”

The summit agenda included information sessions about navigating the MIT graduate admission process and finding research opportunities for undergraduates, as well as conversations with current MIT doctoral students who’d graduated from the MSIs represented at the summit. There was a campus tour, a poster session by students in the MIT Summer Research Program, and a panel discussion on forming reciprocal relationships with HBCUs and MSIs, featuring visitors from Spelman College, Prairie View A&M University, and the University of Puerto Rico, among others.

That discussion resonated with visitor Gwendolyn Scott-Jones, dean of the Wesley College of Health and Behavioral Sciences at Delaware State University. “It felt like an authentic discussion about the disparities and lack of equal resources that HBCUs historically contend with compared to predominantly white institutions,” she observes. “HBCUs have been known to do more with less and have produced very talented professionals, and I believe MIT is trying to provide HBCUs with access and opportunity.”

One of the summit’s goals was to begin ensuring that this access and opportunity would be “bidirectional” — going both ways between an institution like MIT and an HBCU like Lincoln University in Pennsylvania, where Christina Chisholm, one of the panelists, did her undergraduate work. Collaborations “aren’t spaces in which you’re just throwing money at something to fix it, or to bridge a gap,” says Chisholm, a biophysicist who’s now director of the McNair Scholars Program and Thrive Student Support Services at Rutgers University.

Instead, she advised, focus on cooperation, coordination, and positive mentorship. Tiffany Oliver, a biology professor at Spelman, recalled a potential student-research project she was exploring with a partner at a larger institution who would host her students in his lab. “His attitude was, ‘We have the money so we’re going to tell you what you need to do,’” she recalls. “That’s a reflection of how you’re going to treat my students, and I would rather send my students to some other place if the people show that they care. I want my students to leave school still loving science, not tarnished by science.”

Another piece of advice came from Kareem McLemore, assistant vice president of strategic enrollment management at Delaware State. “When you’re partnering with us, the first thing we’re going to ask is, ‘Are you doing this to check a box?’” he says. “If it’s a checkbox, we don’t want it. We want to know what the objectives are, the key goals, the KPIs [key performance indicators]. You may have the money, but think about the resources we have as HBCUs that can help you raise your brand. We have to ride the wave together.”

The summit served as a starting point: a way to build trust among institutions with different histories and resources, and to stimulate ideas for future partnerships, whether that means a joint research project, a shared curriculum, or a faculty exchange.

“We all understand that talent is everywhere but opportunity is not distributed in the same manner,” says Bryan Thomas Jr., assistant dean for diversity, equity, and inclusion at the MIT Sloan School of Management and a co-organizer of the event. Broadening MIT’s networks through the Inclusive Pathways Summit means “expanding our ecosystem of opportunity, collaboration, and adding new ways of solving problems,” he says. “And that ultimately benefits all of us.”


Study: Rocks from Mars’ Jezero Crater, which likely predate life on Earth, contain signs of water

The presence of organic matter is inconclusive, but the rocks could be scientists’ best chance at finding remnants of ancient Martian life.


In a new study appearing today in the journal AGU Advances, scientists at MIT and NASA report that seven rock samples collected along the “fan front” of Mars’ Jezero Crater contain minerals that are typically formed in water. The findings suggest that the rocks were originally deposited by water, or may have formed in the presence of water.

The seven samples were collected by NASA’s Perseverance rover in 2022 during its exploration of the crater’s western slope, where some rocks were hypothesized to have formed in what is now a dried-up ancient lake. Members of the Perseverance science team, including MIT scientists, have studied the rover’s images and chemical analyses of the samples, and confirmed that the rocks indeed contain signs of water, and that the crater was likely once a watery, habitable environment.

Whether the crater was actually inhabited is yet unknown. The team found that the presence of organic matter — the starting material for life — cannot be confirmed, at least based on the rover’s measurements. But judging from the rocks’ mineral content, scientists believe the samples are their best chance of finding signs of ancient Martian life once the rocks are returned to Earth for more detailed analysis.

“These rocks confirm the presence, at least temporarily, of habitable environments on Mars,” says the study’s lead author, Tanja Bosak, professor of geobiology in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “What we’ve found is that indeed there was a lot of water activity. For how long, we don’t know, but certainly for long enough to create these big sedimentary deposits.”

What’s more, some of the collected samples may have originally been deposited in the ancient lake more than 3.5 billion years ago — before even the first signs of life on Earth.

“These are the oldest rocks that may have been deposited by water, that we’ve ever laid hands or rover arms on,” says co-author Benjamin Weiss, the Robert R. Shrock Professor of Earth and Planetary Sciences at MIT. “That’s exciting, because it means these are the most promising rocks that may have preserved fossils, and signatures of life.”

The study’s MIT co-authors include postdoc Eva Scheller and research scientist Elias Mansbach, along with members of the Perseverance science team.

At the front

The new rock samples were collected in 2022 as part of the rover’s Fan Front Campaign — an exploratory phase during which Perseverance traversed Jezero Crater’s western slope, where a fan-like region contains sedimentary, layered rocks. Scientists suspect that this “fan front” is an ancient delta that was created by sediment that flowed with a river and settled into a now bone-dry lakebed. If life existed on Mars, scientists believe that it could be preserved in the layers of sediment along the fan front.

In the end, Perseverance collected seven samples from various locations along the fan front. The rover obtained each sample by drilling into the Martian bedrock and extracting a pencil-sized core, which it then sealed in a tube to one day be retrieved and returned to Earth for detailed analysis.

Prior to extracting the cores, the rover took images of the surrounding sediments at each of the seven locations. The science team then processed the imaging data to estimate a sediment’s average grain size and mineral composition. This analysis showed that all seven collected samples likely contain signs of water, suggesting that they were initially deposited by water.

Specifically, Bosak and her colleagues found evidence of certain minerals in the sediments that are known to precipitate out of water.

“We found lots of minerals like carbonates, which are what make reefs on Earth,” Bosak says. “And it’s really an ideal material that can preserve fossils of microbial life.”

Interestingly, the researchers also identified sulfates in some samples that were collected at the base of the fan front. Sulfates are minerals that form in very salty water — another sign that water was present in the crater at one time — though very salty water, Bosak notes, “is not necessarily the best thing for life.” If the entire crater was once filled with very salty water, then it would be difficult for any form of life to thrive. But if only the bottom of the lake were briny, that could be an advantage, at least for preserving signs of any life that lived higher up, in less salty layers, before dying and drifting down to the bottom.

“However salty it was, if there were any organics present, it's like pickling something in salt,” Bosak says. “If there was life that fell into the salty layer, it would be very well-preserved.”

Fuzzy fingerprints

But the team emphasizes that organic matter has not been confidently detected by the rover’s instruments. Organic matter can be a sign of life, but it can also be produced by certain geological processes that have nothing to do with living matter. Perseverance’s predecessor, the Curiosity rover, detected organic matter throughout Mars’ Gale Crater, which scientists suspect may have come from asteroids that struck Mars in the past.

And in a previous campaign, Perseverance detected what appeared to be organic molecules at multiple locations along Jezero Crater’s floor. These observations were taken by the rover’s Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals (SHERLOC) instrument, which uses ultraviolet light to scan the Martian surface. If organics are present, they can glow, similar to material under a blacklight. The wavelengths at which the material glows act as a sort of fingerprint for the kind of organic molecules that are present.

In Perseverance’s previous exploration of the crater floor, SHERLOC appeared to pick up signs of organic molecules throughout the region, and later, at some locations along the fan front. But a careful analysis, led by MIT’s Eva Scheller, has found that while the particular wavelengths observed could be signs of organic matter, they could just as well be signatures of substances that have nothing to do with organic matter.

“It turns out that cerium metals incorporated in minerals actually produce very similar signals as the organic matter,” Scheller says. “When investigated, the potential organic signals were strongly correlated with phosphate minerals, which always contain some cerium.”

Scheller’s work shows that the rover’s measurements cannot be interpreted definitively as organic matter.
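The kind of check Scheller describes can be thought of, in very simplified form, as testing whether a suspected organic signal tracks a mineral phase instead. The sketch below only illustrates that idea, not the team's actual SHERLOC analysis, and every number in it is hypothetical.

import numpy as np

# Simplified illustration: if a fluorescence signal rises and falls with phosphate
# abundance across targets, the signal may be hosted by the mineral (via cerium)
# rather than by organic matter. All values are hypothetical, not SHERLOC data.
fluorescence = np.array([0.2, 0.5, 0.9, 0.4, 0.7])    # hypothetical signal strengths
phosphate    = np.array([0.1, 0.4, 0.95, 0.35, 0.6])  # hypothetical phosphate abundances

r = np.corrcoef(fluorescence, phosphate)[0, 1]
print(f"Pearson r = {r:.2f}")  # a strong correlation argues against a purely organic origin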

“This is not bad news,” Bosak says. “It just tells us there is not very abundant organic matter. It’s still possible that it’s there. It’s just below the rover’s detection limit.”

When the collected samples are finally sent back to Earth, Bosak says laboratory instruments will have more than enough sensitivity to detect any organic matter that might lie within.

“On Earth, once we have microscopes with nanometer-scale resolution, and various types of instruments that we cannot staff on one rover, then we can actually attempt to look for life,” she says.

This work was supported, in part, by NASA.


Study reveals ways in which 40Hz sensory stimulation may preserve brain’s “white matter”

Gamma frequency light and sound stimulation preserves myelination in mouse models and reveals molecular mechanisms that may underlie the benefit.


Early-stage trials in Alzheimer’s disease patients and studies in mouse models of the disease have suggested positive impacts on pathology and symptoms from exposure to light and sound presented at the “gamma” band frequency of 40 hertz (Hz). A new study zeroes in on how 40Hz sensory stimulation helps to sustain an essential process in which the signal-sending branches of neurons, called axons, are wrapped in a fatty insulation called myelin. Often called the brain’s “white matter,” myelin protects axons and ensures better electrical signal transmission in brain circuits.
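For readers unfamiliar with what a 40Hz stimulus looks like in practice, the sketch below generates a simple 40-cycles-per-second on/off light schedule. It is only an illustration of the frequency involved; the sample rate, duty cycle, and duration are assumptions for illustration, not a description of the study's apparatus.

import numpy as np

# Illustrative sketch only: a 40 Hz square-wave on/off schedule of the kind used
# for gamma-frequency light flicker. Parameters are assumptions, not the
# stimulation hardware used in the study.
def gamma_flicker(freq_hz=40, duration_s=1.0, sample_rate_hz=10_000, duty=0.5):
    t = np.arange(0, duration_s, 1.0 / sample_rate_hz)
    phase = (t * freq_hz) % 1.0            # position within each cycle, from 0 to 1
    return t, (phase < duty).astype(int)   # 1 = light on, 0 = light off

t, light = gamma_flicker()
print(f"{int(light.sum())} 'on' samples out of {light.size}: a 50% duty cycle at 40 cycles per second")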

“Previous publications from our lab have mainly focused on neuronal protection,” says Li-Huei Tsai, Picower Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT and senior author of the new open-access study in Nature Communications. Tsai also leads MIT’s Aging Brain Initiative. “But this study shows that it’s not just the gray matter, but also the white matter that’s protected by this method.”

This year Cognito Therapeutics, the spinoff company that licensed MIT’s sensory stimulation technology, published phase II human trial results in the Journal of Alzheimer’s Disease indicating that 40Hz light and sound stimulation significantly slowed the loss of myelin in volunteers with Alzheimer’s. Also this year, Tsai’s lab published a study showing that gamma sensory stimulation helped mice withstand neurological effects of chemotherapy medicines, including by preserving myelin. In the new study, members of Tsai’s lab led by former postdoc Daniela Rodrigues Amorim used a common mouse model of myelin loss — a diet with the chemical cuprizone — to explore how sensory stimulation preserves myelination.

Amorim and Tsai’s team found that 40Hz light and sound not only preserved myelination in the brains of cuprizone-exposed mice, it also appeared to protect oligodendrocytes (the cells that myelinate neural axons), sustain the electrical performance of neurons, and preserve a key marker of axon structural integrity. When the team looked into the molecular underpinnings of these benefits, they found clear signs of specific mechanisms including preservation of neural circuit connections called synapses; a reduction in a cause of oligodendrocyte death called “ferroptosis;” reduced inflammation; and an increase in the ability of microglia brain cells to clean up myelin damage so that new myelin could be restored.

“Gamma stimulation promotes a healthy environment,” says Amorim, who is now a Marie Curie Fellow at the University of Galway in Ireland. “There are several ways we are seeing different effects.”

The findings suggest that gamma sensory stimulation may help not only Alzheimer’s disease patients but also people battling other diseases involving myelin loss, such as multiple sclerosis, the authors wrote in the study.

Maintaining myelin

To conduct the study, Tsai and Amorim’s team fed some male mice a diet with cuprizone and gave other male mice a normal diet for six weeks. Halfway into that period, when cuprizone is known to begin causing its most acute effects on myelination, they exposed some mice from each group to gamma sensory stimulation for the remaining three weeks. In this way they had four groups: completely unaffected mice, mice that received no cuprizone but did get gamma stimulation, mice that received cuprizone and constant (but not 40Hz) light and sound as a control, and mice that received cuprizone and also gamma stimulation.

After the six weeks elapsed, the scientists measured signs of myelination throughout the brains of the mice in each group. Mice that weren’t fed cuprizone maintained healthy levels, as expected. Mice that were fed cuprizone and didn’t receive 40Hz gamma sensory stimulation showed drastic levels of myelin loss. Cuprizone-fed mice that received 40Hz stimulation retained significantly more myelin, rivaling the health of mice never fed cuprizone by some, but not all, measures.

The researchers also looked at numbers of oligodendrocytes to see if they survived better with sensory stimulation. Several measures revealed that in mice fed cuprizone, oligodendrocytes in the corpus callosum region of the brain (a key point for the transit of neural signals because it connects the brain’s hemispheres) were markedly reduced. But in mice fed cuprizone and also treated with gamma stimulation, the number of cells was much closer to healthy levels.

Electrophysiological tests of neural axons in the corpus callosum showed that cuprizone-fed mice that received gamma sensory stimulation had better electrical performance than cuprizone-fed mice that went untreated by 40Hz stimulation. And when the researchers looked in the anterior cingulate cortex region of the brain, they saw that MAP2, a protein that signals the structural integrity of axons, was much better preserved in mice that received cuprizone and gamma stimulation than in cuprizone-fed mice that did not.

A key goal of the study was to identify possible ways in which 40Hz sensory stimulation may protect myelin.

To find out, the researchers conducted a sweeping assessment of protein expression in each mouse group and identified which proteins were differentially expressed based on cuprizone diet and exposure to gamma frequency stimulation. The analysis revealed distinct sets of effects between the cuprizone mice exposed to control stimulation and cuprizone-plus-gamma mice.
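As a toy illustration of what "differentially expressed" means in a comparison like this, the sketch below tests a single hypothetical protein between two groups of mice. The study's proteomic analysis is far more involved; the group labels, sample sizes, and values here are invented for illustration.

import numpy as np
from scipy import stats

# Illustrative sketch only: comparing one protein's expression between two groups.
# All values are hypothetical, not data from the study.
rng = np.random.default_rng(1)
group_control_stim = rng.normal(1.0, 0.1, 8)  # hypothetical: cuprizone + control stimulation
group_gamma_stim   = rng.normal(1.3, 0.1, 8)  # hypothetical: cuprizone + 40Hz stimulation

t_stat, p_value = stats.ttest_ind(group_control_stim, group_gamma_stim)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")  # a small p-value suggests differential expression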

A highlight of one set of effects was the increase in MAP2 in gamma-treated cuprizone-fed mice. A highlight of another set was that cuprizone mice who received control stimulation showed a substantial deficit in expression of proteins associated with synapses. The gamma-treated cuprizone-fed mice did not show any significant loss, mirroring results in a 2019 Alzheimer’s 40Hz study that showed synaptic preservation. This result is important, the researchers wrote, because neural circuit activity, which depends on maintaining synapses, is associated with preserving myelin. They confirmed the protein expression results by looking directly at brain tissues.

Another set of protein expression results hinted at another important mechanism: ferroptosis. This phenomenon, in which errant metabolism of iron leads to a lethal buildup of reactive oxygen species in cells, is a known problem for oligodendrocytes in the cuprizone mouse model. Among the signs: cuprizone-fed mice that received control stimulation showed increased expression of the protein HMGB1, a marker of ferroptosis-associated damage that triggers an inflammatory response. Gamma stimulation, however, reduced levels of HMGB1.

Looking more deeply at the cellular and molecular response to cuprizone demyelination and the effects of gamma stimulation, the team assessed gene expression using single-cell RNA sequencing technology. They found that astrocytes and microglia became very inflammatory in cuprizone-control mice but gamma stimulation calmed that response. Fewer cells became inflammatory and direct observations of tissue showed that microglia became more proficient at clearing away myelin debris, a key step in effecting repairs.

The team also learned more about how oligodendrocytes in cuprizone-fed mice exposed to 40Hz sensory stimulation managed to survive better. Expression of protective proteins such as HSP70 increased, as did expression of GPX4, a master regulator of processes that constrain ferroptosis.

In addition to Amorim and Tsai, the paper’s other authors are Lorenzo Bozzelli, TaeHyun Kim, Liwang Liu, Oliver Gibson, Cheng-Yi Yang, Mitch Murdock, Fabiola Galiana-Meléndez, Brooke Schatz, Alexis Davison, Md Rezaul Islam, Dong Shin Park, Ravikiran M. Raju, Fatema Abdurrob, Alissa J. Nelson, Jian Min Ren, Vicky Yang and Matthew P. Stokes.

Fundacion Bancaria la Caixa, The JPB Foundation, The Picower Institute for Learning and Memory, the Carol and Gene Ludwig Family Foundation, Lester A. Gimpelson, Eduardo Eurnekian, The Dolby Family, Kathy and Miguel Octavio, the Marc Haas Foundation, Ben Lenail and Laurie Yoler, and the U.S. National Institutes of Health provided funding for the study.


MIT chemists synthesize plant-derived molecules that hold potential as pharmaceuticals

Large multi-ring-containing molecules known as oligocyclotryptamines have never been produced in the lab until now.


MIT chemists have developed a new way to synthesize complex molecules that were originally isolated from plants and could hold potential as antibiotics, analgesics, or cancer drugs.

These compounds, known as oligocyclotryptamines, consist of multiple tricyclic substructures called cyclotryptamines, fused together by carbon–carbon bonds. Only small quantities of these compounds are naturally available, and synthesizing them in the lab has proven difficult. The MIT team came up with a way to add tryptamine-derived components to a molecule one at a time, in a way that allows the researchers to precisely assemble the rings and control the 3D orientation of each component as well as the final product.

“For many of these compounds, there hasn’t been enough material to do a thorough review of their potential. I’m hopeful that having access to these compounds in a reliable way will enable us to do further studies,” says Mohammad Movassaghi, an MIT professor of chemistry and the senior author of the new study.

In addition to allowing scientists to synthesize oligocyclotryptamines found in plants, this approach could also be used to generate new variants that may have even better medicinal properties, or molecular probes that can help to reveal their mechanism of action.

Tony Scott PhD ’23 is the lead author of the paper, which appears today in the Journal of the American Chemical Society.

Fusing rings

Oligocyclotryptamines belong to a class of molecules called alkaloids — nitrogen-containing organic compounds produced mainly by plants. At least eight different oligocyclotryptamines have been isolated from a genus of flowering plants known as Psychotria, most of which are found in tropical forests.

Since the 1950s, scientists have studied the structure and synthesis of dimeric cyclotryptamines, which have two cyclotryptamine subunits. Over the past 20 years, significant progress has been made characterizing and synthesizing dimers and other smaller members of the family. However, no one has been able to synthesize the largest oligocyclotryptamines, which have six or seven rings fused together.

One of the hurdles in synthesizing these molecules is a step that requires forming a bond between a carbon atom of one tryptamine-derived subunit and a carbon atom of the next subunit. The oligocyclotryptamines have two types of these linkages, both containing at least one carbon atom that has bonds with four other carbons. That extra bulk makes those carbon atoms less accessible to undergo reactions, and controlling the stereochemistry — the orientation of the atoms around the carbon — at all these junctures poses a significant challenge.

For many years, Movassaghi’s lab has been developing ways to form carbon-carbon bonds between carbon atoms that are already crowded with other atoms. In 2011, they devised a method that involves transforming the two carbon atoms into carbon radicals (carbon atoms with one unpaired electron) and directing their union. To create these radicals, and guide the paired union to be completely selective, the researchers first attach each of the targeted carbon atoms to a nitrogen atom; these two nitrogen atoms bind to each other.

When the researchers shine certain wavelengths of light on the substrate containing the two fragments linked via the two nitrogen atoms, it causes the two atoms of nitrogen to break away as nitrogen gas, leaving behind two very reactive carbon radicals in close proximity that join together almost immediately. This type of bond formation has also allowed the researchers to control the molecules’ stereochemistry.

Movassaghi demonstrated this approach, which he calls diazene-directed assembly, by synthesizing other types of alkaloids, including the communesins. These compounds are found in fungi and consist of two ring-containing molecules, or monomers, joined together. Later, Movassaghi began using this approach to fuse larger numbers of monomers, and he and Scott eventually turned their attention to the largest oligocyclotryptamine alkaloids.

The synthesis that they developed begins with one molecule of a cyclotryptamine derivative, to which additional cyclotryptamine fragments are added one at a time, each with the correct relative stereochemistry and position selectivity. Each of these additions is made possible by the diazene-directed process that Movassaghi’s lab previously developed.

“The reason why we’re excited about this is that this single solution allowed us to go after multiple targets,” Movassaghi says. “That same route provides us a solution to multiple members of the natural product family because by extending the iteration one more cycle, your solution is now applied to a new natural product.”

“A tour de force”

Using this approach, the researchers were able to create molecules with six or seven cyclotryptamine rings, which has never been done before.

“Researchers worldwide have been trying to find a way to make these molecules, and Movassaghi and Scott are the first to pull it off,” says Seth Herzon, a professor of chemistry at Yale University, who was not involved in the research. Herzon described the work as “a tour de force in organic synthesis.”

Now that the researchers have synthesized these naturally occurring oligocyclotryptamines, they should be able to generate enough of the compounds that their potential therapeutic activity can be more thoroughly investigated.

They should also be able to create novel compounds by switching in slightly different cyclotryptamine subunits, Movassaghi says.

“We will continue to use this very precise way of adding these cyclotryptamine units to assemble them together into complex systems that have not been addressed yet, including derivatives that could potentially have improved properties,” he says.

The research was funded by the U.S. National Institute of General Medical Sciences.


Alex Shalek named director of the Institute for Medical Engineering and Science

Professor who uses a cross-disciplinary approach to understand human diseases on a molecular and cellular level succeeds Elazer Edelman.


Alex K. Shalek, the J. W. Kieckhefer Professor in the MIT Institute for Medical Engineering and Sciences (IMES) and Department of Chemistry, has been named the new director of IMES, effective Aug. 1.

“Professor Shalek’s substantial contributions to the scientific community as a researcher and educator have been exemplary. His extensive network across MIT, Harvard, and Mass General Brigham will be a tremendous asset as director of IMES,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “He will undoubtedly be an excellent leader, bringing his innovative approach and collaborative spirit to this new role.”

Shalek is a core member of IMES, a professor of chemistry, and holds several leadership positions, including director of the Health Innovation Hub. He is also an extramural member of MIT’s Koch Institute for Integrative Cancer Research; a member of the Ragon Institute of Mass General, MIT, and Harvard; an institute member of the Broad Institute of MIT and Harvard; an assistant in immunology at Mass General Brigham; and an instructor in health sciences and technology at Harvard Medical School.

The Shalek Lab’s research seeks to uncover how communities of cells work together within human tissues to support health, and how they become dysregulated in disease. By developing and applying innovative experimental and computational technologies, they are shedding light on a wide range of human health conditions.

Shalek and his team use a cross-disciplinary approach that combines genomics, chemical biology, and nanotechnology to develop platforms to profile and control cells and their interactions. Collaborating with researchers across the globe, they apply these tools to study human diseases in great detail. Their goal is to connect what occurs at a cellular level with what medical professionals observe in patients, paving the way for more precise ways to prevent and treat diseases. 

Over the course of his career, Shalek’s groundbreaking research has earned him widespread recognition and numerous awards and honors. These include an NIH New Innovator Award, a Beckman Young Investigator Award, a Searle Scholar Award, a Pew-Stewart Scholar Award, an Alfred P. Sloan Research Fellowship in Chemistry, and an Avant-Garde (DP1 Pioneer) Award. Shalek has also been celebrated for his dedication as a faculty member, educator, and mentor. He was awarded the 2019-20 Harold E. Edgerton Faculty Achievement Award at MIT and the 2020 HMS Young Mentor Award.

Shalek received his bachelor’s degree in chemical physics from Columbia University and his master’s and PhD in chemical physics from Harvard University. Prior to joining MIT’s faculty in 2014, he was a postdoc at the Broad Institute.

Shalek succeeds Elazer Edelman, the Edward J. Poitras Professor in Medical Engineering and Science, who has led IMES since April 2018.

“I am grateful to Professor Edelman for his incredible leadership and service to IMES over the past six years,” says Chandrakasan. “His contributions to IMES have been invaluable, and we are thankful for his dedication and vision during his tenure as director.”


Empowering the next generation of scientists in Africa

The Future African Scientist organization was sparked by a connection between two students from different walks of life during an MIT program in South Africa.


No one is born a world-class scientist. Instead, their skills are built over many years of education, networking, mentorship, and work in laboratories or in the field.

That’s the fundamental insight behind the not-for-profit organization Future African Scientist, which is seeking to unleash the scientific potential of the continent by providing African students and early-career scientists with the support they need to do world-renowned research that addresses problems in their local communities and beyond.

Future African Scientist, or FAS, partners with leading scientists and institutions around the world, including MIT, to offer educational courses, training, networking events, and other programming around scientific research and entrepreneurship. More importantly, graduates of FAS programs join a network of scientists that helps them match with jobs, internships, and further learning opportunities.

“Our programs aim to democratize access to science education and create a new wave of scientists that are going to study African problems and not just publish papers, but also translate that research into beneficial products as well as policies,” says FAS co-founder Martin Lubowa.

At the core of FAS is a belief in the power of connections to further scientific understanding. Perhaps it’s no surprise, then, that FAS began with a connection between two people from very different walks of life during an MIT program.

From roommates to co-founders

In 2020, Daniel Zhang ’22 participated in Biology Professor Bruce Walker’s course HST.434 (Evolution of an Epidemic) as part of a MISTI Global Classroom during MIT’s Independent Activities Period (IAP). The course immerses students in a South African community to teach them about the AIDS epidemic from the perspectives of doctors, researchers, policymakers, and local women living with HIV.

That IAP happened to be the first year the class paired MIT students with students from the African Leadership Academy, which seeks to build leadership skills in African youth. Zhang’s roommate was Martin Lubowa.

“Martin and I bonded instantly despite coming from completely different cultures and backgrounds,” Zhang recalls. “We shared passions for education, mentorship, and sports.”

Despite waking up early each day for class, Zhang and Lubowa talked late into the nights. Many of their conversations centered around the differences in STEM opportunities between students in the U.S. and African countries. They also discussed the importance of STEM in economic development and eventually identified a lack of mentorship programs as a key problem. They decided to found Future African Scientist to close those gaps.

With support and encouragement from Walker, the pair kept in touch after the class and focused their mission on equipping university and high school students in Africa with early-stage mentorship and critical thinking skills that would enable them to conduct independent research projects.

In January 2022, they organized their first virtual bootcamp for students across Africa. The bootcamp featured virtual courses, lectures by leading African scientists, mentorship opportunities, and a capstone project that challenged students to apply their learnings.

“We didn’t want to just give them research skills, but also entrepreneurship skills and interpersonal skills to position them as scientific entrepreneurs,” Lubowa says.

After receiving positive feedback and learning more about the skills students needed, the founders broadened the structure of FAS.

Today, a similar bootcamp on foundational research skills serves as the first stage of FAS’s four-part Africa Science Research Academy. The second stage is a data-driven research project that exposes participants to working in a lab. The third stage teaches skills including entrepreneurship, leadership, financial literacy, and grant management. The final stage, the Africa Science Opportunity Network, is available to FAS graduates for life and is designed to connect participants with internships, job opportunities, and other research projects.

“What makes us different from most of the research training programs in Africa is that we are open to anyone who is curious,” Lubowa says. “Most of the programs on the continent target MDs who are already practicing, or PhDs, which is a bit unfair for people who are curious but don’t have the right platform to channel that curiosity into meaningful experiences.”

To date, more than 100 students and young professionals have gone through FAS programming. The students hail from more than 30 universities and 15 countries. FAS has also partnered with 10 medical student associations that have helped it expand its network to more than 100,000 students across the continent. FAS is also in conversations with organizations like the African Microscope Initiative, which has offered to recruit FAS graduates for more specialized training in bioimaging, as well as African state governments to create upskilling programs that could serve as alternatives to MD and PhD programs.

“We see Africa transitioning from just being a beneficiary of the global scientific community to becoming a contributor,” Lubowa says. “That means we can help the U.S. and other Western countries solve their problems. The issue at the moment is getting people the skills they need and changing their mindset so they understand they can do great things, and that in the long run, they can not just generate knowledge, but also create enterprises that address some of these challenges within Africa and beyond.”

Meeting the needs of the continent

In 2022, a pair of students from the Association of Mbarara University Pharmaceutical Sciences in Uganda learned about the foundations of entrepreneurship through FAS’s programming. They are in the process of commercializing their research into mosquito repellants made from locally sourced materials. That same year, a Cameroonian undergraduate alum of FAS placed third in a national science competition, despite going up against PhDs, with research on early detection of pancreatic cancer.

“One of the aspirational goals of Future African Scientists is to cultivate a sustainable scientific ecosystem where beyond academia, there’s also a science industry in Africa,” Lubowa says.

Further down the line, FAS would like to open its own laboratories to broaden access to equipment, and FAS’s team has already spoken with companies that exchange second-hand medical and laboratory equipment to help improve scientific infrastructure at African institutes.

“Our long-term plans include establishing general-purpose, open laboratories where students across Africa can go and learn how to do practical science,” Lubowa says.

With all of its work, FAS seeks to empower Africans to become a global scientific force for good.

“We have a population of 1.2 billion people in Africa, but we only have 198 scientists per million people. The U.S. has more than 4,000 scientists per million people,” Lubowa says. “Africans also have the highest burden of disease, so there’s really a need for us to rethink how we have been training scientists, and it all goes back to these support systems. I really think we can change the scientific landscape in Africa.”


Scientists pin down the origins of the moon’s tenuous atmosphere

The barely-there lunar atmosphere is likely the product of meteorite impacts over billions of years, a new study finds.


While the moon lacks any breathable air, it does host a barely-there atmosphere. Since the 1980s, astronomers have observed a very thin layer of atoms bouncing over the moon’s surface. This delicate atmosphere — technically known as an “exosphere” — is likely a product of some kind of space weathering. But exactly what those processes might be has been difficult to pin down with any certainty.

Now, scientists at MIT and the University of Chicago say they have identified the main process that formed the moon’s atmosphere and continues to sustain it today. In a study appearing today in Science Advances, the team reports that the lunar atmosphere is primarily a product of “impact vaporization.”

In their study, the researchers analyzed samples of lunar soil collected by astronauts during NASA’s Apollo missions. Their analysis suggests that over the moon’s 4.5-billion-year history, its surface has been continuously bombarded, first by massive meteorites, then more recently by smaller, dust-sized “micrometeoroids.” These constant impacts have kicked up the lunar soil, vaporizing certain atoms on contact and lofting the particles into the air. Some atoms are ejected into space, while others remain suspended over the moon, forming a tenuous atmosphere that is constantly replenished as meteorites continue to pelt the surface.

The researchers found that impact vaporization is the main process by which the moon has generated and sustained its extremely thin atmosphere over billions of years.

“We give a definitive answer that meteorite impact vaporization is the dominant process that creates the lunar atmosphere,” says the study’s lead author, Nicole Nie, an assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “The moon is close to 4.5 billion years old, and through that time the surface has been continuously bombarded by meteorites. We show that eventually, a thin atmosphere reaches a steady state because it’s being continuously replenished by small impacts all over the moon.”

Nie’s co-authors are Nicolas Dauphas, Zhe Zhang, and Timo Hopp at the University of Chicago, and Menelaos Sarantos at NASA Goddard Space Flight Center.

Weathering’s roles

In 2013, NASA sent an orbiter around the moon to do some detailed atmospheric reconnaissance. The Lunar Atmosphere and Dust Environment Explorer (LADEE, pronounced “laddie”) was tasked with remotely gathering information about the moon’s thin atmosphere, surface conditions, and any environmental influences on the lunar dust.

LADEE’s mission was designed to determine the origins of the moon’s atmosphere. Scientists hoped that the probe’s remote measurements of soil and atmospheric composition might correlate with certain space weathering processes that could then explain how the moon’s atmosphere came to be.

Researchers suspect that two space weathering processes play a role in shaping the lunar atmosphere: impact vaporization and “ion sputtering” — a phenomenon involving solar wind, which carries energetic charged particles from the sun through space. When these particles hit the moon’s surface, they can transfer their energy to the atoms in the soil and send those atoms sputtering and flying into the air. 

“Based on LADEE’s data, it seemed both processes are playing a role,” Nie says. “For instance, it showed that during meteorite showers, you see more atoms in the atmosphere, meaning impacts have an effect. But it also showed that when the moon is shielded from the sun, such as during an eclipse, there are also changes in the atmosphere’s atoms, meaning the sun also has an impact. So, the results were not clear or quantitative.”

Answers in the soil

To more precisely pin down the lunar atmosphere’s origins, Nie looked to samples of lunar soil collected by astronauts throughout NASA’s Apollo missions. She and her colleagues at the University of Chicago acquired 10 samples of lunar soil, each weighing about 100 milligrams — a tiny amount that she estimates would fit into a single raindrop.

Nie sought to first isolate two elements from each sample: potassium and rubidium. Both elements are “volatile,” meaning that they are easily vaporized by impacts and ion sputtering. Each element exists in the form of several isotopes. An isotope is a variation of the same element that has the same number of protons but a slightly different number of neutrons. For instance, potassium can exist as one of three isotopes, each with one more neutron, and therefore slightly heavier, than the last. Similarly, there are two isotopes of rubidium.

The team reasoned that if the moon’s atmosphere consists of atoms that have been vaporized and suspended in the air, lighter isotopes of those atoms should be more easily lofted, while heavier isotopes would be more likely to settle back into the soil. Furthermore, scientists predict that impact vaporization and ion sputtering should result in very different isotopic proportions in the soil. The specific ratio of light to heavy isotopes that remain in the soil, for both potassium and rubidium, should then reveal the main process contributing to the lunar atmosphere’s origins.

With all that in mind, Nie analyzed the Apollo samples by first crushing the soils into a fine powder, then dissolving the powders in acids to purify and isolate solutions containing potassium and rubidium. She then passed these solutions through a mass spectrometer to measure the various isotopes of both potassium and rubidium in each sample.

In the end, the team found that the soils contained mostly heavy isotopes of both potassium and rubidium. The researchers were able to quantify the ratio of heavy to light isotopes of both potassium and rubidium, and by comparing both elements, they found that impact vaporization was most likely the dominant process by which atoms are vaporized and lofted to form the moon’s atmosphere.

“With impact vaporization, most of the atoms would stay in the lunar atmosphere, whereas with ion sputtering, a lot of atoms would be ejected into space,” Nie says. “From our study, we now can quantify the role of both processes, to say that the relative contribution of impact vaporization versus ion sputtering is about 70:30 or larger.” In other words, 70 percent or more of the moon’s atmosphere is a product of meteorite impacts, while the remaining 30 percent or less is a consequence of the solar wind.
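One way to picture the attribution step is as a two-end-member mixing problem: if each process imprints a characteristic isotopic fractionation on the soil, the measured value falls somewhere between the two end members, and its position gives the relative contribution. The sketch below uses entirely hypothetical end-member values to show the idea; it is not the study’s actual model, which treats the physics of each process in far more detail.

```python
def impact_fraction(delta_soil: float,
                    delta_impact: float,
                    delta_sputtering: float) -> float:
    """Solve a simple two-end-member mixing balance,
        delta_soil = f * delta_impact + (1 - f) * delta_sputtering,
    for f, the share attributable to impact vaporization. Purely illustrative."""
    return (delta_soil - delta_sputtering) / (delta_impact - delta_sputtering)

# Hypothetical per-mil values chosen only to illustrate the arithmetic:
delta_soil = 0.37        # measured heavy-isotope enrichment in the soil
delta_impact = 0.50      # enrichment expected if impact vaporization acted alone
delta_sputtering = 0.07  # enrichment expected if ion sputtering acted alone

f = impact_fraction(delta_soil, delta_impact, delta_sputtering)
print(f"impact vaporization : ion sputtering ≈ {f:.0%} : {1 - f:.0%}")
# With these made-up numbers, f ≈ 0.70, i.e., roughly a 70:30 split.
```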

“The discovery of such a subtle effect is remarkable, thanks to the innovative idea of combining potassium and rubidium isotope measurements along with careful, quantitative modeling,” says Justin Hu, a postdoc who studies lunar soils at Cambridge University, who was not involved in the study. “This discovery goes beyond understanding the moon’s history, as such processes could occur and might be more significant on other moons and asteroids, which are the focus of many planned return missions.”

“Without these Apollo samples, we would not be able to get precise data and measure quantitatively to understand things in more detail,” Nie says. “It’s important for us to bring samples back from the moon and other planetary bodies, so we can draw clearer pictures of the solar system’s formation and evolution.”

This work was supported, in part, by NASA and the National Science Foundation.


Scientists find a human “fingerprint” in the upper troposphere’s increasing ozone

Knowing where to look for this signal will help researchers identify specific sources of the potent greenhouse gas.


Ozone can be an agent of good or harm, depending on where you find it in the atmosphere. Way up in the stratosphere, the colorless gas shields the Earth from the sun’s harsh ultraviolet rays. But closer to the ground, ozone is a harmful air pollutant that can trigger chronic health problems including chest pain, difficulty breathing, and impaired lung function.

And somewhere in between, in the upper troposphere — the layer of the atmosphere just below the stratosphere, where most aircraft cruise — ozone contributes to warming the planet as a potent greenhouse gas.

There are signs that ozone is continuing to rise in the upper troposphere despite efforts to reduce its sources at the surface in many nations. Now, MIT scientists confirm that much of ozone’s increase in the upper troposphere is likely due to humans.

In a paper appearing today in the journal Environmental Science and Technology, the team reports that they detected a clear signal of human influence on upper tropospheric ozone trends in a 17-year satellite record starting in 2005.

“We confirm that there’s a clear and increasing trend in upper tropospheric ozone in the northern midlatitudes due to human beings rather than climate noise,” says study lead author Xinyuan Yu, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

“Now we can do more detective work and try to understand what specific human activities are leading to this ozone trend,” adds co-author Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in Earth, Atmospheric and Planetary Sciences.

The study’s MIT authors include Sebastian Eastham and Qindan Zhu, along with Benjamin Santer at the University of California at Los Angeles, Gustavo Correa of Columbia University, Jean-François Lamarque at the National Center for Atmospheric Research, and Jerald Ziemke at NASA Goddard Space Flight Center.

Ozone’s tangled web

Understanding ozone’s causes and influences is a challenging exercise. Ozone is not emitted directly, but instead is a product of “precursors” — starting ingredients, such as nitrogen oxides and volatile organic compounds (VOCs), that react in the presence of sunlight to form ozone. These precursors are generated from vehicle exhaust, power plants, chemical solvents, industrial processes, aircraft emissions, and other human-induced activities.

Whether and how long ozone lingers in the atmosphere depends on a tangle of variables, including the type and extent of human activities in a given area, as well as natural climate variability. For instance, a strong El Niño year could nudge the atmosphere’s circulation in a way that affects ozone’s concentrations, regardless of how much ozone humans are contributing to the atmosphere that year.

Disentangling the human- versus climate-driven causes of ozone trends, particularly in the upper troposphere, is especially tricky. Complicating matters is the fact that in the lower troposphere — the lowest layer of the atmosphere, closest to ground level — ozone has stopped rising, and has even fallen in some regions at northern midlatitudes in the last few decades. This decrease in lower tropospheric ozone is mainly a result of efforts in North America and Europe to reduce industrial sources of air pollution.

“Near the surface, ozone has been observed to decrease in some regions, and its variations are more closely linked to human emissions,” Yu notes. “In the upper troposphere, the ozone trends are less well-monitored but seem to decouple from those near the surface, and ozone is more easily influenced by climate variability. So, we don’t know whether and how much of that increase in observed ozone in the upper troposphere is attributed to humans.”

A human signal amid climate noise

Yu and Fiore wondered whether a human “fingerprint” in ozone levels, caused directly by human activities, could be strong enough to be detectable in satellite observations in the upper troposphere. To see such a signal, the researchers would first have to know what to look for.

For this, they looked to simulations of the Earth’s climate and atmospheric chemistry. Following approaches developed in climate science, they reasoned that if they could simulate a number of possible climate variations in recent decades, all with identical human-derived sources of ozone precursor emissions, but each starting with a slightly different climate condition, then any differences among these scenarios should be due to climate noise. By inference, any common signal that emerged when averaging over the simulated scenarios should be due to human-driven causes. Such a signal, then, would be a “fingerprint” revealing human-caused ozone, which the team could look for in actual satellite observations.
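The logic of this ensemble approach can be sketched in a few lines of code. In the toy example below, synthetic “ensemble members” share the same forced trend but differ in random internal variability; averaging across members recovers the forced fingerprint, and the spread among member trends sets the noise level against which an observed trend can be judged. All the numbers are invented for illustration and do not come from the team’s chemistry-climate model or the Aura satellite record.

```python
import numpy as np

rng = np.random.default_rng(0)

n_members, n_years = 10, 17          # e.g., a 10-member ensemble, 17-year record
years = np.arange(n_years)
forced_trend = 0.05                  # hypothetical human-driven ozone change per year
noise_amplitude = 0.3                # hypothetical internal climate variability

# Each member = the same human-driven signal + a different realization of climate noise.
members = forced_trend * years + noise_amplitude * rng.standard_normal((n_members, n_years))

# Averaging across members damps the noise and isolates the forced "fingerprint".
fingerprint = members.mean(axis=0)

# Fit linear trends: the ensemble-mean trend versus the spread of individual member trends.
member_trends = np.array([np.polyfit(years, m, 1)[0] for m in members])
mean_trend = np.polyfit(years, fingerprint, 1)[0]
noise_spread = member_trends.std(ddof=1)

# A hypothetical "observed" trend counts as detectable if it clearly exceeds the noise.
observed_trend = 0.06
print(f"ensemble-mean trend: {mean_trend:.3f}, noise spread: {noise_spread:.3f}")
print("detectable above climate noise:", abs(observed_trend) > 2 * noise_spread)
```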

With this strategy in mind, the team ran simulations using a state-of-the-art chemistry climate model. They ran multiple climate scenarios, each starting from the year 1950 and running through 2014.

From their simulations, the team saw a clear and common signal across scenarios, which they identified as a human fingerprint. They then looked to tropospheric ozone products derived from multiple instruments aboard NASA’s Aura satellite.

“Quite honestly, I thought the satellite data were just going to be too noisy,” Fiore admits. “I didn’t expect that the pattern would be robust enough.”

But the satellite observations they used gave them a good enough shot. The team looked through the upper tropospheric ozone data derived from the satellite products, from the years 2005 to 2021, and found that, indeed, they could see the signal of human-caused ozone that their simulations predicted. The signal is especially pronounced over Asia, where industrial activity has risen significantly in recent decades and where abundant sunlight and frequent weather events loft pollution, including ozone and its precursors, to the upper troposphere.

Yu and Fiore are now looking to identify the specific human activities that are leading to ozone’s increase in the upper troposphere.

“Where is this increasing trend coming from? Is it the near-surface emissions from combusting fossil fuels in vehicle engines and power plants? Is it the aircraft that are flying in the upper troposphere? Is it the influence of wildland fires? Or some combination of all of the above?” Fiore says. “Being able to separate human-caused impacts from natural climate variations can help to inform strategies to address climate change and air pollution.”

This research was funded, in part, by NASA.