MIT News - School of Science
Scientists get a first look at the innermost region of a white dwarf system

X-ray observations reveal surprising features of the dying star’s most energetic environment.


Some 200 light years from Earth, the core of a dead star is circling a larger star in a macabre cosmic dance. The dead star is a type of white dwarf that exerts a powerful magnetic field as it pulls material from the larger star into a swirling accretion disk. The spiraling pair is what’s known as an “intermediate polar” — a type of star system that gives off a complex pattern of intense radiation, including X-rays, as gas from the larger star falls onto the other one.

Now, MIT astronomers have used an X-ray telescope in space to identify key features in the system’s innermost region — an extremely energetic environment that has been inaccessible to most telescopes until now. In an open-access study published in the Astrophysical Journal, the team reports using NASA’s Imaging X-ray Polarimetry Explorer (IXPE) to observe the intermediate polar, known as EX Hydrae.

The team found a surprisingly high degree of X-ray polarization, which describes the direction of an X-ray wave’s electric field, as well as an unexpected direction of polarization in the X-rays coming from EX Hydrae. From these measurements, the researchers traced the X-rays back to their source in the system’s innermost region, close to the surface of the white dwarf.

What’s more, they determined that the system’s X-rays were emitted from a column of white-hot material that the white dwarf was pulling in from its companion star. They estimate that this column is about 2,000 miles high — about half the radius of the white dwarf itself and much taller than what physicists had predicted for such a system. They also determined that the X-rays are reflected off the white dwarf’s surface before scattering into space — an effect that physicists suspected but hadn’t confirmed until now.

The team’s results demonstrate that X-ray polarimetry can be an effective way to study extreme stellar environments such as the most energetic regions of an accreting white dwarf.

“We showed that X-ray polarimetry can be used to make detailed measurements of the white dwarf's accretion geometry,” says Sean Gunderson, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research, who is the study’s lead author. “It opens the window into the possibility of making similar measurements of other types of accreting white dwarfs that also have never had predicted X-ray polarization signals.”

 

Gunderson’s MIT Kavli co-authors include graduate student Swati Ravi and research scientists Herman Marshall and David Huenemoerder, along with Dustin Swarm of the University of Iowa, Richard Ignace of East Tennessee State University, Yael Nazé of the University of Liège, and Pragati Pradhan of Embry Riddle Aeronautical University.

A high-energy fountain

All forms of light, including X-rays, are influenced by electric and magnetic fields. Light travels in waves that wiggle, or oscillate, at right angles to the direction in which the light is traveling. External electric and magnetic fields can pull these oscillations in random directions. But when light interacts and bounces off a surface, it can become polarized, meaning that its vibrations tighten up in one direction. Polarized light, then, can be a way for scientists to trace the source of the light and discern some details about the source’s geometry.

The IXPE space observatory is NASA’s first mission designed to study polarized X-rays that are emitted by extreme astrophysical objects. The spacecraft, which launched in 2021, orbits the Earth and records these polarized X-rays. Since launch, it has primarily focused on supernovae, black holes, and neutron stars.

The new MIT study is the first to use IXPE to measure polarized X-rays from an intermediate polar — a smaller system than black holes and supernovae, but one that is nevertheless known to be a strong emitter of X-rays.

“We started talking about how much polarization would be useful to get an idea of what’s happening in these types of systems, which most telescopes see as just a dot in their field of view,” Marshall says.

An intermediate polar gets its name from the strength of the central white dwarf’s magnetic field. When this field is strong, the material from the companion star is directly pulled toward the white dwarf’s magnetic poles. When the field is very weak, the stellar material instead swirls around the dwarf in an accretion disk that eventually deposits matter directly onto the dwarf’s surface.

In the case of an intermediate polar, physicists predict that material should fall in a complex sort of in-between pattern, forming an accretion disk that also gets pulled toward the white dwarf’s poles. The magnetic field should lift the disk of incoming material far upward, like a high-energy fountain, before the stellar debris falls toward the white dwarf’s magnetic poles, at speeds of millions of miles per hour, in what astronomers refer to as an “accretion curtain.” Physicists suspect that this falling material should run up against previously lifted material that is still falling toward the poles, creating a sort of traffic jam of gas. This pile-up of matter forms a column of colliding gas that is tens of millions of degrees Fahrenheit and should emit high-energy X-rays.

An innermost picture

By measuring any polarized X-rays emitted by EX Hydrae, the team aimed to test the picture of intermediate polars that physicists had hypothesized. In January 2025, IXPE took a total of about 600,000 seconds, or about seven days’ worth, of X-ray measurements from the system.

“With every X-ray that comes in from the source, you can measure the polarization direction,” Marshall explains. “You collect a lot of these, and they’re all at different angles and directions which you can average to get a preferred degree and direction of the polarization.”
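The averaging Marshall describes can be sketched numerically. In event-based X-ray polarimetry, each detected photon contributes Stokes-like terms built from its measured angle, and averaging them yields a preferred polarization degree and direction. The snippet below is a minimal illustration using simulated angles, not IXPE data or the team’s actual pipeline, and it assumes an idealized instrument modulation factor of 1.

```python
import numpy as np

def polarization_from_events(psi, mod_factor=1.0):
    """Estimate polarization degree and angle (degrees) from per-photon
    angles psi (radians) via event-averaged Stokes parameters;
    mod_factor is the instrument's modulation factor."""
    q = 2.0 * np.cos(2.0 * psi)  # per-event Stokes q
    u = 2.0 * np.sin(2.0 * psi)  # per-event Stokes u
    Q, U = q.mean(), u.mean()
    degree = np.hypot(Q, U) / mod_factor
    angle = 0.5 * np.arctan2(U, Q)  # preferred direction
    return degree, np.degrees(angle)

# Simulated events: mostly uniform angles plus a small aligned
# component, mimicking a signal polarized at roughly 8 percent.
rng = np.random.default_rng(0)
n = 600_000
psi = rng.uniform(-np.pi / 2, np.pi / 2, n)
aligned = rng.random(n) < 0.04  # ~4 percent tightly aligned events
psi[aligned] = rng.normal(0.0, 0.05, aligned.sum())
deg, ang = polarization_from_events(psi)  # deg ≈ 0.08, ang ≈ 0
```

Averaged over many events, the randomly oriented noise cancels while the aligned component survives, which is why accumulating photons over a long exposure lets a weak polarized signal stand out.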

Their measurements revealed a polarization degree of 8 percent, much higher than some theoretical models had predicted. From there, the researchers were able to confirm that the X-rays were indeed coming from the system’s column, and that this column is about 2,000 miles high.

“If you were able to stand somewhat close to the white dwarf’s pole, you would see a column of gas stretching 2,000 miles into the sky, and then fanning outward,” Gunderson says.

The team also measured the direction of EX Hydrae’s X-ray polarization, which they determined to be perpendicular to the white dwarf’s column of incoming gas. This was a sign that the X-rays emitted by the column were then bouncing off the white dwarf’s surface before traveling into space, and eventually into IXPE’s telescopes.

“The thing that’s helpful about X-ray polarization is that it’s giving you a picture of the innermost, most energetic portion of this entire system,” Ravi says. “When we look through other telescopes, we don’t see any of this detail.”

The team plans to apply X-ray polarization to study other accreting white dwarf systems, which could help scientists get a grasp on much larger cosmic phenomena.

“There comes a point where so much material is falling onto the white dwarf from a companion star that the white dwarf can’t hold it anymore, the whole thing collapses and produces a type of supernova that’s observable throughout the universe, which can be used to figure out the size of the universe,” Marshall offers. “So understanding these white dwarf systems helps scientists understand the sources of those supernovae, and tells you about the ecology of the galaxy.”

This research was supported, in part, by NASA.


The cost of thinking

MIT neuroscientists find a surprising parallel in the ways humans and new AI models solve complex problems.


Large language models (LLMs) like ChatGPT can write an essay or plan a menu almost instantly. But until recently, it was also easy to stump them. The models, which rely on language patterns to respond to users’ queries, often failed at math problems and were not good at complex reasoning. Suddenly, however, they’ve gotten a lot better at these things.

A new generation of LLMs known as reasoning models is being trained to solve complex problems. Like humans, they need some time to think through problems like these — and remarkably, scientists at MIT’s McGovern Institute for Brain Research have found that the kinds of problems that require the most processing from reasoning models are the very same problems that people need to take their time with. In other words, they report today in the journal PNAS, the “cost of thinking” for a reasoning model is similar to the cost of thinking for a human.

The researchers, who were led by Evelina Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute, conclude that in at least one important way, reasoning models have a human-like approach to thinking. That, they note, is not by design. “People who build these models don’t care if they do it like humans. They just want a system that will robustly perform under all sorts of conditions and produce correct responses,” Fedorenko says. “The fact that there’s some convergence is really quite striking.”

Reasoning models

Like many forms of artificial intelligence, the new reasoning models are artificial neural networks: computational tools that learn how to process information when they are given data and a problem to solve. Artificial neural networks have been very successful at many of the tasks that the brain’s own neural networks do well — and in some cases, neuroscientists have discovered that those that perform best do share certain aspects of information processing in the brain. Still, some scientists argued that artificial intelligence was not ready to take on more sophisticated aspects of human intelligence.

“Up until recently, I was among the people saying, ‘These models are really good at things like perception and language, but it’s still going to be a long ways off until we have neural network models that can do reasoning,’” Fedorenko says. “Then these large reasoning models emerged and they seem to do much better at a lot of these thinking tasks, like solving math problems and writing pieces of computer code.”

Andrea Gregor de Varda, a K. Lisa Yang ICoN Center Fellow and a postdoc in Fedorenko’s lab, explains that reasoning models work out problems step by step. “At some point, people realized that models needed to have more space to perform the actual computations that are needed to solve complex problems,” he says. “The performance started becoming way, way stronger if you let the models break down the problems into parts.”

To encourage models to work through complex problems in steps that lead to correct solutions, engineers can use reinforcement learning. During their training, the models are rewarded for correct answers and penalized for wrong ones. “The models explore the problem space themselves,” de Varda says. “The actions that lead to positive rewards are reinforced, so that they produce correct solutions more often.”

Models trained in this way are much more likely than their predecessors to arrive at the same answers a human would when they are given a reasoning task. Their stepwise problem-solving does mean reasoning models can take a bit longer to find an answer than the LLMs that came before — but since they’re getting right answers where the previous models would have failed, their responses are worth the wait.

The models’ need to take some time to work through complex problems already hints at a parallel to human thinking: if you demand that a person solve a hard problem instantaneously, they’d probably fail, too. De Varda wanted to examine this relationship more systematically. So he gave reasoning models and human volunteers the same set of problems, and tracked not just whether they got the answers right, but also how much time or effort it took them to get there.

Time versus tokens

This meant measuring how long it took people to respond to each question, down to the millisecond. For the models, de Varda used a different metric. It didn’t make sense to measure processing time, since that depends more on computer hardware than on the effort the model puts into solving a problem. So instead, he tracked tokens, the units of a model’s internal chain of thought. “They produce tokens that are not meant for the user to see and work on, but just to have some track of the internal computation that they’re doing,” de Varda explains. “It’s as if they were talking to themselves.”

Both humans and reasoning models were asked to solve seven different types of problems, like numeric arithmetic and intuitive reasoning. For each problem class, they were given many problems. The harder a given problem was, the longer it took people to solve it — and the longer it took people to solve a problem, the more tokens a reasoning model generated as it came to its own solution.

Likewise, the classes of problems that humans took longest to solve were the same classes of problems that required the most tokens for the models: arithmetic problems were the least demanding, whereas a group of problems called the “ARC challenge,” where pairs of colored grids represent a transformation that must be inferred and then applied to a new object, were the most costly for both people and models.
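The core comparison can be sketched as a per-problem correlation between human solve time and model token count. The numbers below are invented for illustration; only the qualitative pattern reported here (arithmetic cheapest, ARC-style problems costliest for both humans and models) is preserved.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-problem-class averages: human response time in
# seconds and reasoning-model chain-of-thought length in tokens,
# ordered from easy arithmetic to ARC-style grid puzzles.
human_seconds = [2.1, 3.4, 8.0, 15.2, 40.5, 95.0]
model_tokens = [150, 210, 480, 900, 2600, 7100]
r = pearson_r(human_seconds, model_tokens)  # strongly positive
```

A correlation near 1 across problem classes is the signature described in the study: the items that cost people the most time cost the models the most tokens.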

De Varda and Fedorenko say the striking match in the costs of thinking demonstrates one way in which reasoning models are thinking like humans. That doesn’t mean the models are recreating human intelligence, though. The researchers still want to know whether the models use similar representations of information to the human brain, and how those representations are transformed into solutions to problems. They’re also curious whether the models will be able to handle problems that require world knowledge that is not spelled out in the texts that are used for model training.

The researchers point out that even though reasoning models generate internal monologues as they solve problems, they are not necessarily using language to think. “If you look at the output that these models produce while reasoning, it often contains errors or some nonsensical bits, even if the model ultimately arrives at a correct answer. So the actual internal computations likely take place in an abstract, non-linguistic representation space, similar to how humans don’t use language to think,” de Varda says.


Symposium examines the neural circuits that keep us alive and well

Seven speakers from around the country convened at MIT to describe some of the latest research on the neural mechanisms that we need to survive.


Taking an audience of hundreds on a tour around the body, seven speakers at The Picower Institute for Learning and Memory’s symposium “Circuits of Survival and Homeostasis” Oct. 21 shared their advanced and novel research about some of the nervous system’s most evolutionarily ancient functions.

Introducing the symposium that she arranged with a picture of a man at a campfire on a frigid day, Sara Prescott, assistant professor in the Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences, pointed out that the brain and the body cooperate constantly just to keep us going, and that when the systems they maintain fail, the consequence is disease.

“[This man] is tightly regulating his blood pressure, glucose levels, his energy expenditure, inflammation and breathing rate, and he’s doing this in the face of a fluctuating external environment,” Prescott said. “Behind each of these processes there are networks of neurons that are working quietly in the background to maintain internal stability. And this is, of course, the brain’s oldest job.”

Indeed, although the discoveries they shared about the underlying neuroscience were new, the speakers each described experiences that are as timeless as they are familiar: the beating of the heart, the transition from hunger to satiety, and the healing of cuts on our skin.

Feeling warm and full

Li Ye, a scientist at Scripps Research, picked right up on the example of coping with the cold. Mammals need to maintain a consistent internal body temperature, and so they will increase metabolism in the cold and then, as energy supplies dwindle, seek out more food. His lab’s 2023 study identified the circuit, centered in the xiphoid nucleus of the brain’s thalamus, that regulates this behavior by sensing prolonged cold exposure and energy consumption. Ye described other feeding mechanisms his lab is studying as well, including searching out the circuitry that regulates how long an animal will feed at a time. For instance, if you’re worried about predators finding you, it’s a bad idea to linger for a leisurely lunch.

Physiologist Zachary Knight of the University of California at San Francisco also studies feeding and drinking behaviors. In particular, his lab asks how the brain knows when it’s time to stop. The conventional wisdom is that all that’s needed is a feeling of fullness coming from the gut, but his research shows there is more to the story. A 2023 study from his lab found a population of neurons in the caudal nucleus of the solitary tract in the brain stem that receive signals about ingestion and taste from the mouth, and that send that “stop eating” signal. They also found a separate neural population in the brain stem that indeed receives fullness signals from the gut, and teaches the brain over time how much food leads to satisfaction. Both neuron types work together to regulate the pace of eating. His lab has continued to study how brain stem circuits regulate feeding using these multiple inputs.

Energy balance depends not only on how many calories come in, but also on how much energy is spent. When food is truly scarce, many animals will engage in a state of radically lowered metabolism called torpor (like hibernation), where body temperature plummets. The brain circuits that exert control over body temperature are another area of active research. In his talk, Harvard University neurologist Clifford Saper described years of research in which his lab found neurons in the median preoptic nucleus that dictate this metabolic state. Recently, his lab demonstrated that the same neurons that regulate torpor also regulate fever during sickness. When the neurons are active, body temperature drops. When they are inhibited, fever ensues. Thus, the same neurons act as a two-way switch for body temperature in response to different threatening conditions.

Sickness, injury, and stress

As the idea of fever suggests, the body also has evolved circuits (that scientists are only now dissecting) to deal with sickness and injury.

Washington University neuroscientist Qin Liu described her research into the circuits governing coughing and sneezing, which, on one hand, can clear the upper airways of pathogens and obstructions but, on the other hand, can spread those pathogens to others in the community. She described her lab’s 2024 study in which her team pinpointed a population of neurons in the nasal passages that mediate sneezing and a different population of sensory neurons in the trachea that produce coughing. Identifying the specific cells and their unique characteristics makes them potentially viable drug targets.

While Liu tackled sickness, Harvard stem cell biologist Ya-Chieh Hsu discussed how neurons can reshape the body’s tissues during stress and injury, specifically the hair and skin. While it is common lore that stress can make your hair gray and fall out, Hsu’s lab has uncovered the physiological mechanisms that make it so. In 2020 her team showed that bursts of noradrenaline from the hyperactivation of nerves in the sympathetic nervous system kill the melanocyte stem cells that give hair its color. She described newer research indicating a similar mechanism may also make hair fall out by killing off cells at the base of hair follicles, releasing cellular debris and triggering autoimmunity. Her lab has also looked at how the nervous system influences skin healing after injury. For instance, while our skin may appear to heal after a cut because it closes up, many skin cell types actually don’t rebound (unless you’re still an embryo). By looking at the difference between embryos and post-birth mice, Hsu’s lab has traced the neural mechanisms that prevent fuller healing, identifying a role for cells called fibroblasts and the nervous system.

Continuing on the theme of stress, Caltech biologist Yuki Oka discussed a broad-scale project in his lab to develop a molecular and cellular atlas of the sympathetic nervous system, which innervates much of the body and famously produces its “fight or flight” responses. In work partly published last year, their journey touched on cells and circuits involved in functions ranging from salivation to secreting bile. Oka and co-authors made the case for the need to study the system more in a review paper earlier this year.

A new model to study human biology

In their search for the best ways to understand the circuits that govern survival and homeostasis, researchers often use rodents because they are genetically tractable, easy to house, and reproduce quickly, but Stanford University biochemist Mark Krasnow has worked to develop a new model with many of those same traits but a closer genetic relationship to humans: the mouse lemur. In his talk, he described that work (which includes extensive field research in Madagascar) and focused on insights the mouse lemurs have helped him make into heart arrhythmias. After studying the genes and health of hundreds of mouse lemurs, his lab identified a family with “sick sinus syndrome,” an arrhythmia also seen in humans. In a preprint study, his lab describes the specific molecular pathways at fault in disrupting the heart’s natural pacemaking.

By sharing some of the latest research into how the brain and body work to stay healthy, the symposium’s speakers highlighted the most current thinking about the nervous system’s most primal purposes.


Quantum modeling for breakthroughs in materials science and sustainable energy

As a School of Science Dean’s Postdoctoral Fellow, quantum chemist Ernest Opoku is developing computational methods to study how electrons behave.


Ernest Opoku knew he wanted to become a scientist when he was a little boy. But his school in Dadease, a small town in Ghana, offered no elective science courses — so Opoku created one for himself.

Even though the school had neither a dedicated science classroom nor a lab, Opoku convinced his principal to bring in someone to teach him and five friends he had persuaded to join him. With just a chalkboard and some imagination, they learned about chemical interactions through the formulas and diagrams they drew together.

“I grew up in a town where it was difficult to find a scientist,” he says.

Today, Opoku has become one himself, recently earning a PhD in quantum chemistry from Auburn University. This year, he joins MIT as part of the School of Science Dean’s Postdoctoral Fellowship program. Working with the Van Voorhis Group in the Department of Chemistry, Opoku aims to advance computational methods for studying how electrons behave — fundamental research that underlies applications ranging from materials science to drug discovery.

“As a boy who wanted to satisfy my own curiosities at a young age, in addition to the fact that my parents had minimal formal education,” Opoku says, “I knew that the only way I would be able to accomplish my goal was to work hard.”

In pursuit of knowledge

When Opoku was 8 years old, he began learning English at school. He would come home with homework, but his parents were unable to help him, as neither of them could read or write in English. Frustrated, his mother asked an older student to tutor her son.

Every day, the boys would meet at 6 o’clock. With no electricity at either of their homes, they practiced new vocabulary and pronunciations together by a kerosene lamp.

As he entered junior high school, Opoku’s fascination with nature grew.

“I realized that chemistry was the central science that really offered the insight that I wanted to really understand Creation from the smallest level,” he says.

He studied diligently and was able to get into one of Ghana’s top high schools — but his parents couldn’t afford the tuition. He therefore enrolled in Dadease Agric Senior High School in his hometown. By growing tomatoes and maize, he saved up enough money to support his education.

In 2012, he got into Kwame Nkrumah University of Science and Technology (KNUST), one of the top-ranked universities in Ghana and the West Africa region. There, he was introduced to computational chemistry. Unlike many other branches of science, the field required only a laptop and an internet connection to study chemical reactions.

“Anything that comes to mind, anytime I can grab my computer and I’ll start exploring my curiosity. I don’t have to wait to go to the laboratory in order to interrogate nature,” he says.

Opoku worked from early morning to late night. None of it felt like work, though, thanks to his supervisor, the late quantum chemist Richard Tia, who was an associate professor of chemistry at KNUST.

“Every single day was a fun day,” he recalls of his time working with Tia. “I was being asked to do the things that I myself wanted to know, to satisfy my own curiosity, and by doing that I’ll be given a degree.”

In 2020, Opoku’s curiosity brought him even further, this time overseas to Auburn University in Alabama for his PhD. Under the guidance of his advisor, Professor J. V. Ortiz, Opoku contributed to the development of new computational methods to simulate how electrons bind to or detach from molecules, a process known as electron propagation.

What is new about Opoku’s approach is that it does not rely on any adjustable or empirical parameters. Unlike some earlier computational methods that require tuning to match experimental results, his technique uses advanced mathematical formulations to account for electron interactions directly from first principles. This makes the method more accurate — its results closely match those of lab experiments — while using less computational power.

By streamlining the calculations and eliminating guesswork, Opoku’s work marks a major step toward faster, more trustworthy quantum simulations across a wide range of molecules, including those never studied before — laying the groundwork for breakthroughs in many areas such as materials science and sustainable energy.

For his postdoctoral research at MIT, Opoku aims to advance electron propagator methods to address larger and more complex molecules and materials by integrating quantum computing, machine learning, and bootstrap embedding — a technique that simplifies quantum chemistry calculations by dividing large molecules into smaller, overlapping fragments. He is collaborating with Troy Van Voorhis, the Haslam and Dewey Professor of Chemistry, whose expertise in these areas can help make Opoku’s advanced simulations more computationally efficient and scalable.

“His approach is different from any of the ways that we’ve pursued in the group in the past,” Van Voorhis says.

Passing along the opportunity to learn

Opoku thanks previous mentors who helped him overcome the “intellectual overhead required to make contributions to the field,” and believes Van Voorhis will offer the same kind of support.

In 2021, Opoku joined the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers (NOBCChE) to gain mentorship, networking, and career development opportunities within a supportive community. He later led the Auburn University chapter as president, helping coordinate K-12 outreach to inspire the next generation of scientists, engineers, and innovators.

“Opoku’s mentorship goes above and beyond what would be typical at his career stage,” says Van Voorhis. “One reason is his ability to communicate science to people, and not just the concepts of science, but also the process of science.”

Back home, Opoku founded the Nesvard Institute of Molecular Sciences to support African students to develop not only skills for graduate school and professional careers, but also a sense of confidence and cultural identity. Through the nonprofit, he has mentored 29 students so far, passing along the opportunity for them to follow their curiosity and help others do the same.

“There are many areas of science and engineering to which Africans have made significant contributions, but these contributions are often not recognized, celebrated, or documented,” Opoku says.

He adds: “We have a duty to change the narrative.” 


The science of consciousness

Through the MIT Consciousness Club, professors Matthias Michel and Earl Miller are exploring how neurological activity gives rise to human experience.


Humans know they exist, but how does “knowing” work? Despite all that’s been learned about brain function and the bodily processes it governs, we still don’t understand where the subjective experiences associated with brain functions originate.

A new interdisciplinary project seeks to find answers to these kinds of big questions around consciousness, a fundamental yet elusive phenomenon.

The MIT Consciousness Club is co-led by philosopher Matthias Michel, the Old Dominion Career Development Professor in the Department of Linguistics and Philosophy, and Earl Miller, the Picower Professor of Neuroscience in the Department of Brain and Cognitive Sciences.

Funded by a grant from the MIT Human Insight Collaborative’s (MITHIC) SHASS+ Connectivity Fund, the MIT Consciousness Club aims to build a bridge between philosophy and cognitive (neuro)science, while also engaging the Boston area’s academic community to advance consciousness research.

“It’s possible to study this scientifically,” says Michel. “MIT positioning itself as a leader in the field would change everything.”

“Matthias takes a science-based approach to the work,” Miller adds. “A coherent, fact-based, research-supported understanding of and approach to consciousness can have a massive impact on our approach to public health.”

Working together, they hope to increase access to a diverse network of researchers, improve their understanding of how consciousness works, and develop tools to measure consciousness objectively.

The MIT Consciousness Club plans to hold monthly events featuring expert talks and Q&A sessions on topics like the neural correlates of consciousness, unconscious perception, and consciousness in animals and AI systems.

“What can science tell us about brain function and consciousness?” Michel asks. “Why does neurological activity give rise to conscious experience, as opposed to nothing?”

“Cognition is your brain self-organizing,” Miller adds. “How does the brain organize itself to attain goals?” Unlike amoebae, Miller notes, humans both react to and act on the environment.

Michel’s research focuses on the philosophy of cognitive science, mind, and perception, with interests in the philosophy of measurement and philosophy of psychiatry. Most of his recent work focuses on methodological and foundational issues in the scientific study of consciousness. 

Miller studies the neural basis of memory and cognition. His areas of focus include the neural mechanisms of attention, learning, and memory needed for voluntary, goal-directed behavior, with a special focus on the brain’s prefrontal cortex. 

“I was engaged with how the mind works”

Before arriving at MIT in 2024, Michel followed academic and research interests that led him to work at the intersection of neuroscience and philosophy. “I was engaged with how the mind works,” he says. He describes a course of study focused on issues related to logic and reasoning and the ways the brain toggles between conscious and unconscious function.

Following the completion of his doctoral and postdoctoral studies, he continued his investigation into the nature of consciousness. Work from Melvyn Goodale at Western University led to a light-bulb moment for him.

“According to Goodale, the brain operates with two visual systems — conscious and non-conscious — responsible for fine-grained motor commands,” he says. “Researchers discovered the way someone adjusts their grip, for example, is based on a non-conscious stream of vision.”

This discovery helped further Michel’s commitment to understanding consciousness’s function objectively. “How long does it take a person to become conscious of something?” he asks. “There is a lag between when a signal is presented and when we get to subjectively experience it.” Measuring that delay, and understanding the path from stimulus to signal processing and response, is a core facet of Michel’s investigation. Consciousness, he asserts, is for planning, not reacting.

Michel and Miller aren’t only interested in human brains. Improved understanding of consciousness in animals and other living things is also under discussion. “How do you organize states of consciousness in nonhuman species?” Michel asks. Understanding how other species interact with the world can help us understand both it and them better.

Making room for investigation and collaboration

One surprising discovery the pair made while shaping the idea that would become the MIT Consciousness Club is the size of the group interested in participating. “It’s larger than I thought,” Miller says. “We’ve established connections with colleagues at the Lincoln Laboratory and Northeastern University, all of whom are invested in studying consciousness.”

Both Michel and Miller believe researchers at MIT and elsewhere can benefit from the kind of collaboration MITHIC funding makes possible. “The goal is to create community,” Michel says, “while also improving the research area’s reputation.” 

“It’s possible to study consciousness scientifically because of its connection to other questions,” Miller adds.

The investigative avenues available when you can explore ideas for their own sake — like how consciousness functions, for example — can lead to exciting breakthroughs. “Imagine if consciousness research became a focus area, rather than a sideline, for people interested in its study,” Michel says. 

“You can’t study the complexities of executive [brain] function and not get to consciousness,” Miller continues. “Designing a system to effectively and accurately measure consciousness levels in the brain has a variety of potentially groundbreaking applications.”

Miller works with Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT and a practicing anesthesiologist at Massachusetts General Hospital for whom consciousness is a central concern.

“General anesthesia during surgical procedures is bad when you’re really young or really old,” Miller says. “Older people who need anesthesia can experience cognitive decline, which is why health-care providers are often reluctant to perform surgeries, even when patients need them.” A better understanding of the mechanisms that create consciousness can help improve pre- and post-surgical care delivery and outcomes.

Researching consciousness can also yield substantial public health benefits, including more-efficient mental health treatment. “Mental health disorders affect high-level cognitive function,” Miller continues. “Anesthesia interacts with drugs used to treat mental health disorders, which can severely impact patient care.” Each of the researchers wants to understand how drug therapies actually mediate patient experiences.

Ultimately, the professors agree that improved access to consciousness studies will improve research rigor and help burnish the field’s reputation.


MIT Energy Initiative conference spotlights research priorities amidst a changing energy landscape

Industry leaders agree collaboration is key to advancing critical technologies.


“We’re here to talk about really substantive changes, and we want you to be a participant in that,” said Desirée Plata, the School of Engineering Distinguished Professor of Climate and Energy in MIT’s Department of Civil and Environmental Engineering, at Energizing@MIT, the MIT Energy Initiative’s (MITEI) Annual Research Conference, held Sept. 9-10.

Plata’s words resonated with the 150-plus participants from academia, industry, and government meeting in Cambridge for the conference, whose theme was “tackling emerging energy challenges.” Meeting such challenges and ultimately altering the trajectory of global climate outcomes requires partnerships, speakers agreed.

“We have to be humble and open,” said Giacomo Silvestri, chair of Eniverse Ventures at Eni, in a shared keynote address. “We cannot develop innovation just focusing on ourselves and our competencies … so we need to partner with startups, venture funds, universities like MIT and other public and private institutions.” 

Added his Eni colleague, Annalisa Muccioli, head of research and technology, “The energy transition is a race we can win only by combining mature solutions ready to deploy, together with emerging technologies that still require acceleration and risk management.”

Research targets

In a conference that showcased a suite of research priorities MITEI has identified as central to ensuring a low-carbon energy future, participants shared both promising discoveries and strategies for advancing proven technologies in the face of shifting political winds and policy uncertainties.

One panel focused on grid resiliency — a topic that has moved from the periphery to the center of energy discourse as climate-driven disruptions, cyber threats, and the integration of renewables challenge legacy systems. A dramatic case in point: the April 2025 outage in Spain and Portugal that left millions without power for eight to 15 hours. 

“I want to emphasize that this failure was about more than the power system,” said MITEI research scientist Pablo Duenas-Martinez. While he pinpointed technical problems with reactive power and voltage control behind the system collapse, Duenas-Martinez also called out a lack of transmission capacity with Central Europe and out-of-date operating procedures, and recommended better preparation and communication among transmission systems and utility operators.

“You can’t plan for every single eventuality, which means we need to broaden the portfolio of extreme events we prepare for,” noted Jennifer Pearce, vice president at energy company Avangrid. “We are making the system smarter, stronger, and more resilient to better protect from a wide range of threats such as storms, flooding, and extreme heat events.” Pearce noted that Avangrid’s commitment to deliver safe, reliable power to its customers necessitates “meticulous emergency planning procedures.”

The resiliency of the electric grid under greatly increased demand is an important motivation behind MITEI’s September 2025 launch of the Data Center Power Forum, which was also announced during the annual research conference. The forum will include research projects, webinars, and other content focused on energy supply and storage, grid design and management, infrastructure, and public and economic policy related to data centers. The forum’s members include MITEI companies that also participate in MIT’s Center for Environmental and Energy Policy Research (CEEPR).

Storage and transportation: Staggering challenges

Meeting climate goals to decarbonize the world by 2050 requires building around 300 terawatt-hours of storage, according to Asegun Henry, a professor in the MIT Department of Mechanical Engineering. “It’s an unbelievably enormous problem people have to wrap their minds around,” he said. Henry has been developing a high-temperature thermal energy storage system he has nicknamed “sun in a box.” His system uses liquid metal and graphite to hold electricity as heat and then convert it back to electricity, enabling storage durations anywhere from five to 500 hours.

“At the end of the day, storage provides a service, and the type of technology that you need is a function of the service that you value the most,” said Nestor Sepulveda, commercial lead for advanced energy investments and partnerships at Google. “I don’t think there is one winner-takes-all type of market here.”

Another panel explored sustainable fuels that could help decarbonize hard-to-electrify sectors like aviation, shipping, and long-haul trucking. Randall Field, MITEI’s director of research, noted that sustainably produced drop-in fuels — fuels that are largely compatible with existing engines — “could eliminate potentially trillions of dollars of cost for fleet replacement and for infrastructure build-out, while also helping us to accelerate the rate of decarbonization of the transportation sectors.”

Erik G. Birkerts is the chief growth officer of LanzaJet, which produces a drop-in, high-energy-density aviation fuel derived from agricultural residue and other waste carbon sources. “The key to driving broad sustainable aviation fuel adoption is solving both the supply-side challenge through more production and the demand-side hurdle by reducing costs,” he said.

“We think a good policy framework [for sustainable fuels] would be something that is technology-neutral, does not exclude any pathways to produce, is based on life cycle accounting practices, and on market mechanisms,” said Veronica L. Robertson, energy products technology portfolio manager at ExxonMobil.

MITEI plans a major expansion of its research on sustainable fuels, announcing a two-year study, “The future of fuels: Pathways to sustainable transportation,” starting in early 2026. According to Field, the study will analyze and assess biofuels and e-fuels.

Solutions from labs big and small

Global energy leaders offered glimpses of their research projects. A panel on carbon capture in power generation featured three takes on the topic: Devin Shaw, commercial director of decarbonization technologies at Shell, described post-combustion carbon capture in power plants using steam for heat recovery; Jan Marsh, a global program lead at Siemens Energy, discussed deploying novel materials to capture carbon dioxide directly from the air; and Jeffrey Goldmeer, senior director of technology strategy at GE Vernova, explained integrating carbon capture into gas-powered turbine systems.

During a panel on vehicle electrification, Brian Storey, vice president of energy and materials at the Toyota Research Institute, provided an overview of Toyota’s portfolio of projects for decarbonization, including solid-state batteries, flexible manufacturing lines, and grid-forming inverters to support EV charging infrastructure.

A session on MITEI seed fund projects revealed promising early-stage research inside MIT’s own labs. A new process for decarbonizing the production of ethylene was presented by Yogesh Surendranath, Donner Professor of Science in the MIT Department of Chemistry. Materials Science and Engineering assistant professor Aristide Gumyusenge also discussed the development of polymers essential for a new kind of sodium-ion battery.

Shepherding bold, new technologies like these from academic labs into the real world cannot succeed without ample support and deft management. A panel on paths to commercialization featured the work of Iwnetim Abate, Chipman Career Development Professor and assistant professor in the MIT Department of Materials Science and Engineering, who has spun out a company, Addis Energy, based on a novel geothermal process for harvesting clean hydrogen and ammonia from subsurface, iron-rich rocks. Among his funders: ARPA-E and MIT’s own The Engine Ventures.

The panel also highlighted the MIT Proto Ventures Program, an initiative to seize early-stage MIT ideas and unleash them as world-changing startups. “A mere 4.2 percent of all the patents that are actually prosecuted in the world are ever commercialized, which seems like a shocking number,” said Andrew Inglis, an entrepreneur working with Proto Ventures to translate geothermal discoveries into businesses. “Can’t we do this better? Let’s do this better!”

Geopolitical hazards

Throughout the conference, participants often voiced concern about the impacts of competition between the United States and China. Kelly Sims Gallagher, dean of the Fletcher School at Tufts University and an expert on China’s energy landscape, delivered the sobering news in her keynote address: “U.S. competitiveness in low-carbon technologies has eroded in nearly every category,” she said. “The Chinese are winning the clean tech race.”

China enjoys a 51 percent share in global wind turbine manufacture and 75 percent in solar modules. It also controls low-carbon supply chains that much of the world depends on. “China is getting so dominant that nobody can carve out a comparative advantage in anything,” said Gallagher. “China is just so big, and the scale is so huge that the Chinese can truly conquer markets and make it very hard for potential competitors to find a way in.”

And for the United States, the problem is “the seesaw of energy policy,” she said. “It’s incredibly difficult for the private sector to plan and to operate, given the lack of predictability and policy here.”

Nevertheless, Gallagher believes the United States still has a chance of at least regaining competitiveness by setting up a stable, bipartisan energy policy; rebuilding domestic manufacturing and supply chains; providing consistent fiscal incentives; attracting and retaining global talent; and fostering international collaboration.

The conference shone a light on one such collaboration: a China-U.S. joint venture to manufacture lithium iron phosphate batteries for commercial vehicles in the United States. The venture brings together Eve Energy, a Chinese battery technology and manufacturing company; Daimler, a global commercial vehicle manufacturer; PACCAR Inc., a U.S.-based truck manufacturer; and Accelera, the zero-emissions business of Cummins Inc. “Manufacturing batteries in the U.S. makes the supply chain more robust and reduces geopolitical risks,” said Mike Gerty, of PACCAR.

While she acknowledged the obstacles confronting her colleagues in the room, Plata nevertheless concluded her remarks as a panel moderator with some optimism: “I hope you all leave this conference and look back on it in the future, saying I was in the room when they actually solved some of the challenges standing between now and the future that we all wish to manifest.”


Introducing the MIT-GE Vernova Energy and Climate Alliance

Five-year collaboration between MIT and GE Vernova aims to accelerate the energy transition and scale new innovations.


MIT and GE Vernova launched the MIT-GE Vernova Energy and Climate Alliance on Sept. 15, a collaboration to advance research and education focused on accelerating the global energy transition.

Through the alliance — an industry-academia initiative conceived by MIT Provost Anantha Chandrakasan and GE Vernova CEO Scott Strazik — GE Vernova has committed $50 million over five years in the form of sponsored research projects and philanthropic funding for research, graduate student fellowships, internships, and experiential learning, as well as professional development programs for GE Vernova leaders.

“MIT has a long history of impactful collaborations with industry, and the collaboration between MIT and GE Vernova is a shining example of that legacy,” said Chandrakasan in opening remarks at a launch event. “Together, we are working on energy and climate solutions through interdisciplinary research and diverse perspectives, while providing MIT students the benefit of real-world insights from an industry leader positioned to bring those ideas into the world at scale.”

The energy of change

An independent company since its spinoff from GE in April 2024, GE Vernova is focused on accelerating the global energy transition. The company’s technology helps generate approximately 25 percent of the world’s electricity, with the world’s largest installed base of gas turbines (more than 7,000), about 57,000 wind turbines, and leading-edge electrification technology.

GE Vernova’s slogan, “The Energy of Change,” is reflected in decisions such as locating its headquarters in Cambridge, Massachusetts — in close proximity to MIT. In pursuing transformative approaches to the energy transition, the company has identified MIT as a key collaborator.

A key component of the mission to electrify and decarbonize the world is collaboration, according to CEO Scott Strazik. “We want to inspire, and be inspired by, students as we work together on our generation’s greatest challenge, climate change. We have great ambition for what we want the world to become, but we need collaborators. And we need folks that want to iterate with us on what the world should be from here.”

Representing the Healey-Driscoll administration at the launch event were Massachusetts Secretary of Energy and Environmental Affairs Rebecca Tepper and Secretary of the Executive Office of Economic Development Eric Paley. Secretary Tepper highlighted the Mass Leads Act, a $1 billion climate tech and life sciences initiative enacted by Governor Maura Healey last November to strengthen Massachusetts’ leadership in climate tech and AI.

“We’re harnessing every part of the state, from hydropower manufacturing facilities to the blue economy on our south coast, and right here at the center of our colleges and universities. We want to invent and scale the solutions to climate change in our own backyard,” said Tepper. “That’s been the Massachusetts way for decades.”

Real-world problems, insights, and solutions

The launch celebration featured interactive science displays and student presenters introducing the first round of 13 research projects led by MIT faculty. These projects focus on generating scalable solutions to our most pressing challenges in the areas of electrification, decarbonization, renewables acceleration, and digital solutions.

Collaborating with industry offers the opportunity for researchers and students to address real-world problems informed by practical insights. The diverse, interdisciplinary perspectives from both industry and academia will significantly strengthen the research supported through the GE Vernova Fellowships announced at the launch event.

“I’m excited to talk to the industry experts at GE Vernova about the problems that they work on,” said GE Vernova Fellow Aaron Langham. “I’m looking forward to learning more about how real people and industries use electrical power.”

Fellow Julia Estrin echoed a similar sentiment: “I see this as a chance to connect fundamental research with practical applications — using insights from industry to shape innovative solutions in the lab that can have a meaningful impact at scale.”

GE Vernova’s commitment to research is also providing support and inspiration for fellows. “This level of substantive enthusiasm for new ideas and technology is what comes from a company that not only looks toward the future, but also has the resources and determination to innovate impactfully,” says Owen Mylotte, a GE Vernova Fellow.

The inaugural cohort of eight fellows will continue their research at MIT with tuition support from GE Vernova.

Pipeline of future energy leaders

Highlighting the alliance’s emphasis on cultivating student talent and leadership, GE Vernova CEO Scott Strazik introduced four MIT alumni who are now leaders at GE Vernova: Dhanush Mariappan SM ’03, PhD ’19, senior engineering manager in the GE Vernova Advanced Research Center; Brent Brunell SM ’00, technology director in the Advanced Research Center; Paolo Marone MBA ’21, CFO of wind; and Grace Caza MAP ’22, chief of staff in supply chain and operations.

The four shared their experiences of working with MIT as students and their hopes for the future of this alliance in the realm of “people development,” as Mariappan highlighted. “Energy transition means leaders. And every one of the innovative research and professional education programs that will come out of this alliance is going to produce the leaders of the energy transition industry.”

The alliance is underscoring its commitment to developing future energy leaders by supporting the New Engineering Education Transformation program (NEET) and expanding opportunities for student internships. With 100 new internships for MIT students announced in the days following the launch, GE Vernova is opening broad opportunities for MIT students at all levels to contribute to a sustainable future.

“GE Vernova has been a tremendous collaborator every step of the way, with a clear vision of the technical breakthroughs we need to effect change at scale and a deep respect for MIT’s strengths and culture, as well as a hunger to listen and learn from us as well,” said Betar Gallant, alliance director who is also the Kendall Rohsenow Associate Professor of Mechanical Engineering at MIT. “Students, take this opportunity to learn, connect, and appreciate how much you’re valued, and how bright your futures are in this area of decarbonizing our energy systems. Your ideas and insight are going to help us determine and drive what’s next.”

Daring to create the future we want

The launch event transformed MIT’s Lobby 13 with green lighting and animated conversation around the posters and hardware demos on display, reflecting the sense of optimism for the future and the type of change the alliance — and the Commonwealth of Massachusetts — seeks to advance.

“Because of this collaboration and the commitment to the work that needs doing, many things will be created,” said Secretary Paley. “People in this room will work together on all kinds of projects that will do incredible things for our economy, for our innovation, for our country, and for our climate.”

The alliance builds on MIT’s growing portfolio of initiatives around sustainable energy systems, including the Climate Project at MIT, a presidential initiative focused on developing solutions to some of the toughest barriers to an effective global climate response. “This new alliance is a significant opportunity to move the needle of energy and climate research as we dare to create the future that we want, with the promise of impactful solutions for the world,” said Evelyn Wang, MIT vice president for energy and climate, who attended the launch.

To that end, the alliance is supporting critical cross-institution efforts in energy and climate policy, including funding three master’s students in MIT’s Technology and Policy Program and hosting an annual symposium in February 2026 to advance interdisciplinary research. GE Vernova is also providing philanthropic support to the MIT Human Insight Collaborative. For 2025-26, this support will contribute to addressing global energy poverty by supporting the MIT Abdul Latif Jameel Poverty Action Lab (J-PAL) in its work to expand access to affordable electricity in South Africa.

“Our hope to our fellows, our hope to our students is this: While the stakes are high and the urgency has never been higher, the impact that you are going to have over the decades to come has never been greater,” said Roger Martella, chief corporate and sustainability officer at GE Vernova. “You have so much opportunity to move the world in a better direction. We need you to succeed. And our mission is to serve you and enable your success.”

With the alliance’s launch — and GE Vernova’s new membership in several other MIT consortium programs related to sustainability, automation and robotics, and AI, including the Initiative for New Manufacturing, MIT Energy Initiative, MIT Climate and Sustainability Consortium, and Center for Transportation and Logistics — it’s evident why Betar Gallant says the company is “all-in at MIT.”

The potential for tremendous impact on the energy industry is clear to those involved in the alliance. As GE Vernova Fellow Jack Morris said at the launch, “This is the beginning of something big.”


Q&A: On the ethics of catastrophe

Jack Carson, an MIT second-year undergraduate and EECS major, is the recent winner of the Elie Wiesel Prize in Ethics.


At first glance, student Jack Carson might appear too busy to think beyond his next problem set, much less tackle major works of philosophy. The second-year undergraduate, who plans to double major in electrical engineering with computing and in mathematics, has been both an officer in Impact@MIT and a Social and Ethical Responsibility in Computing (SERC) Fellow in the MIT Schwarzman College of Computing — and is an active member of Concourse.

But this fall, Carson was awarded first place in the Elie Wiesel Prize in Ethics Essay Contest for his entry, “We Know Only Men: Reading Emmanuel Levinas On The Rez,” a comparative exploration of Jewish and Cherokee ethical thought. The deeply researched essay links Carson’s hometown in Adair County, Oklahoma, to the village of Le Chambon-sur-Lignon, France, and attempts to answer the question “What is to be done after catastrophe?” Carson explains his thinking in this interview.

Q: The prompt for your entry in the Elie Wiesel Prize in Ethics Essay Contest was: “What challenges awaken your conscience? Is it the conflicts in American society? An international crisis? Maybe a difficult choice you currently face or a hard decision you had to make?” How did you land on the topic you’d write about?

A: It was really an insight that just came to me as I struggled with reading Levinas, who is notoriously challenging. The Talmud is a tradition very far from my own, but, as I read Levinas’ lectures on the Talmud, I realized that his project is one that I can relate to: preserving a culture that has been completely displaced, if not destroyed. The more I read of Levinas’ work, the more I realized that his philosophy of radical alterity — that you must act when confronted with another person who you can never really comprehend — arose naturally from his efforts to show how to preserve Jewish cultural continuity. In the same, if less articulated, way, the life I’ve witnessed in Eastern Oklahoma has led people to “act first, think later” — to use a Levinasian term. So it struck me that similar situations of displaced cultures had led to a similar ethical approach. Given that Levinas was writing about Jewish life in Eastern Europe and I was immersed in a heavily Native American culture, the congruence of the two ethical approaches seemed surprising. I thought, perhaps rightly, that it showed something essentially human that could be abstracted away from the very different cultural settings.

Q: Your entry for the contest is a meditation on the ethical similarities between ga-du-gi, the Cherokee concept of communal effort toward the betterment of all; the actions of the Huguenot inhabitants of the French village of Le Chambon-sur-Lignon (who protected thousands of Jewish refugees during Nazi occupation); and the Jewish philosopher Emmanuel Levinas’ interpretation of the Talmud, which essentially posits that action must come first in an ethical framework, not second. Did you find your own personal philosophy changing as a result of engaging with these ideas — or, perhaps more appropriately — have you noticed your everyday actions changing? 

A: Yes, definitely my personal philosophy has been affected by thinking through Levinas’ demanding approach. Like a lot of people, I sit around thinking through what ethical approach I prefer. Should I be a utilitarian? A virtue theorist? A Kantian? Something else? Levinas had no time for this. He urged acting, not thinking, when confronted with human need. I wrote about the resistance movement of Le Chambon because those brave citizens also just acted without thinking — in a very Levinasian way. That seems a strange thing to valorize, as we are often taught to think before we act, and this is probably good advice! But sometimes you can think your way right out of helping people in need. 

Levinas instructed that you should act in the face of the overwhelming need of what he would call the “Other.” That’s a rather intimidating term, but I read it as meaning just “other people.” The Le Chambon villagers, who protected Jews fleeing the Nazis, and the Cherokees lived this, helping people in need in an almost pre-theoretical way that is really quite beautiful. And for Levinas, I’d note that the problematic word is “because.” And I wrote about how “because” is indeed a thin reed that the murderers will always break. 

Put a little differently, “because” suggests that you have to have “reasons” that complete the phrase and make it coherent. This might seem almost a matter of logic. But Levinas says no. Because the genocide starts when the reasons are attacked. For example, you might believe we should help some persecuted group “because” they are really just like you and me. And that’s true, of course. But Levinas knows that the killers always start by dehumanizing their targets, so they convince you that the victims are not really like you at all, but are more like “vermin” or “insects.” So the “because” condition fails, and that’s when the murdering starts. So you should just act and then think, says Levinas, and this immunizes you from that rhetorical poison. It’s a counterintuitive idea, but powerful when you really think about it.

Q: You open with a particularly striking question: What is to be done after catastrophe? Do you feel more sure of your answer, now that you’ve deeply considered these disparate responses to catastrophe — or do you have more questions? 

A: I am still not sure what to do after world-historical catastrophes like genocides. I guess I’d say there is nothing to do — other than maintain a kind of radical hope that has no basis in evidence. “Catastrophes” like those I write about — the Holocaust, the Trail of Tears — are more than just acts of physical destruction. They destroy whole ways of being and uproot whole systems of meaning-making. Cultural concepts become void overnight, as their preconditions are destroyed. 

There is a great book by Jonathan Lear called “Radical Hope.” It begins with a discussion of a Plains Indian leader named Plenty Coups. After removal to the reservation in the 19th century, he is quoted as saying, “But when the buffalo went away the hearts of my people fell to the ground, and they could not lift them up again. After this nothing happened.” Lear ponders what that last sentence is all about. What did Plenty Coups mean when he said “after this nothing happened?” Obviously, life’s daily activities still happened: births, deaths, eating, drinking, and such. So what does it mean? It’s perplexing. In the end, Lear concludes that Plenty Coups was making an ontological statement, in which he meant that all of the things that gave life meaning — all of those things that make the word “happen” actually signify something — had been erased. Events occurred, but didn’t “happen” because they fell into a world that to Plenty Coups lacked any sense at all. And Plenty Coups was not wrong about this; for him and his people, the world lost intelligibility. Nonetheless, Plenty Coups continued to lead his people, even amidst great deprivation, even though he never found a new basis for belief. He only had “radical hope” — which gave Lear’s book its name — that some new way of life might arise over time. I guess my answer to “what happens after catastrophe?” is just, well, “nothing happens” in the sense Plenty Coups meant it. And “radical hope” is all you get, if anything.

Q: There’s a memorable scene in your essay in which, during a visit to your community cemetery near Stilwell, your grandfather points out the burial plots that hold both your ancestors, and that will eventually hold him and you. You describe this moment beautifully as a comforting and connective chain linking you to both past and future communities. How does being part of that chain shape your life? 

A: I feel this sense of knowing where you will be buried — alongside all of your ancestors — is a great gift. That sounds a little odd, but it gives a rootedness that is very removed from most people’s experience today. And the cemetery is just a stand-in for a whole cultural structure that gives me a sense of role and responsibility. The lack of these, I think, creates a real sense of alienation, and this alienation is the condition of our age. So I feel lucky to have a strong sense of place and a place that will always be home. Lincoln talked about the “mystic chords of memory.” I feel this very mystical attachment to Oklahoma. The idea that this road or this community is one where every member of your family for generations has lived — or even if they moved away, always considered “home” — is very powerful. It always gives an answer to “Who are you?” That’s a hard question, but I can always say, “We are from Adair County,” and this is a sufficient answer. And back home, people would instantly nod their heads at the adequacy of this response. As I said, it’s a little mystical, but maybe that’s a strength, not a weakness.

Q: People might be surprised to learn that the winner of an essay contest focusing on ethics is actually not an English or philosophy major, but is instead in EECS. What areas and current issues in the field do you find interesting from an ethical perspective?

A: I think the pace of technological change — and society’s struggle to keep up — shows you how important philosophy, literature, history, and the liberal arts really are. Whether it’s algorithmic bias affecting real lives, or questions about what values we encode in AI systems, these aren’t just technical problems, but fundamentally about who we are and what we owe each other. It is true that I’m majoring in 6-5 [electrical engineering with computing] and 18 [mathematics], and of course these disciplines are extraordinarily important. But the humanities are something very important to me, as they do answer fundamental questions about who we are, what we owe to others, why people act this way or that, and how we should think through social issues. I despair when I hear brilliant engineers say they read nothing longer than a blog post. If anything, the humanities should be more important overall at MIT. 

When I was younger, I just happened across a discussion of C.P. Snow’s famous essay on the “Two Cultures.” In it, he talks about his scientist friends who had never read Shakespeare, and his literary friends who couldn’t explain thermodynamics. In a modest way, I’ve always thought that I’d like my education to be one that allowed me to participate in both cultures. The essay on Levinas is my attempt to pursue this type of education.


Study suggests 40Hz sensory stimulation may benefit some Alzheimer’s patients for years

Five volunteers received 40Hz stimulation for around two years after an early-stage clinical study. Those with late-onset Alzheimer’s performed better on assessments than Alzheimer’s patients outside the trial.


A new research paper documents the outcomes of five volunteers who continued to receive 40Hz light and sound stimulation for around two years after participating in an MIT early-stage clinical study of the potential Alzheimer’s disease (AD) therapy. The results show that for the three participants with late-onset Alzheimer’s disease, several measures of cognition remained significantly higher than comparable Alzheimer’s patients in national databases. Moreover, in the two late-onset volunteers who donated plasma samples, levels of Alzheimer’s biomarker tau proteins were significantly decreased.

The three volunteers who experienced these benefits were all female. The two other participants, both men with early-onset forms of the disease, did not exhibit significant benefits after two years. The dataset, while small, represents the longest-term test so far of the safe, noninvasive treatment method (called GENUS, for gamma entrainment using sensory stimuli), which is also being evaluated in a nationwide clinical trial run by MIT-spinoff company Cognito Therapeutics.

“This pilot study assessed the long-term effects of daily 40Hz multimodal GENUS in patients with mild AD,” the authors wrote in an open-access paper in Alzheimer's & Dementia: The Journal of the Alzheimer’s Association. “We found that daily 40Hz audiovisual stimulation over 2 years is safe, feasible, and may slow cognitive decline and biomarker progression, especially in late-onset AD patients.”

Diane Chan, a former research scientist in The Picower Institute for Learning and Memory and a neurologist at Massachusetts General Hospital, is the study’s lead and co-corresponding author. Picower Professor Li-Huei Tsai, director of The Picower Institute and the Aging Brain Initiative at MIT, is the study’s senior and co-corresponding author.

An “open label” extension

In 2020, MIT enrolled 15 volunteers with mild Alzheimer’s disease in an early-stage trial to evaluate whether an hour a day of 40Hz light and sound stimulation, delivered via an LED panel and speaker in their homes, could deliver clinically meaningful benefits. Several studies in mice had shown that the sensory stimulation increases the power and synchrony of 40Hz gamma frequency brain waves, preserves neurons and their network connections, reduces Alzheimer’s proteins such as amyloid and tau, and sustains learning and memory. Several independent groups have also made similar findings over the years.

MIT’s trial, though cut short by the Covid-19 pandemic, found significant benefits after three months. The new study examines outcomes among five volunteers who continued to use their stimulation devices on an “open label” basis for two years. These volunteers came back to MIT for a series of tests 30 months after their initial enrollment. Because four participants started the original trial as controls (meaning they initially did not receive 40Hz stimulation), their open label usage was six to nine months shorter than the 30-month period.

The testing at zero, three, and 30 months of enrollment included measurements of their brain wave response to the stimulation, MRI scans of brain volume, measures of sleep quality, and a series of five standard cognitive and behavioral tests. Two participants gave blood samples. For comparison to untreated controls, the researchers combed through three national databases of Alzheimer’s patients, matching thousands of them on criteria such as age, gender, initial cognitive scores, and retests at similar time points across a 30-month span.

Outcomes and outlook

The three female late-onset Alzheimer’s volunteers showed improvement or slower decline on most of the cognitive tests, including significantly positive differences compared to controls on three of them. These volunteers also showed increased brain-wave responsiveness to the stimulation at 30 months and showed improvement in measures of circadian rhythms. In the two late-onset volunteers who gave blood samples, there were significant declines in phosphorylated tau (47 percent for one and 19.4 percent for the other) on a test recently approved by the U.S. Food and Drug Administration as the first plasma biomarker for diagnosing Alzheimer’s.

“One of the most compelling findings from this study was the significant reduction of plasma pTau217, a biomarker strongly correlated with AD pathology, in the two late-onset patients in whom follow-up blood samples were available,” the authors wrote in the journal. “These results suggest that GENUS could have direct biological impacts on Alzheimer’s pathology, warranting further mechanistic exploration in larger randomized trials.”

Although the initial trial results showed preservation of brain volume at three months among those who received 40Hz stimulation, that was not significant at the 30-month time point. And the two male early-onset volunteers did not show significant improvements on cognitive test scores. Notably, the early-onset patients showed significantly reduced brain-wave responsiveness to the stimulation.

Although the sample is small, the authors hypothesize that the difference between the two sets of patients is likely attributable to the difference in disease onset, rather than the difference in gender.

“GENUS may be less effective in early onset Alzheimer’s disease patients, potentially owing to broad pathological differences from late-onset Alzheimer’s disease that could contribute to differential responses,” the authors wrote. “Future research should explore predictors of treatment response, such as genetic and pathological markers.”

Currently, the research team is studying whether GENUS may have a preventative effect when applied before disease onset. The new trial is recruiting participants aged 55-plus with normal memory who have or had a close family member with Alzheimer's disease, including early-onset.

In addition to Chan and Tsai, the paper’s other authors are Gabrielle de Weck, Brennan L. Jackson, Ho-Jun Suk, Noah P. Milman, Erin Kitchener, Vanesa S. Fernandez Avalos, MJ Quay, Kenji Aoki, Erika Ruiz, Andrew Becker, Monica Zheng, Remi Philips, Rosalind Firenze, Ute Geigenmüller, Bruno Hammerschlag, Steven Arnold, Pia Kivisäkk, Michael Brickhouse, Alexandra Touroutoglou, Emery N. Brown, Edward S. Boyden, Bradford C. Dickerson, and Elizabeth B. Klerman.

Funding for the research came from the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, the Eleanor Schwartz Charitable Foundation, the Dolby Family, Che King Leo, Amy Wong and Calvin Chin, Kathleen and Miguel Octavio, the Degroof-VM Foundation, the Halis Family Foundation, Chijen Lee, Eduardo Eurnekian, Larry and Debora Hilibrand, Gary Hua and Li Chen, Ko Hahn Family, Lester Gimpelson, David B Emmes, Joseph P. DiSabato and Nancy E. Sakamoto, Donald A. and Glenda G. Mattes, the Carol and Gene Ludwig Family Foundation, Alex Hu and Anne Gao, Elizabeth K. and Russell L. Siegelman, the Marc Haas Foundation, Dave and Mary Wargo, James D. Cook, and the Nobert H. Hardner Foundation.


John Marshall and Erin Kara receive postdoctoral mentoring award

Faculty recognized for the exceptional professional and personal guidance they provide postdocs.


Shining a light on the critical role of mentors in a postdoc’s career, the MIT Postdoctoral Association presented the fourth annual Excellence in Postdoctoral Mentoring Awards to professors John Marshall and Erin Kara.

The awards honor faculty and principal investigators who have distinguished themselves across four areas: the professional development opportunities they provide, the work environment they create, the career support they provide, and their commitment to continued professional relationships with their mentees. 

They were presented at the annual Postdoctoral Appreciation event hosted by the Office of the Vice President for Research (VPR), on Sept. 17.

An MIT Postdoctoral Association (PDA) committee, chaired this year by Danielle Coogan, oversees the awards process in coordination with VPR and reviews nominations by current and former postdocs. “[We’re looking for] someone who champions a researcher, a trainee, but also challenges them,” says Bettina Schmerl, PDA president in 2024-25. “Overall, it’s about availability, reasonable expectations, and empathy. Someone who sees the postdoctoral scholar as a person of their own, not just someone who is working for them.” Marshall’s and Kara’s steadfast dedication to their postdocs set them apart, she says.

Speaking at the VPR resource fair during National Postdoc Appreciation Week, Vice President for Research Ian Waitz acknowledged “headwinds” in federal research funding and other policy issues, but urged postdocs to press ahead in conducting the very best research. “Every resource in this room is here to help you succeed in your path,” he said.

Waitz also commented on MIT’s efforts to strengthen postdoctoral mentoring over the last several years, and the influence of these awards in bringing lasting attention to the importance of mentoring. “The dossiers we’re getting now to nominate people [for the awards] may have five, 10, 20 letters of support,” he noted. “What we know about great mentoring is that it carries on between academic generations. If you had a great mentor, then you are more likely to be an amazing mentor once you’ve seen it demonstrated.”

Ann Skoczenski, director of MIT Postdoctoral Services, works closely with Waitz and the Postdoctoral Association to address the goals and concerns of MIT’s postdocs to ensure a successful experience at the Institute. “The PDA and the whole postdoctoral community do critical work at MIT, and it’s a joy to recognize them and the outstanding mentors who guide them,” said Skoczenski.

A foundation in good science

The awards recognize excellent mentors in two categories. Marshall, professor of oceanography in the Department of Earth, Atmospheric and Planetary Sciences, received the “Established Mentor Award.” 

Nominators described Marshall’s enthusiasm for research as infectious, creating an exciting work environment that sets the tone. “John’s mentorship is unique in that he immerses his mentees in the heart of cutting-edge research. His infectious curiosity and passion for scientific excellence make every interaction with him a thrilling and enriching experience,” one postdoc wrote.

At the heart of Marshall’s postdoc relationships is a straightforward focus on doing good science and working alongside postdocs and students as equals. As one nominator wrote, “his approach is centered on empowering his mentees to assume full responsibility for their work, engage collaboratively with colleagues, and make substantial contributions to the field of science.” 

His high expectations are matched by the generous assistance he provides his postdocs when needed. “He balances scientific rigor with empathy, offers his time generously, and treats his mentees as partners in discovery,” a nominator wrote.

Navigating career decisions and gaining the right experience along the way are important aspects of the postdoc experience. “When it was time for me to move to a different step in my career, John offered me the opportunities to expand my skills by teaching, co-supervising PhD students, working independently with other MIT faculty members, and contributing to grant writing,” one postdoc wrote. 

Marshall’s research group has focused on ocean circulation and coupled climate dynamics involving interactions between motions on different scales, using theory, laboratory experiments, observations and innovative approaches to global ocean modeling.

“I’ve always told my postdocs, if you do good science, everything will sort itself out. Just do good work,” Marshall says. “And I think it’s important that you allow the glory to trickle down.” 

Marshall sees postdoc appointments as a time they can learn to play to their strengths while focusing on important scientific questions. “Having a great postdoc [working] with you and then seeing them going on to great things, it’s such a pleasure to see them succeed,” he says. 

“I’ve had a number of awards. This one means an awful lot to me, because the students and the postdocs matter as much as the science.”

Supporting the whole person

Kara, associate professor of physics, received the “Early Career Mentor Award.”

Many nominators praised Kara’s ability to give advice based on her postdocs’ individual goals. “Her mentoring style is carefully tailored to the particular needs of every individual, to accommodate and promote diverse backgrounds while acknowledging different perspectives, goals, and challenges,” wrote one nominator.

Creating a welcoming and supportive community in her research group, Kara empowers her postdocs by fostering their independence. “Erin’s unique approach to mentorship reminds us of the joy of pursuing our scientific curiosities, enables us to be successful researchers, and prepares us for the next steps in our chosen career path,” said one. Another wrote, “Rather than simply giving answers, she encourages independent thinking by asking the right questions, helping me to arrive at my own solutions and grow as a researcher.”

Kara’s ability to offer holistic, nonjudgmental advice was a throughline in her nominations. “Beyond her scientific mentorship, what truly sets Erin apart is her thoughtful and honest guidance around career development and life beyond work,” one wrote. Another nominator highlighted their positive relationship, writing, “I feel comfortable sharing my concerns and challenges with her, knowing that I will be met with understanding, insightful advice, and unwavering support.” 

Kara’s research group is focused on understanding the physics behind how black holes grow and affect their environments. Kara has advanced a new technique called X-ray reverberation mapping, which allows astronomers to map the gas falling on to black holes and measure the effects of strongly curved spacetime close to the event horizon. 

“I feel like postdocs hold a really special place in our research groups because they come with their own expertise,” says Kara. “I’ve hired them particularly because I want to learn and grow from them as well, and hopefully vice versa.” Kara focuses her mentorship on providing autonomy, giving postdocs their own mentorship opportunities, and treating them like colleagues.

A postdoc appointment “is this really pivotal time in your career, when you’re figuring out what it is you want to do with the rest of your life,” she says. “So if I can help postdocs navigate that by giving them some support, but also giving them independence to be able to take their next steps, that feels incredibly valuable.”

“I just feel like they make my work/life so rich, and it’s not a hard thing to mentor them because they all are such awesome people and they make our research group really fun.”


From nanoscale to global scale: Advancing MIT’s special initiatives in manufacturing, health, and climate

MIT.nano cleanroom complex named after Robert Noyce PhD ’53 at the 2025 Nano Summit.


“MIT.nano is essential to making progress in high-priority areas where I believe that MIT has a responsibility to lead,” opened MIT President Sally Kornbluth at the 2025 Nano Summit. “If we harness our collective efforts, we can make a serious positive impact.”

It was these collective efforts that drove discussions at the daylong event hosted by MIT.nano and focused on the importance of nanoscience and nanotechnology across MIT's special initiatives — projects deemed critical to MIT’s mission to help solve the world’s greatest challenges. With each new talk, common themes were reemphasized: collaboration across fields, solutions that can scale up from lab to market, and the use of nanoscale science to enact grand-scale change.

“MIT.nano has truly set itself apart, in the Institute's signature way, with an emphasis on cross-disciplinary collaboration and open access,” said Kornbluth. “Today, you're going to hear about the transformative impact of nanoscience and nanotechnology, and how working with the very small can help us do big things for the world together.”

Collaborating on health

Angela Koehler, faculty director of the MIT Health and Life Sciences Collaborative (MIT HEALS) and the Charles W. and Jennifer C. Johnson Professor of Biological Engineering, opened the first session with a question: How can we build a community across campus to tackle some of the most transformative problems in human health? In response, three speakers shared their work enabling new frontiers in medicine.

Ana Jaklenec, principal research scientist at the Koch Institute for Integrative Cancer Research, spoke about single-injection vaccines, and how her team looked to the techniques used in fabrication of electrical engineering components to see how multiple pieces could be packaged into a tiny device. “MIT.nano was instrumental in helping us develop this technology,” she said. “We took something that you can do in microelectronics and the semiconductor industry and brought it to the pharmaceutical industry.”

While Jaklenec applied insight from electronics to her work in health care, Giovanni Traverso, the Karl Van Tassel Career Development Professor of Mechanical Engineering, who is also a gastroenterologist at Brigham and Women’s Hospital, found inspiration in nature, studying the cephalopod squid and remora fish to design ingestible drug delivery systems. Representing the industry side of life sciences, Mirai Bio senior vice president Jagesh Shah SM ’95, PhD ’99 presented his company’s precision-targeted lipid nanoparticles for therapeutic delivery. Shah, as well as the other speakers, emphasized the importance of collaboration between industry and academia to make meaningful impact, and the need to strengthen the pipeline for young scientists.

Manufacturing, from the classroom to the workforce

Paving the way for future generations was similarly emphasized in the second session, which highlighted MIT’s Initiative for New Manufacturing (MIT INM). “MIT’s dedication to manufacturing is not only about technology research and education, it’s also about understanding the landscape of manufacturing, domestically and globally,” said INM co-director A. John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering. “It’s about getting people — our graduates who are budding enthusiasts of manufacturing — out of campus and starting and scaling new companies,” he said.

On progressing from lab to market, Dan Oran PhD ’21 shared his career trajectory from technician to PhD student to founding his own company, Irradiant Technologies. “How are companies like Dan’s making the move from the lab to prototype to pilot production to demonstration to commercialization?” asked the next speaker, Elisabeth Reynolds, professor of the practice in urban studies and planning at MIT. “The U.S. capital market has not historically been well organized for that kind of support.” She emphasized the challenge of scaling innovations from prototype to production, and the need for workforce development.

“Attracting and retaining workforce is a major pain point for manufacturing businesses,” agreed John Liu, principal research scientist in mechanical engineering at MIT. To keep new ideas flowing from the classroom to the factory floor, Liu proposes a new worker type in advanced manufacturing — the technologist — someone who can be a bridge to connect the technicians and the engineers.

Bridging ecosystems with nanoscience

Bridging people, disciplines, and markets to effect meaningful change was also emphasized by Benedetto Marelli, mission director for the MIT Climate Project and associate professor of civil and environmental engineering at MIT.

“If we’re going to have a tangible impact on the trajectory of climate change in the next 10 years, we cannot do it alone,” he said. “We need to take care of ecology, health, mobility, the built environment, food, energy, policies, and trade and industry — and think about these as interconnected topics.”

Faculty speakers in this session offered a glimpse of nanoscale solutions for climate resiliency. Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering, presented his group’s work on using nanoparticles to turn waste methane and urea into renewable materials. Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor, spoke about scaling carbon dioxide removal systems. Mechanical engineering professor Kripa Varanasi highlighted, among other projects, his lab’s work on improving agricultural spraying so pesticides adhere to crops, reducing agricultural pollution and cost.

In all of these presentations, the MIT faculty highlighted the tie between climate and the economy. “The economic systems that we have today are depleting to our resources, inherently polluting,” emphasized Plata. “The goal here is to use sustainable design to transition the global economy.”

What do people do at MIT.nano?

This is where MIT.nano comes in, offering shared-access facilities where researchers can design creative solutions to these global challenges. “What do people do at MIT.nano?” asked associate director for Fab.nano Jorg Scholvin ’00, MNG ’01, PhD ’06 in the session on MIT.nano’s ecosystem. With 1,500 individuals and over 20 percent of MIT faculty labs using MIT.nano, it’s a difficult question to answer quickly. However, in a rapid-fire research showcase, students and postdocs gave a response that ranged from 3D transistors and quantum devices to solar solutions and art restoration. Their work reflects the challenges and opportunities shared at the Nano Summit: developing technologies ready to scale, uniting disciplines to tackle complex problems, and gaining hands-on experience that prepares them to contribute to the future of hard tech.

The researchers’ enthusiasm carried the excitement and curiosity that President Kornbluth mentioned in her opening remarks, and that many faculty emphasized throughout the day. “The solutions to the problems we heard about today may come from inventions that don't exist yet,” said Strano. “These are some of the most creative people, here at MIT. I think we inspire each other.”

Robert N. Noyce (1953) Cleanroom at MIT.nano

Collaborative inspiration is not new to the MIT culture. The Nano Summit sessions focused on where we are today, and where we might be going in the future, but also reflected on how we arrived at this moment. Honoring visionaries of nanoscience and nanotechnology, President Emeritus L. Rafael Reif delivered the closing remarks and an exciting announcement — the dedication of the MIT.nano cleanroom complex. Made possible through a gift by Ray Stata SB ’57, SM ’58, this research space, 45,000 square feet of ISO 5, 6, and 7 cleanrooms, will be named the Robert N. Noyce (1953) Cleanroom.

“Ray Stata was — and is — the driving force behind nanoscale research at MIT,” said Reif. “I want to thank Ray, whose generosity has allowed MIT to honor Robert Noyce in such a fitting way.”

Ray Stata co-founded Analog Devices in 1965; Noyce co-founded Fairchild Semiconductor in 1957 and later Intel in 1968. Noyce, widely regarded as the “Mayor of Silicon Valley,” became chair of the Semiconductor Industry Association in 1977, and over the next 40 years, semiconductor technology advanced a thousandfold, from micrometers to nanometers.

“Noyce was a pioneer of the semiconductor industry,” said Stata. “It is due to his leadership and remarkable contributions that electronics technology is where it is today. It is an honor to be able to name the MIT.nano cleanroom after Bob Noyce, creating a permanent tribute to his vision and accomplishments in the heart of the MIT campus.”

To conclude his remarks and the 2025 Nano Summit, Reif brought the nano journey back to today, highlighting technology giants such as Lisa Su ’90, SM ’91, PhD ’94, for whom Building 12, the home of MIT.nano, is named. “MIT has educated a large number of remarkable leaders in the semiconductor space,” said Reif. “Now, with the Robert Noyce Cleanroom, this amazing MIT community is ready to continue to shape the future with the next generation of nano discoveries — and the next generation of nano leaders, who will become living legends in their own time.”


Phil Sharp-Alnylam Fund for Emerging Scientists to support MIT biology graduate students and faculty

Alnylam Pharmaceuticals establishes named fund in honor of its co-founder, an MIT Institute Professor and Nobel laureate.


There’s no question that graduate school in fundamental research has never been for the faint of heart, but academia’s nationwide funding disruptions threaten not just research happening now, but the critical pipeline for the next generation of scientists.

“What’s keeping me up at night is the uncertainty,” says MIT Institute Professor and Nobel laureate Phillip A. Sharp, who is also professor of biology emeritus and intramural faculty at the Koch Institute for Integrative Cancer Research.

In the short term, Sharp foresees challenges in sustaining students so they can complete their degrees, postdocs to finish their professional preparation, and faculty to set up and sustain their labs. In the long term, the impact becomes potentially existential — fewer people pursuing academia now means fewer advancements in the decades to come.

So, when Sharp was looped into discussions about a gift in his honor, he knew exactly where it should be directed. Established this year thanks to a generous donation from Alnylam Pharmaceuticals, the Phil Sharp-Alnylam Fund for Emerging Scientists will support graduate students and faculty within life sciences.

“This generosity by Alnylam provides an opportunity to bridge the uncertainty and ideally create the environment where students and others will feel that it’s possible to do science and have a career,” Sharp says. 

The fund is set up to be flexible, so the expendable gift can be used to address the evolving needs of the Department of Biology, including financial support, research grants, and seed funding. 

“This fund will help us fortify the department’s capacity to train new generations of life science innovators and leaders,” says department head Amy E. Keating, the Jay A. Stein (1968) Professor of Biology and professor of biological engineering. “It is a great privilege for the department to be part of this recognition of Phil’s key role at Alnylam.”

Alnylam Pharmaceuticals, a company Sharp co-founded in 2002, is, in fact, a case study for the type of long-term investment in fundamental discovery that leads to paradigm-shifting strides in biomedical science — strides that begin with a question: What if the genetic drivers of diseases could be silenced by harnessing a naturally occurring gene regulation process?

Good things take time

In 1998, Andrew Fire PhD ’83, who was trained as a graduate student in the Sharp Lab at MIT, and Craig Mello published a paper showing that double-stranded RNA suppresses the expression of the protein from the gene that encodes its sequence. The process, known as RNA interference (RNAi), was such a groundbreaking revelation that Fire and Mello shared the Nobel Prize in Physiology or Medicine less than a decade later. 

RNAi is an innate cellular gene regulation process that can, for example, assist cells in defending against viruses by degrading viral RNA, thereby interfering with the production of viral proteins. Taking advantage of this natural process to fine-tune the expression of genes that encode specific proteins was a promising option for disease treatment, as many diseases are caused by the creation or buildup of mutated or faulty proteins. This approach would address the root cause of the disease, rather than its downstream symptoms.

The details of the biochemistry of RNAi were characterized and patented, and in 2002, Alnylam was founded by Sharp, David Bartel, Paul Schimmel PhD ’67, Thomas Tuschl, and Phillip Zamore SM ’86. 

“Sixteen years later, we got our first approval for a totally novel therapeutic agent to treat disease,” Sharp says. “Something in a research laboratory, translated in about as short a time as you can do, gave rise to this whole new way of treating critical diseases.” 

This timeline isn’t atypical. Particularly in health care, Sharp notes, investments often occur five or 10 years before they come to fruition. 

“Phil Sharp’s visionary idea of harnessing RNAi to treat disease brought brilliant people together to pioneer this new class of medicines. RNAi therapeutics would not exist without the bridge Phil built between academia and industry. Now there are six approved Alnylam-discovered RNAi therapeutics, and we are exploring potential treatments for a range of rare and prevalent diseases to improve the lives of many more patients in need,” says Kevin Fitzgerald, chief scientific officer of Alnylam Pharmaceuticals. 

Today, the company has grown to over 2,500 employees, markets its six approved treatments worldwide, and has a long list of research programs that are likely to yield new therapeutic agents in the years to come.

Change is always on the horizon

Sharp foresees potential benefits for companies that contribute to academia, in the way Alnylam Pharmaceuticals has through the Phil Sharp-Alnylam Fund for Emerging Scientists. 

“We are proud to support the MIT Department of Biology because investments in both early-stage and high-risk research have the potential to unlock the next wave of medical breakthroughs to help so many patients waiting for hope throughout the world,” says Yvonne Greenstreet, CEO of Alnylam Pharmaceuticals. 

It is prudent for industry to keep its finger on the pulse — both to spot new talent and to anticipate landscape-shifting advancements, such as artificial intelligence. Sharp notes that academia, in its pursuit of fundamental knowledge, “creates new ideas, new opportunities, and new ways of doing things.” 

“All of society, including biotech, is anticipating that AI is going to be a great accelerator,” Sharp says. “Being associated with institutions that have great biology, chemistry, neuroscience, engineering, and computational innovation is how you sort through this anticipation of what the future is going to be.”

But, Sharp says, it’s a two-way street: Academia should also be asking how it can best support the future workplaces of its students, many of whom will go on to careers in industry. To that end, the Department of Biology recently launched a career connections initiative that lets current trainees draw on the guidance and experience of alums, and learn how to hone their knowledge so they can add value to industry.  

“The symbiotic nature of these relationships is healthy for the country, and for society, all the way from basic research through innovative companies of all sizes, health-care delivery, hospitals, and right down to primary care physicians meeting one-on-one with patients,” Sharp says. “We’re all part of that, and unless all parts of it remain healthy and appreciated, it will bode poorly for the future of the country’s economy and well-being.”


Leading quantum at an inflection point

The MIT Quantum Initiative is taking shape, leveraging quantum breakthroughs to drive the future of scientific and technological progress.


Danna Freedman is seeking the early adopters.

She is the faculty director of the nascent MIT Quantum Initiative, or QMIT. In this new role, Freedman is giving shape to an ambitious, Institute-wide effort to apply quantum breakthroughs to the most consequential challenges in science, technology, industry, and national security.

The interdisciplinary endeavor, the newest of MIT President Sally Kornbluth’s strategic initiatives, will bring together MIT researchers and domain experts from a range of industries to identify and tackle practical challenges wherever quantum solutions could achieve the greatest impact.

“We’ve already seen how the breadth of progress in quantum has created opportunities to rethink the future of security and encryption, imagine new modes of navigation, and even measure gravitational waves more precisely to observe the cosmos in an entirely new way,” says Freedman, the Frederick George Keyes Professor of Chemistry. “What can we do next? We’re investing in the promise of quantum, and where the legacy will be in 20 years.”

QMIT — the name is a nod to the “qubit,” the basic unit of quantum information — will formally launch on Dec. 8 with an all-day event on campus. Over time, the initiative plans to establish a physical home in the heart of campus for academic, public, and corporate engagement with state-of-the-art integrated quantum systems. Beyond MIT’s campus, QMIT will also work closely with the U.S. government and MIT Lincoln Laboratory, applying the lab’s capabilities in quantum hardware development, systems engineering, and rapid prototyping to national security priorities.

“The MIT Quantum Initiative seizes a timely opportunity in service to the nation’s scientific, economic, and technological competitiveness,” says Ian A. Waitz, MIT’s vice president for research. “With quantum capabilities approaching an inflection point, QMIT will engage students and researchers across all our schools and the college, as well as companies around the world, in thinking about what a step change in sensing and computational power will mean for a wide range of fields. Incredible opportunities exist in health and life sciences, fundamental physics research, cybersecurity, materials science, sensing the world around us, and more.”

Identifying the right questions

Quantum phenomena are as foundational to our world as light or gravity. At an extremely small scale, the interactions of atoms and subatomic particles are controlled by a different set of rules than the physical laws of the macro-sized world. These rules are called quantum mechanics.

“Quantum, in a sense, is what underlies everything,” says Freedman.

By leveraging quantum properties, quantum devices can process information at incredible speed to solve complex problems that aren’t feasible on classical supercomputers, and to enable ultraprecise sensing and measurement. Those improvements in speed and precision will become most powerful when optimized in relation to specific use cases, and as part of a complete quantum system. QMIT will focus on collaboration across domains to co-develop quantum tools, such as computers, sensors, networks, simulations, and algorithms, alongside the intended users of these systems.

As it develops, QMIT will be organized into programmatic pillars led by top researchers in quantum including Paola Cappellaro, Ford Professor of Engineering and professor of nuclear science and engineering and of physics; Isaac Chuang, Julius A. Stratton Professor in Electrical Engineering and Physics; Pablo Jarillo-Herrero, Cecil and Ida Green Professor of Physics; William Oliver, Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics; Vladan Vuletić, Lester Wolfe Professor of Physics; and Jonilyn Yoder, associate leader of the Quantum-Enabled Computation Group at MIT Lincoln Laboratory.

While supporting the core of quantum research in physics, engineering, mathematics, and computer science, QMIT promises to expand the community at its frontiers, into astronomy, biology, chemistry, materials science, and medicine.

“If you provide a foundation that somebody can integrate with, that accelerates progress a lot,” says Freedman. “Perhaps we want to figure out how a quantum simulator we’ve built can model photosynthesis, if that’s the right question — or maybe the right question is to study 10 failed catalysts to see why they failed.”

“We are going to figure out what real problems exist that we could approach with quantum tools, and work toward them in the next five years,” she adds. “We are going to change the forward momentum of quantum in a way that supports impact.”

The MIT Quantum Initiative will be administratively housed in the Research Laboratory of Electronics (RLE), with support from the Office of the Vice President for Research (VPR) and the Office of Innovation and Strategy.

QMIT is a natural expansion of MIT’s Center for Quantum Engineering (CQE), a research powerhouse that engages more than 80 principal investigators across the MIT campus and Lincoln Laboratory to accelerate the practical application of quantum technologies.

“CQE has cultivated a tremendously strong ecosystem of students and researchers, engaging with U.S. government sponsors and industry collaborators, including through the popular Quantum Annual Research Conference (QuARC) and professional development classes,” says Marc Baldo, the Dugald C. Jackson Professor in Electrical Engineering and director of RLE.

“With the backing of former vice president for research Maria Zuber, former Lincoln Lab director Eric Evans, and Marc Baldo, we launched CQE and its industry membership group in 2019 to help bridge MIT’s research efforts in quantum science and engineering,” says Oliver, CQE’s director, who also spent 20 years at Lincoln Laboratory, most recently as a Laboratory Fellow. “We have an important opportunity now to deepen our commitment to quantum research and education, and especially in engaging students from across the Institute in thinking about how to leverage quantum science and engineering to solve hard problems.”

Two years ago, Peter Fisher, the Thomas A. Frank (1977) Professor of Physics, in his role as associate vice president for research computing and data, assembled a faculty group led by Cappellaro and involving Baldo, Oliver, Freedman, and others, to begin to build an initiative that would span the entire Institute. Now, capitalizing on CQE’s success, Oliver will lead the new MIT Quantum Initiative’s quantum computing pillar, which will broaden the work of CQE into a larger effort that focuses on quantum computing, industry engagement, and connecting with end users.

The “MIT-hard” problem

QMIT will build upon the Institute’s historic leadership in quantum science and engineering. In the spring of 1981, MIT hosted the first Physics of Computation Conference at the Endicott House, bringing together nearly 50 physics and computing researchers to consider the practical promise of quantum — an intellectual moment that is now widely regarded as the kickoff of the second quantum revolution. (The first was the fundamental articulation of quantum mechanics 100 years ago.)

Today, research in quantum science and engineering produces a steady stream of “firsts” in the lab and a growing number of startup companies.

In collaboration with partners in industry and government, MIT researchers develop advances in areas like quantum sensing, which involves the use of atomic-scale systems to measure certain properties, like distance and acceleration, with extreme precision. Quantum sensing could be used in applications like brain imaging devices that capture more detail, or air traffic control systems with greater positional accuracy.

Another key area of research is quantum simulation, which uses the power of quantum computers to accurately emulate complex systems. This could fuel the discovery of new materials for energy-efficient electronics or streamline the identification of promising molecules for drug development.

“Historically, when we think about the most well-articulated challenges that quantum will solve,” Freedman says, “the best ones have come from inside of MIT. We’re open to technological solutions to problems, and nontraditional approaches to science. In many respects, we are the early adopters.”

But she also draws a sharp distinction between blue-sky thinking about what quantum might do, and the deeply technical, deeply collaborative work of actually drawing the roadmap. “That’s the ‘MIT-hard’ problem,” she says.

The QMIT launch event on Dec. 8 will feature talks and discussions with MIT faculty, including Nobel laureates, as well as industry leaders.


MIT physicists observe key evidence of unconventional superconductivity in magic-angle graphene

The findings could open a route to new forms of higher-temperature superconductors.


Superconductors are like the express trains in a metro system. Any electricity that “boards” a superconducting material can zip through it without stopping or losing energy along the way. As such, superconductors are extremely energy efficient, and are used today to power a variety of applications, from MRI machines to particle accelerators.

But these “conventional” superconductors are somewhat limited in terms of uses because they must be brought down to ultra-low temperatures using elaborate cooling systems to keep them in their superconducting state. If superconductors could work at higher, room-like temperatures, they would enable a new world of technologies, from zero-energy-loss power cables and electricity grids to practical quantum computing systems. And so scientists at MIT and elsewhere are studying “unconventional” superconductors — materials that exhibit superconductivity in ways that are different from, and potentially more promising than, today’s superconductors.

In a promising breakthrough, MIT physicists have today reported their observation of new key evidence of unconventional superconductivity in “magic-angle” twisted tri-layer graphene (MATTG) — a material that is made by stacking three atomically thin sheets of graphene at a specific angle, or twist, that allows exotic properties to emerge.

MATTG has shown indirect hints of unconventional superconductivity and other strange electronic behavior in the past. The new discovery, reported in the journal Science, offers the most direct confirmation yet that the material exhibits unconventional superconductivity.

In particular, the team was able to measure MATTG’s superconducting gap — a property that describes how resilient a material’s superconducting state is at given temperatures. They found that MATTG’s superconducting gap looks very different from that of the typical superconductor, meaning that the mechanism by which the material becomes superconductive must also be different, and unconventional.

“There are many different mechanisms that can lead to superconductivity in materials,” says study co-lead author Shuwen Sun, a graduate student in MIT’s Department of Physics. “The superconducting gap gives us a clue to what kind of mechanism can lead to things like room-temperature superconductors that will eventually benefit human society.”

The researchers made their discovery using a new experimental platform that allows them to essentially “watch” the superconducting gap in real time as superconductivity emerges in two-dimensional materials. They plan to apply the platform to further probe MATTG, and to map the superconducting gap in other 2D materials — an effort that could reveal promising candidates for future technologies.

“Understanding one unconventional superconductor very well may trigger our understanding of the rest,” says Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT and a member of the Research Laboratory of Electronics. “This understanding may guide the design of superconductors that work at room temperature, for example, which is sort of the Holy Grail of the entire field.”

The study’s other co-lead author is Jeong Min Park PhD ’24; Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan are also co-authors.

The ties that bind

Graphene is a material that comprises a single layer of carbon atoms that are linked in a hexagonal pattern resembling chicken wire. A sheet of graphene can be isolated by carefully exfoliating an atom-thin flake from a block of graphite (the same stuff as pencil lead). In the 2010s, theorists predicted that if two graphene layers were stacked at a very special angle, the resulting structure should be capable of exotic electronic behavior.

In 2018, Jarillo-Herrero and his colleagues became the first to produce magic-angle graphene in experiments, and to observe some of its extraordinary properties. That discovery sprouted an entire new field known as “twistronics” — the study of atomically thin, precisely twisted materials. Jarillo-Herrero’s group has since studied other configurations of magic-angle graphene with two, three, and more layers, as well as stacked and twisted structures of other two-dimensional materials. Their work, along with that of other groups, has revealed some signatures of unconventional superconductivity in some structures.

Superconductivity is a state that a material can exhibit under certain conditions (usually at very low temperatures). When a material is a superconductor, any electrons that pass through can pair up, rather than repelling and scattering away. When they couple up in what are known as “Cooper pairs,” the electrons can glide through a material without friction, instead of knocking against each other and flying away as lost energy. This pairing up of electrons is what enables superconductivity, though the way in which they are bound can vary.

“In conventional superconductors, the electrons in these pairs are very far away from each other, and weakly bound,” says Park. “But in magic-angle graphene, we could already see signatures that these pairs are very tightly bound, almost like a molecule. There were hints that there is something very different about this material.”

Tunneling through

In their new study, Jarillo-Herrero and his colleagues aimed to directly observe and confirm unconventional superconductivity in a magic-angle graphene structure. To do so, they would have to measure the material’s superconducting gap.

“When a material becomes superconducting, electrons move together as pairs rather than individually, and there’s an energy ‘gap’ that reflects how they’re bound,” Park explains. “The shape and symmetry of that gap tells us the underlying nature of the superconductivity.”

Scientists have measured the superconducting gap in materials using specialized techniques, such as tunneling spectroscopy. The technique takes advantage of a quantum mechanical property known as “tunneling.” At the quantum scale, an electron behaves not just as a particle, but also as a wave; as such, its wave-like properties enable an electron to travel, or “tunnel,” through a material, as if it could move through walls.

Such tunneling spectroscopy measurements can give an idea of how easy it is for an electron to tunnel into a material, and in some sense, how tightly packed and bound the electrons in the material are. When performed in a superconducting state, it can reflect the properties of the superconducting gap. However, tunneling spectroscopy alone cannot always tell whether the material is, in fact, in a superconducting state. Directly linking a tunneling signal to a genuine superconducting gap is both essential and experimentally challenging.

In their new work, Park and her colleagues developed an experimental platform that combines electron tunneling with electrical transport — a technique that is used to gauge a material’s superconductivity, by sending current through and continuously measuring its electrical resistance (zero resistance signals that a material is in a superconducting state).

The team applied the new platform to measure the superconducting gap in MATTG. By combining tunneling and transport measurements in the same device, they could unambiguously identify the superconducting tunneling gap, one that appeared only when the material exhibited zero electrical resistance, which is the hallmark of superconductivity. They then tracked how this gap evolved under varying temperature and magnetic fields. Remarkably, the gap displayed a distinct V-shaped profile, which was clearly different from the flat and uniform shape of conventional superconductors.

This V shape reflects a certain unconventional mechanism by which electrons in MATTG pair up to superconduct. Exactly what that mechanism is remains unknown. But the fact that the shape of the superconducting gap in MATTG stands out from that of the typical superconductor provides key evidence that the material is an unconventional superconductor.
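The contrast between the two gap shapes can be made concrete with a toy density-of-states calculation — a simplified numerical sketch, not the team’s analysis. A uniform (s-wave) gap blocks all electronic states below the gap energy, giving a flat, fully gapped profile, while a nodal gap of the form Δ(θ) = Δ₀·cos(2θ) — one hypothetical unconventional pairing symmetry — vanishes at four “nodes,” leaving low-energy states that produce a V-shaped rise:

```python
import math

def dos_s_wave(E, gap):
    """BCS density of states for an isotropic (s-wave) gap:
    N(E)/N0 = E / sqrt(E^2 - gap^2) for E > gap, and 0 inside the gap.
    This gives the flat, fully gapped profile of a conventional superconductor."""
    E = abs(E)
    return E / math.sqrt(E**2 - gap**2) if E > gap else 0.0

def dos_nodal(E, gap0, n_angles=4001):
    """Density of states for a nodal gap Delta(theta) = gap0 * cos(2*theta),
    averaged over the Fermi surface. Because the gap vanishes at four nodes,
    low-energy states survive and N(E) rises roughly linearly: a V shape."""
    E = abs(E)
    total = 0.0
    for k in range(n_angles):
        theta = 2 * math.pi * k / n_angles
        local_gap = abs(gap0 * math.cos(2 * theta))
        if E > local_gap:
            total += E / math.sqrt(E**2 - local_gap**2)
    return total / n_angles
```

At half the gap energy, `dos_s_wave(0.5, 1.0)` returns exactly zero, while `dos_nodal(0.5, 1.0)` is nonzero and grows with energy — the qualitative distinction between a flat conventional gap and a V-shaped nodal one.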

In conventional superconductors, electrons pair up through vibrations of the surrounding atomic lattice, which effectively jostle the particles together. But Park suspects that a different mechanism could be at work in MATTG.

“In this magic-angle graphene system, there are theories explaining that the pairing likely arises from strong electronic interactions rather than lattice vibrations,” she posits. “That means electrons themselves help each other pair up, forming a superconducting state with special symmetry.”

Going forward, the team will test other two-dimensional twisted structures and materials using the new experimental platform.

“This allows us to both identify and study the underlying electronic structures of superconductivity and other quantum phases as they happen, within the same sample,” Park says. “This direct view can reveal how electrons pair and compete with other states, paving the way to design and control new superconductors and quantum materials that could one day power more efficient technologies or quantum computers.”

This research was supported, in part, by the U.S. Army Research Office, the U.S. Air Force Office of Scientific Research, the MIT/MTL Samsung Semiconductor Research Fund, the Sagol WIS-MIT Bridge Program, the National Science Foundation, the Gordon and Betty Moore Foundation, and the Ramon Areces Foundation.


MIT researchers invent new human brain model to enable disease research, drug discovery

Cultured from induced pluripotent stem cells, “miBrains” integrate all major brain cell types and model brain structures, cellular interactions, activity, and pathological features.


A new 3D human brain tissue platform developed by MIT researchers is the first to integrate all major brain cell types, including neurons, glial cells, and the vasculature, into a single culture. 

Grown from individual donors’ induced pluripotent stem cells, these models — dubbed Multicellular Integrated Brains (miBrains) — replicate key features and functions of human brain tissue, are readily customizable through gene editing, and can be produced in quantities that support large-scale research.

Although each unit is smaller than a dime, miBrains may be worth a great deal to researchers and drug developers who need more complex living lab models to better understand brain biology and treat diseases.

“The miBrain is the only in vitro system that contains all six major cell types that are present in the human brain,” says Li-Huei Tsai, Picower Professor, director of The Picower Institute for Learning and Memory, and a senior author of the open-access study describing miBrains, published Oct. 17 in the Proceedings of the National Academy of Sciences.

“In their first application, miBrains enabled us to discover how one of the most common genetic markers for Alzheimer’s disease alters cells’ interactions to produce pathology,” she adds.

Tsai’s co-senior authors are Robert Langer, David H. Koch (1962) Institute Professor, and Joel Blanchard, associate professor in the Icahn School of Medicine at Mount Sinai in New York, and a former Tsai Laboratory postdoc. The study is led by Alice Stanton, former postdoc in the Langer and Tsai labs and now assistant professor at Harvard Medical School and Massachusetts General Hospital, and Adele Bubnys, a former Tsai lab postdoc and current senior scientist at Arbor Biotechnologies.

Benefits from two kinds of models

The more closely a model recapitulates the brain’s complexity, the better suited it is for extrapolating how human biology works and how potential therapies may affect patients. In the brain, neurons interact with each other and with various helper cells, all of which are arranged in a three-dimensional tissue environment that includes blood vessels and other components. All of these interactions are necessary for health, and any of them can contribute to disease.

Simple cultures of just one or a few cell types can be created in quantity relatively easily and quickly, but they cannot tell researchers about the myriad interactions that are essential to understanding health or disease. Animal models embody the brain’s complexity, but can be difficult and expensive to maintain, slow to yield results, and different enough from humans to yield occasionally divergent results.

MiBrains combine advantages from each type of model, retaining much of the accessibility and speed of lab-cultured cell lines while allowing researchers to obtain results that more closely reflect the complex biology of human brain tissue. Moreover, they are derived from individual patients, making them personalized to an individual’s genome. In the model, the six cell types self-assemble into functioning units, including blood vessels, immune defenses, and nerve signal conduction, among other features. Researchers ensured that miBrains also possess a blood-brain barrier capable of gatekeeping which substances may enter the brain, including most traditional drugs.

“The miBrain is very exciting as a scientific achievement,” says Langer. “Recent trends toward minimizing the use of animal models in drug development could make systems like this one increasingly important tools for discovering and developing new human drug targets.”

Two ideal blends for functional brain models

Designing a model integrating so many cell types presented challenges that required many years to overcome. Among the most crucial was identifying a substrate able to provide physical structure for cells and support their viability. The research team drew inspiration from the environment that surrounds cells in natural tissue, the extracellular matrix (ECM). The miBrain’s hydrogel-based “neuromatrix” mimics the brain’s ECM with a custom blend of polysaccharides, proteoglycans, and basement membrane that provide a scaffold for all the brain’s major cell types while promoting the development of functional neurons.

A second blend would also prove critical: the proportion of cell types that would yield functional neurovascular units. The actual ratios of cell types in the brain have been a matter of debate for decades, with even the more advanced methodologies providing only rough brushstrokes for guidance — for example, 45-75 percent of all cells for oligodendroglia, or 19-40 percent for astrocytes.

The researchers developed the six cell types from patient-donated induced pluripotent stem cells, verifying that each cultured cell type closely recreated naturally occurring brain cells. Then, the team experimentally iterated until they hit on a balance of cell types that resulted in functional, properly structured neurovascular units. This laborious process would turn out to be an advantageous feature of miBrains: because cell types are cultured separately, they can each be genetically edited so that the resulting model is tailored to replicate specific health and disease states.

“Its highly modular design sets the miBrain apart, offering precise control over cellular inputs, genetic backgrounds, and sensors — useful features for applications such as disease modeling and drug testing,” says Stanton.

Alzheimer’s discovery using miBrain

To test the miBrain’s capabilities, the researchers embarked on a study of the gene variant APOE4, which is the strongest genetic predictor for the development of Alzheimer’s disease. Although astrocytes, one type of brain cell, are known to be a primary producer of the APOE protein, the role that astrocytes carrying the APOE4 variant play in disease pathology is poorly understood.

MiBrains were well-suited to the task for two reasons. First of all, they integrate astrocytes with the brain’s other cell types, so that their natural interactions with other cells can be mimicked. Second, because the platform allowed the team to integrate cell types individually, APOE4 astrocytes could be studied in cultures where all other cell types carried APOE3, a gene variant that does not increase Alzheimer’s risk. This enabled the researchers to isolate the contribution APOE4 astrocytes make to pathology.

In one experiment, the researchers examined APOE4 astrocytes cultured alone, versus ones in APOE4 miBrains. They found that only in the miBrains did the astrocytes express many measures of immune reactivity associated with Alzheimer’s disease, suggesting the multicellular environment contributes to that state.

The researchers also tracked the Alzheimer’s-associated proteins amyloid and phosphorylated tau, and found all-APOE4 miBrains accumulated them, whereas all-APOE3 miBrains did not, as expected. However, APOE3 miBrains containing APOE4 astrocytes still exhibited amyloid and tau accumulation.

Then the team dug deeper into how APOE4 astrocytes’ interactions with other cell types might lead to their contribution to disease pathology. Prior studies have implicated molecular cross-talk with the brain’s microglia immune cells. Notably, when the researchers cultured APOE4 miBrains without microglia, their production of phosphorylated tau was significantly reduced. When the researchers dosed APOE4 miBrains with culture media from astrocytes and microglia combined, phosphorylated tau increased, whereas when they dosed them with media from cultures of astrocytes or microglia alone, the tau production did not increase. The results therefore provided new evidence that molecular cross-talk between microglia and astrocytes is indeed required for phosphorylated tau pathology.

In the future, the research team plans to add new features to miBrains to more closely model characteristics of working brains, such as leveraging microfluidics to add flow through blood vessels, or single-cell RNA sequencing methods to improve profiling of neurons.

Researchers expect that miBrains could advance research discoveries and treatment modalities for Alzheimer’s disease and beyond. 

“Given its sophistication and modularity, there are limitless future directions,” says Stanton. “Among them, we would like to harness it to gain new insights into disease targets, advanced readouts of therapeutic efficacy, and optimization of drug delivery vehicles.”

“I’m most excited by the possibility to create individualized miBrains for different individuals,” adds Tsai. “This promises to pave the way for developing personalized medicine.”

Funding for the study came from the BT Charitable Foundation, Freedom Together Foundation, the Robert A. and Renee E. Belfer Family, Lester A. Gimpelson, Eduardo Eurnekian, Kathleen and Miguel Octavio, David B. Emmes, the Halis Family, the Picower Institute, and an anonymous donor.


A new way to understand and predict gene splicing

The KATMAP model, developed by researchers in the Department of Biology, can predict alternative cell splicing, which allows cells to create endless diversity from the same sets of genetic blueprints.


Although heart cells and skin cells contain identical instructions for creating proteins encoded in their DNA, they’re able to fill such disparate niches because molecular machinery can cut out and stitch together different segments of those instructions to create endlessly unique combinations.

The ingenuity of using the same genes in different ways is made possible by a process called splicing and is controlled by splicing factors; which splicing factors a cell employs determines what sets of instructions that cell produces, which, in turn, gives rise to proteins that allow cells to fulfill different functions. 

In an open-access paper published today in Nature Biotechnology, researchers in the MIT Department of Biology outlined a framework for parsing the complex relationship between sequences and splicing regulation to investigate the regulatory activities of splicing factors, creating models that can be applied to interpret and predict splicing regulation across different cell types, and even different species. Called Knockdown Activity and Target Models from Additive regression Predictions, KATMAP draws on experimental data from disrupting the expression of a splicing factor and information on which sequences the splicing factor interacts with to predict its likely targets. 

Beyond contributing to a better understanding of gene regulation, the model has medical relevance: splicing mutations — either in the gene that is spliced or in the splicing factor itself — can give rise to diseases such as cancer by altering how genes are expressed, leading to the creation or accumulation of faulty or mutated proteins. Understanding splicing regulation is therefore critical for developing therapeutic treatments for those diseases. The researchers also demonstrated that KATMAP can potentially be used to predict whether synthetic nucleic acids, a promising treatment option for disorders including a subset of muscular atrophy and epilepsy disorders, affect splicing.

Perturbing splicing 

In eukaryotic cells, including our own, splicing occurs after DNA is transcribed to produce an RNA copy of a gene, which contains both coding and non-coding regions of RNA. The noncoding intron regions are removed, and the coding exon segments are spliced back together to make a near-final blueprint, which can then be translated into a protein. 
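The cut-and-stitch operation described above can be illustrated with a toy sketch. The sequence and intron coordinates here are made up for illustration; real splicing is carried out by the spliceosome, guided by sequence signals far more complex than fixed coordinates.

```python
# Toy illustration of splicing: intron regions are removed from the
# pre-mRNA and the remaining exon segments are joined into the mature
# transcript. Coordinates are hypothetical, not from any real gene.

def splice(pre_mrna, introns):
    """Remove intron regions (half-open (start, end) pairs) and
    join the remaining exon segments."""
    exons = []
    pos = 0
    for start, end in sorted(introns):
        exons.append(pre_mrna[pos:start])  # exon before this intron
        pos = end                          # skip over the intron
    exons.append(pre_mrna[pos:])           # final exon
    return "".join(exons)

pre_mrna = "AUGGCU" + "GUAAGU" + "CCAUAA"   # exon 1 + intron + exon 2
print(splice(pre_mrna, [(6, 12)]))          # -> AUGGCUCCAUAA
```

Alternative splicing amounts to choosing different intron sets for the same pre-mRNA, which is how one gene yields multiple protein blueprints.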

According to first author Michael P. McGurk, a postdoc in the lab of MIT Professor Christopher Burge, previous approaches could provide an average picture of regulation, but could not necessarily predict the regulation of splicing factors at particular exons in particular genes.

KATMAP draws on RNA sequencing data generated from perturbation experiments, which alter the expression level of a regulatory factor by either overexpressing it or knocking down its levels. The consequences of overexpression or knockdown are that the genes regulated by the splicing factor should exhibit different levels of splicing after perturbation, which helps the model identify the splicing factor’s targets. 
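One standard way to quantify the splicing change after a perturbation is "percent spliced in" (PSI), the fraction of transcripts that include a given exon. The sketch below is illustrative only, with made-up read counts; it is not KATMAP's code, just the kind of summary statistic such perturbation analyses start from.

```python
# Illustrative sketch: PSI measures how often an exon is included, and
# the change in PSI after knocking down a splicing factor flags exons
# that factor may regulate. Read counts below are hypothetical.

def psi(inclusion_reads, exclusion_reads):
    """Fraction of junction reads supporting inclusion of the exon."""
    return inclusion_reads / (inclusion_reads + exclusion_reads)

# hypothetical junction read counts for one exon
control_psi = psi(inclusion_reads=90, exclusion_reads=10)    # 0.9
knockdown_psi = psi(inclusion_reads=40, exclusion_reads=60)  # 0.4

delta_psi = knockdown_psi - control_psi
print(f"delta PSI = {delta_psi:+.2f}")  # -0.50: inclusion drops after knockdown
```

A large drop in PSI after knockdown suggests the factor promotes inclusion of that exon; a rise suggests it represses inclusion.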

Cells, however, are complex, interconnected systems, where one small change can cause a cascade of effects. KATMAP is also able to distinguish direct targets from indirect, downstream impacts by incorporating known information about the sequence the splicing factor is likely to interact with, referred to as a binding site or binding motif.

“In our analyses, we identify predicted targets as exons that have binding sites for this particular factor in the regions where this model thinks they need to be to impact regulation,” McGurk says, while non-targets may be affected by perturbation but don’t have the likely appropriate binding sites nearby. 
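The distinction McGurk describes can be sketched as a simple decision rule. Everything here is a hypothetical simplification: the motif, search window, and threshold are invented for illustration, and KATMAP itself learns these quantities as probabilistic model parameters rather than applying hard cutoffs.

```python
# Hypothetical sketch of the idea described above: an exon counts as a
# predicted direct target only if its splicing changed after the
# knockdown AND the factor's binding motif lies near the splice site.
# Motif, window, and threshold are illustrative, not KATMAP's values.

MOTIF = "UGCAUG"      # example motif bound by some splicing factors
WINDOW = 50           # nucleotides of flanking sequence to search
THRESHOLD = 0.1       # minimum change in exon inclusion (delta PSI)

def classify_exon(delta_psi, flanking_seq):
    changed = abs(delta_psi) >= THRESHOLD
    has_site = MOTIF in flanking_seq[:WINDOW]
    if changed and has_site:
        return "predicted direct target"
    if changed:
        return "likely indirect effect"  # responds, but no nearby site
    return "non-target"

print(classify_exon(0.3, "AAUGCAUGCC"))  # -> predicted direct target
```

In this toy version, an exon whose inclusion shifts but that lacks a nearby binding site is treated as a downstream, indirect effect rather than a direct target.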

This is especially helpful for splicing factors that aren’t as well-studied. 

“One of our goals with KATMAP was to try to make the model general enough that it can learn what it needs to assume for particular factors, like how similar the binding site has to be to the known motif or how regulatory activity changes with the distance of the binding sites from the splice sites,” McGurk says. 

Starting simple

Although predictive models can be very powerful at presenting possible hypotheses, many are considered “black boxes,” meaning the rationale that gives rise to their conclusions is unclear. KATMAP, on the other hand, is an interpretable model that enables researchers to quickly generate hypotheses and interpret splicing patterns in terms of regulatory factors while also understanding how the predictions were made. 

“I don’t just want to predict things, I want to explain and understand,” McGurk says. “We set up the model to learn from existing information about splicing and binding, which gives us biologically interpretable parameters.” 

The researchers did have to make some simplifying assumptions in order to develop the model. KATMAP considers only one splicing factor at a time, although it is possible for splicing factors to work in concert with one another. The RNA target sequence could also be folded in such a way that the factor wouldn’t be able to access a predicted binding site, so the site is present but not utilized.

“When you try to build up complete pictures of complex phenomena, it’s usually best to start simple,” McGurk says. “A model that only considers one splicing factor at a time is a good starting point.” 

David McWaters, another postdoc in the Burge Lab and a co-author on the paper, conducted key experiments to test and validate that aspect of the KATMAP model.

Future directions

The Burge lab is collaborating with researchers at Dana-Farber Cancer Institute to apply KATMAP to the question of how splicing factors are altered in disease contexts, as well as with other researchers at MIT as part of an MIT HEALS grant to model splicing factor changes in stress responses. McGurk also hopes to extend the model to incorporate cooperative regulation for splicing factors that work together. 

“We’re still in a very exploratory phase, but I would like to be able to apply these models to try to understand splicing regulation in disease or development. In terms of variation of splicing factors, they are related, and we need to understand both,” McGurk says.

Burge, the Uncas (1923) and Helen Whitaker Professor and senior author of the paper, will continue to work on generalizing this approach to build interpretable models for other aspects of gene regulation.

“We now have a tool that can learn the pattern of activity of a splicing factor from types of data that can be readily generated for any factor of interest,” says Burge, who is also an extra-mural member of the Koch Institute for Integrative Cancer Research and an associate member of the Broad Institute of MIT and Harvard. “As we build up more of these models, we’ll be better able to infer which splicing factors have altered activity in a disease state from transcriptomic data, to help understand which splicing factors are driving pathology.”


Startup provides a nontechnical gateway to coding on quantum computers

Co-founded by Kanav Setia and Jason Necaise ’20, qBraid lets users access the most popular quantum devices and software programs on an intuitive, cloud-based platform.


Quantum computers have the potential to model new molecules and weather patterns better than any computer today. They may also one day accelerate artificial intelligence algorithms at a much lower energy footprint. But anyone interested in using quantum computers faces a steep learning curve that starts with getting access to quantum devices and then figuring out one of the many quantum software programs on the market.

Now qBraid, founded by Kanav Setia and Jason Necaise ’20, is providing a gateway to quantum computing with a platform that gives users access to the leading quantum devices and software. Users can log on to qBraid’s cloud-based interface and connect with quantum devices and other computing resources from leading companies like Nvidia, Microsoft, and IBM. In a few clicks, they can start coding or deploy cutting-edge software that works across devices.

“The mission is to take you from not knowing anything about quantum computing to running your first program on these amazing machines in less than 10 minutes,” Setia says. “We’re a one-stop platform that gives access to everything the quantum ecosystem has to offer. Our goal is to enable anyone — whether they’re enterprise customers, academics, or individual users — to build and ultimately deploy applications.”

Since its founding in June of 2020, qBraid has helped more than 20,000 people in more than 120 countries deploy code on quantum devices. That traction is ultimately helping to drive innovation in a nascent industry that’s expected to play a key role in our future.

“This lowers the barrier to entry for a lot of newcomers,” Setia says. “They can be up and running in a few minutes instead of a few weeks. That’s why we’ve gotten so much adoption around the world. We’re one of the most popular platforms for accessing quantum software and hardware.”

A quantum “software sandbox”

Setia met Necaise while the two interned at IBM. At the time, Necaise was an undergraduate at MIT majoring in physics, while Setia was at Dartmouth College. The two enjoyed working together, and Necaise said if Setia ever started a company, he’d be interested in joining.

A few months later, Setia decided to take him up on the offer. At Dartmouth, Setia had taken one of the first applied quantum computing classes, but students spent weeks struggling to install all the necessary software programs before they could even start coding.

“We hadn’t even gotten close to developing any useful algorithms,” Setia says. “The idea for qBraid was, ‘Why don’t we build a software sandbox in the cloud and give people an easy programming setup out of the box?’ Connection with the hardware would already be done.”

The founders received early support from the MIT Sandbox Innovation Fund and took part in the delta v summer startup accelerator run by the Martin Trust Center for MIT Entrepreneurship.

“Both programs provided us with very strong mentorship,” Setia says. “They give you frameworks on what a startup should look like, and they bring in some of the smartest people in the world to mentor you — people you’d never have access to otherwise.”

Necaise left the company in 2021. Setia, meanwhile, continued to find problems with quantum software outside of the classroom.

“This is a massive bottleneck,” Setia says. “I’d worked on several quantum software programs that pushed out updates or changes, and suddenly all hell broke loose on my codebase. I’d spend two to four weeks jostling with these updates that had almost nothing to do with the quantum algorithms I was working on.”

QBraid started as a platform with pre-installed software that let developers start writing code immediately. The company also added support for version-controlled quantum software so developers could build applications on top without worrying about changes. Over time, qBraid added connections to quantum computers and tools that let quantum programs run across different devices.

“The pitch was you don’t need to manage a bunch of software or a whole bunch of cloud accounts,” Setia says. “We’re a single platform: the quantum cloud.”

QBraid also launched qBook, a learning platform that offers interactive courses in quantum computing.

“If you see a piece of code you like, you just click play and the code runs,” Setia says. “You can run a whole bunch of code, modify it on the fly, and you can understand how it works. It runs on laptops, iPads, and phones. A significant portion of our users are from developing countries, and they’re developing applications from their phones.”

Democratizing quantum computing

Today qBraid’s 20,000 users come from over 400 universities and 100 companies around the world. As its user base has grown, the company has moved from integrating outside companies’ quantum computers onto its platform to building a quantum operating system, qBraid-OS, that is currently used by four leading quantum companies.

“We are productizing these quantum computers,” Setia explains. “Many quantum companies are realizing they want to focus their energy completely on the hardware, with us productizing their infrastructure. We’re like the operating system for quantum computers.”

People are using qBraid to build quantum applications in AI and machine learning, to discover new molecules or develop new drugs, and to develop applications in finance and cybersecurity. With every new use case, Setia says qBraid is democratizing quantum computing to create the quantum workforce that will continue to advance the field.

“[In 2018], an article in The New York Times said there were possibly less than 1,000 people in the world that could be called experts in quantum programming,” Setia says. “A lot of people want to access these cutting-edge machines, but they don’t have the right software backgrounds. They are just getting started and want to play with algorithms. QBraid gives those people an easy programming setup out of the box.”


Q&A: How MITHIC is fostering a culture of collaboration at MIT

A presidential initiative, the MIT Human Insight Collaborative is supporting new interdisciplinary initiatives and projects across the Institute.


The MIT Human Insight Collaborative (MITHIC) is a presidential initiative with a mission of elevating human-centered research and teaching and connecting scholars in the humanities, arts, and social sciences with colleagues across the Institute.

Since its launch in 2024, MITHIC has funded 31 projects led by teaching and research staff representing 22 different units across MIT. The collaborative is holding its annual event on Nov. 17. 

In this Q&A, Keeril Makan, associate dean in the MIT School of Humanities, Arts, and Social Sciences, and Maria Yang, interim dean of the MIT School of Engineering, discuss the value of MITHIC and the ways it’s accelerating new research and collaborations across the Institute. Makan is the Michael (1949) and Sonja Koerner Music Composition Professor and faculty lead for MITHIC. Yang is the William E. Leonhard (1940) Professor in the Department of Mechanical Engineering and co-chair of MITHIC’s SHASS+ Connectivity Fund.

Q: You each come from different areas of MIT. Looking at MITHIC from your respective roles, why is this initiative so important for the Institute?

Makan: The world is counting on MIT to develop solutions to some of the world’s greatest challenges, such as artificial intelligence, poverty, and health care. These are all issues that arise from human activity, a thread that runs through much of the research we’re focused on in SHASS. Through MITHIC, we’re embedding human-centered thinking and connecting the Institute’s top scholars in the work needed to find innovative ways of addressing these problems.

Yang: MITHIC is very important to MIT, and I think of this from the point of view as an engineer, which is my background. Engineers often think about the technology first, which is absolutely important. But for that technology to have real impact, you have to think about the human insights that make that technology relevant and can be deployed in the world. So really having a deep understanding of that is core to MITHIC and MIT’s engineering enterprise.

Q: How does MITHIC fit into MIT’s broader mission?

Makan: MITHIC highlights how the work we do in the School of Humanities, Arts, and Social Sciences is aligned with MIT’s mission, which is to address the world’s great problems. But MITHIC has also connected all of MIT in this endeavor. We have faculty from all five schools and the MIT Schwarzman College of Computing involved in evaluating MITHIC project proposals. Each of them represents a different point of view and engages with these projects that originate in SHASS, but actually cut across many different fields. Seeing their perspectives on these projects has been inspiring.

Yang: I think of MIT’s main mission as using technology and many other things to make impact in the world, especially social impact. The kind of interdisciplinary work that MITHIC catalyzes really enables all of that work to happen in a new and profound way. The SHASS+ Connectivity Fund, which connects SHASS faculty and researchers with colleagues outside of SHASS, has resulted in collaborations that were not possible before. One example is a project being led by professors Mark Rau, who has a shared appointment between Music and Electrical Engineering and Computer Science, and Antoine Allanore in Materials Science and Engineering. The two of them are looking at how they can take ancient unplayable instruments and recreate them using new technologies for scanning and fabrication. They’re also working with the Museum of Fine Arts, so it’s a whole new type of collaboration that exemplifies MITHIC.

Q: What has been the community response to MITHIC in its first year?

Makan: It’s been very strong. We found a lot of pent-up demand, both from faculty in SHASS and faculty in the sciences and engineering. Either there were preexisting collaborations that they could take to the next level through MITHIC, or there was the opportunity to meet someone new and talk to someone about a problem and how they could collaborate. MITHIC also hosted a series of Meeting of the Minds events, which are a chance to have faculty and members of the community get to know one another on a certain topic. This community building has been exciting, and led to an overwhelming number of applications last year. There has also been significant student involvement, with several projects bringing on UROPs [Undergraduate Research Opportunities Program projects] and PhD students to help with their research. MITHIC gives a real morale boost and a lot of hope that there is a focus upon building collaborations at MIT and on not forgetting that the world needs humanists, artists, and social scientists.

Yang: One faculty member told me the SHASS+ Connectivity Fund has given them hope for the kind of research that we do because of the cross collaboration. There’s a lot of excitement and enthusiasm for this type of work.

Q: The SHASS+ Connectivity Fund is designed to support interdisciplinary collaborations at MIT. What’s an example of a SHASS+ project that’s worked particularly well?

Makan: One exciting collaboration is between professors Jörn Dunkel in Mathematics and In Song Kim in Political Science. In Song is someone who has done a lot of work on studying lobbying and its effect upon the legislative process. He met Jörn, I believe, at one of MIT’s daycare centers, so it’s a relationship that started in a very informal fashion. But they found they actually had ways of looking at math and quantitative analysis that could complement one another. Their work is creating a new subfield and taking the research in a direction that would not be possible without this funding.

Yang: One of the SHASS+ projects that I think is really interesting is between professors Marzyeh Ghassemi in Electrical Engineering and Computer Science and Esther Duflo in Economics. The two of them are looking at how they can use AI to help health diagnostics in low-resource global settings, where there isn’t a lot of equipment or technology to do basic health diagnostics. They can use handheld, low-cost equipment to do things like predict if someone is going to have a heart attack. And they are not only developing the diagnostic tool, but evaluating the fairness of the algorithm. The project is an excellent example of using a MITHIC grant to make impact in the world.

Q: What has been MITHIC’s impact in terms of elevating research and teaching within SHASS?

Makan: In addition to the SHASS+ Connectivity Fund, there are two other possibilities to help support both SHASS research as well as educational initiatives: the Humanities Cultivation Fund and the SHASS Education Innovation Fund. And both of these are providing funding in excess of what we normally see within SHASS. It both recognizes the importance of the work of our faculty and it also gives them the means to actually take ideas to a much further place.

One of the projects that MITHIC is helping to support is the Compass Initiative. Compass was started by Lily Tsai, one of our professors in Political Science, along with other faculty in SHASS to create essentially an introductory class to the different methodologies within SHASS. So we have philosophers, music historians, etc., all teaching together, all addressing how we interact with one another, what it means to be a good citizen, what it means to be socially aware and civically engaged. This is a class that is very timely for MIT and for the world. And we were able to give it robust funding so they can take this and develop it even further.

MITHIC has also been able to take local initiatives in SHASS and elevate them. There has been a group of anthropologists, historians, and urban planners that have been working together on a project called the Living Climate Futures Lab. This is a group interested in working with frontline communities around climate change and sustainability. They work to build trust with local communities and start to work with them on thinking about how climate change affects them and what solutions might look like. This is a powerful and uniquely SHASS approach to climate change, and through MITHIC, we’re able to take this seed effort, robustly fund it, and help connect it to the larger climate project at MIT.

Q: What excites you most about the future of MITHIC at MIT?

Yang: We have a lot of MIT efforts that are trying to break people out of their disciplinary silos, and MITHIC really is a big push on that front. It’s a presidential initiative, so it’s high on the priority list of what people are thinking about. We’ve already done our first round, and the second round is going to be even more exciting, so it’s only going to gain in force. In SHASS+, we’re actually having two calls for proposals this academic year instead of just one. I feel like there’s still so much possibility to bring together interdisciplinary research across the Institute.

Makan: I’m excited about how MITHIC is changing the culture of MIT. MIT thinks of itself in terms of engineering, science, and technology, and this is an opportunity to think about those STEM fields within the context of human activity and humanistic thinking. Having this shift at MIT in how we approach solving problems bodes well for the world, and it places SHASS as this connective tissue at the Institute. It connects the schools and it can also connect the other initiatives, such as manufacturing and health and life sciences. There’s an opportunity for MITHIC to seed all these other initiatives with the work that goes on in SHASS.


Study: Identifying kids who need help learning to read isn’t as easy as A, B, C

While most states mandate screenings to guide early interventions for children struggling with reading, many teachers feel underprepared to administer and interpret them.


In most states, schools are required to screen students as they enter kindergarten — a process that is meant to identify students who may need extra help learning to read. However, a new study by MIT researchers suggests that these screenings may not be working as intended in all schools.

The researchers’ survey of about 250 teachers found that many felt they did not receive adequate training to perform the tests, and about half reported that they were not confident that children who need extra instruction in reading end up receiving it.

When performed successfully, these screens can be essential tools to make sure children get the extra help they need to learn to read. However, the new findings suggest that many school districts may need to tweak how they implement the screenings and analyze the results, the researchers say.

“This result demonstrates the need to have a systematic approach for how the basic science on how children learn to read is translated into educational opportunity,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli is the senior author of the new open-access study, which appears today in Annals of Dyslexia. Ola Ozernov-Palchik, an MIT research scientist who is also a research assistant professor at Boston University Wheelock College of Education and Human Development, is the lead author of the study.

Boosting literacy

Over the past 20 years, national reading proficiency scores in the United States have trended up, but only slightly. In 2022, 33 percent of fourth-graders achieved reading proficiency, compared to 29 percent in 1992, according to the National Assessment of Educational Progress reading report card. (The highest level achieved in the past 20 years was 37 percent, in 2017.)

In hopes of boosting those rates, most states have passed laws requiring students to be screened for potential reading struggles early in elementary school. In most cases, the screenings are required two or three times per year, in kindergarten, first grade, and second grade.

These tests are designed to identify students who have difficulty with skills such as identifying letters and the sounds they make, blending sounds to make words, and recognizing words that rhyme. Students with low scores in these measures can then be offered extra interventions designed to help them catch up.

“The indicators of future reading disability or dyslexia are present as early as within the first few months of kindergarten,” Ozernov-Palchik says. “And there’s also an overwhelming body of evidence showing that interventions are most effective in the earliest grades.”

In the new study, the researchers wanted to evaluate how effectively these screenings are being implemented in schools. With help from the National Center for Improving Literacy, they posted on social media sites seeking classroom teachers and reading specialists who are responsible for administering literacy screening tests.

The survey respondents came from 39 states and represented public and private schools, located in urban, suburban, and rural areas. The researchers asked those teachers dozens of questions about their experience with the literacy screenings, including questions about their training, the testing process itself, and the results of the screenings.

One of the significant challenges reported by the respondents was a lack of training. About 75 percent reported that they received fewer than three hours of training on how to perform the screens, and 44 percent received no training at all or less than an hour of training.

“Under ideal conditions, there is an expert who trains the educators, they provide practice opportunities, they provide feedback, and they observe the educators administer the assessment,” Ozernov-Palchik says. “None of this was done in many of the cases.”

Instead, many educators reported that they spent their own time figuring out how to give the evaluations, sometimes working with colleagues. And, new hires who arrived at a school after the initial training was given were often left on their own to figure it out.

Another major challenge was suboptimal conditions for administering the tests. About 80 percent of teachers reported interruptions during the screenings, and 40 percent had to do the screens in noisy locations such as a school hallway. More than half of the teachers also reported technical difficulties in administering the tests, and that rate was higher among teachers who worked at schools with a higher percentage of students from low socioeconomic status (SES) backgrounds.

Teachers also reported difficulties when it came to evaluating students categorized as English language learners (ELL). Many teachers relayed that they hadn’t been trained on how to distinguish students who were having trouble reading from those who struggled on the tests because they didn’t speak English well.

“The study reveals that there’s a lot of difficulty understanding how to handle English language learners in the context of screening,” Ozernov-Palchik says. “Overall, those kids tend to be either over-identified or under-identified as needing help, but they’re not getting the support that they need.”

Unrealized potential

Most concerning, the researchers say, is that in many schools, the results of the screening tests are not being used to get students the extra help that they need. Only 44 percent of the teachers surveyed said that their schools had a formal process for creating intervention plans for students after the screening was performed.

“Even though most educators said they believe that screening is important to do, they’re not feeling that it has the potential to drive change the way that it’s currently implemented,” Ozernov-Palchik says.

In the study, the researchers recommended several steps that state legislatures or individual school districts can take to make the screening process run more smoothly and successfully.

“Implementation is the key here,” Ozernov-Palchik says. “Teachers need more support and professional development. There needs to be systematic support as they administer the screening. They need to have designated spaces for screening, and explicit instruction in how to handle children who are English language learners.”

The researchers also recommend that school districts train an individual to take charge of interpreting the screening results and analyzing the data, to make sure that the screenings are leading to improved success in reading.

In addition to advocating for those changes, the researchers are also working on a technology platform that uses artificial intelligence to provide more individualized instruction in reading, which could help students receive help in the areas where they struggle the most.

The research was funded by Schmidt Sciences, the Chan Zuckerberg Initiative for the Reach Every Reader project, and the Halis Family Foundation.


The joy of life (sciences)

Mary Gallagher’s deeply rooted MIT experience and love of all life supports growth at the MIT Department of Biology.


For almost 30 years, Mary Gallagher has supported award-winning faculty members and their labs in the same way she tends the soil beneath her garden. In both, she pairs diligence and experience with a delight in the way that interconnected ecosystems contribute to the growth of a plant, or an idea, seeded in the right place.

Gallagher, a senior administrative assistant in the Department of Biology, has spent much of her career at MIT. Her mastery in navigating the myriad tasks required by administrators, and her ability to build connections, have supported and elevated everyone she interacts with, at the Institute and beyond.

Oh, the people you’ll know

Gallagher didn’t start her career at MIT. Her first role following graduation from the University of Vermont in the early 1980s was at a nearby community arts center, where she worked alongside a man who would become a household name in American politics. 

“This guy had just been elected mayor, shockingly, of Burlington, Vermont, by under 100 votes, unseating the incumbent. He went in and created this arts council and youth office,” Gallagher recalls.

That political newcomer was none other than a young Bernie Sanders, now the longest-serving independent senator in U.S. congressional history. 

Gallagher arrived at MIT in 1996, becoming an administrative assistant (aka “lab admin”) in what was then called the MIT Energy Laboratory. Shortly after her arrival, Cecil and Ida Green Professor of Physics and Engineering Systems Ernest Moniz transformed the laboratory into the MIT Energy Initiative (MITEI).

Gallagher quickly learned how versatile the work of an administrator can be. As MITEI rapidly grew, she interacted with people across campus and the Institute’s vast array of disciplines, including mechanical engineering, political science, and economics. 

“Admin jobs at MIT are really crazy because of the depth of work that we’re willing to do to support the institution. I was hired to do secretarial work, and next thing I know, I was traveling all the time, and planning a five-day, 5,000-person event down in D.C.,” Gallagher says. “I developed crazy computer and event-planner skills.”

Although such tasks may seem daunting to some, Gallagher has been thrilled with the opportunities she’s had to meet so many people and develop so many new skills. As a lab admin in MITEI for 18 years, she mastered navigating MIT administration, lab finances, and technical support. When Moniz left MITEI to lead the U.S. Department of Energy under President Obama, she moved to the Department of Biology at MIT.

Mutual thriving

Over the years, Gallagher has fostered the growth of students and colleagues at MIT, and vice versa. 

Friend and former colleague Samantha Farrell recalls her first days at MITEI as a rather nervous and very "green" temp, when Gallagher offered her an excellent cappuccino from her new Nespresso coffee machine. 

“I treasure her friendship and knowledge,” Farrell says. “She taught me everything I needed to know about being an admin and working in research.”

Gallagher’s experience has also set faculty across the Institute up for success. 

According to one principal investigator she currently supports, Novartis Professor of Biology Leonard Guarente, Gallagher is “extremely impactful and, in short, an ideal administrative assistant."

Similarly, professor of biology Daniel Lew is grateful that her extensive MIT experience was available as he moved his lab to the Institute in recent years. “Mary was invaluable in setting up and running the lab, teaching at MIT, and organizing meetings and workshops,” Lew says. “She is a font of knowledge about MIT.”

A willingness to share knowledge, resources, and sometimes a cappuccino is just as critical as a willingness to learn, especially at a teaching institution like MIT. So it goes without saying that the students at MIT have left their mark on Gallagher in turn — including teaching her how to format a digital table of contents on her very first day at MIT.

“Working with undergrads and grad students is my favorite part of MIT. Their generosity leaves me breathless,” says Gallagher. “No matter how busy they are, they’re always willing to help another person.” 

Campus community

Gallagher cites the decline in community following the Covid-19 pandemic shutdown as one of her most significant challenges. 

Prior to Covid, Gallagher says, “MIT had this great sense of community. Everyone had projects, volunteered, and engaged. The campus was buzzing, it was a hoot!” 

She nurtured that community, from participating actively in the MIT Women’s League to organizing an award-winning relaunch of Artist Behind the Desk. This subgroup of the MIT Working Group for Support Staff Issues hosted lunchtime recitals and visual art shows to bring together staff artists from around campus, work for which the group received a 2005 MIT Excellence Award for Creating Connections.

Moreover, Gallagher is an integral part of the smaller communities within the labs she supports.

Professor of biology and American Cancer Society Professor Graham Walker, yet another Department of Biology faculty member Gallagher supports, says, “Mary’s personal warmth and constant smile has lit up my lab for many years, and we are all grateful to have her as such a good colleague and friend.”

She strives to restore the sense of community that the campus used to have, but recognizes that striving for bygone days is futile.

“You can never go back in time and make the future what it was in the past,” she says. “You have to reimagine how we can make ourselves special in a new way.”

Spreading her roots

Gallagher’s life has been inextricably shaped by the Institute, and MIT, in turn, would not be what it is if not for Gallagher’s willingness to share her wisdom on the complexities of administration alongside the “joie de vivre” of her garden’s butterflies.

She recently bought a home in rural New Hampshire, trading the buzzing crowds of campus for the buzzing of local honeybees. Her work ethic is reflected in her ongoing commitment to curiosity, through reading about native plant life and documenting pollinating insects as they wander about her flowers. 

Just as she can admire each bug and flower for the role it plays in the larger system, Gallagher has participated in and contributed to a culture of appreciating the role of every individual within the whole.

“At MIT’s core, they believe that everybody brings something to the table,” she says. “I wouldn’t be who I am if I didn’t work at MIT and meet all these people.”


Astronomical data collection of Taurus Molecular Cloud-1 reveals over 100 different molecules

The discovery will help researchers understand how chemicals form and change before stars and planets are born.


MIT researchers recently studied a region of space called the Taurus Molecular Cloud-1 (TMC-1) and discovered more than 100 different molecules floating in the gas there — more than in any other known interstellar cloud. They used powerful radio telescopes capable of detecting very faint signals across a wide range of wavelengths in the electromagnetic spectrum.

With over 1,400 observing hours on the Green Bank Telescope (GBT) — the world’s largest fully steerable radio telescope, located in West Virginia — researchers in the group of Brett McGuire collected the astronomical data needed to search for molecules in deep space and have made the full dataset publicly available. From these observations, published in The Astrophysical Journal Supplement Series (ApJS), the team catalogued 102 molecules in TMC-1, a cold interstellar cloud where sunlike stars are born. Most of these molecules are hydrocarbons (made only of carbon and hydrogen) and nitrogen-rich compounds, in contrast to the oxygen-rich molecules found around forming stars. Notably, they also detected 10 aromatic molecules (ring-shaped carbon structures), which make up a small but significant fraction of the carbon in the cloud.

“This project represents the single largest amount of telescope time for a molecular line survey that has been reduced and publicly released to date, enabling the community to pursue discoveries such as biologically relevant organic matter,” said Ci Xue, a postdoc in the McGuire Group and the project’s principal researcher. “This molecular census offers a new benchmark for the initial chemical conditions for the formation of stars and planets.”

To handle the immense dataset, the researchers built an automated system to organize and analyze the results. Using advanced statistical methods, they determined the abundance of each molecule present, including isotopic variants containing slightly different atoms (such as carbon-13 or deuterium).
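The article doesn't describe the pipeline's internals, but the cataloguing step it alludes to can be illustrated with a toy sketch: matching observed spectral peaks against a catalog of known rest frequencies to decide which molecules are present. The molecule names are real species seen in cold clouds, but every frequency, tolerance, and peak value below is an illustrative assumption, not data from the TMC-1 survey.

```python
# Toy molecular census: flag a molecule as "detected" when at least one of its
# catalog rest frequencies matches an observed spectral peak within tolerance.
# All numbers are illustrative placeholders, not survey data.

CATALOG = {
    "HC3N": [9098.3, 18196.3, 27294.3],   # cyanoacetylene (illustrative values, MHz)
    "c-C3H2": [18343.1, 21587.4],         # cyclopropenylidene (illustrative values)
    "CH3OH": [24933.5],                   # methanol (illustrative value)
}

def census(observed_mhz, tolerance_mhz=0.5):
    """Return (molecule, matched-line count) for molecules with >= 1 match."""
    detected = []
    for molecule, rest_freqs in CATALOG.items():
        # Count how many catalog lines have a nearby observed peak.
        hits = sum(
            any(abs(obs - rest) <= tolerance_mhz for obs in observed_mhz)
            for rest in rest_freqs
        )
        if hits:
            detected.append((molecule, hits))
    return detected

peaks = [9098.4, 18196.2, 18343.0]  # hypothetical observed peaks (MHz)
print(census(peaks))  # HC3N matches two lines, c-C3H2 matches one
```

A real survey pipeline would of course work with calibrated intensities and statistical fits rather than simple frequency matching, but the bookkeeping idea — comparing thousands of observed features against known line catalogs — is the same.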

“The data we’re releasing here are the culmination of more than 1,400 hours of observational time on the GBT, one of the NSF’s premier radio telescopes,” says McGuire, the Class of 1943 Career Development Associate Professor of Chemistry. “In 2021, these data led to the discovery of individual PAH molecules in space for the first time, answering a three-decade-old mystery dating back to the 1980s. In the following years, many more and larger PAHs have been discovered in these data, showing that there is indeed a vast and varied reservoir of this reactive organic carbon present at the earliest stages of star and planet formation. There is still so much more science, and so many new molecular discoveries, to be made with these data, but our team feels strongly that datasets like this should be opened to the scientific community, which is why we’re releasing the fully calibrated, reduced, science-ready product freely for anyone to use.”

Overall, the study provides the single largest publicly released molecular line survey to date, a benchmark for the chemical conditions that exist before stars and planets form and a resource in which the scientific community can pursue further discoveries, such as biologically relevant molecules.


Support with purpose, driven by empathy

Professors Michael McDonald and Kristala Prather are honored as “Committed to Caring.”


MIT professors Michael McDonald and Kristala Prather embody a form of mentorship defined not only by technical expertise, but by care. They remind us that the most lasting academic guidance is not only about advancing research, but about nurturing students along the way.

For McDonald’s students, his presence is one of deep empathy and steady support. They describe him as fully committed to their well-being and success — someone whose influence reaches beyond academics to the heart of what it means to feel valued in a community. Prather is celebrated for the way she invests in her mentees beyond formal advising, offering guidance and encouragement that helps them chart paths forward with confidence.

Together, they create spaces where students are affirmed as individuals as well as scholars. 

Professors McDonald and Prather are members of the 2023–25 Committed to Caring cohort, recognized for their dedication to fostering growth, resilience, and belonging across MIT.

Michael McDonald: Empathetic, dedicated, and deeply understanding

Michael McDonald is an associate professor of physics at the MIT Kavli Institute for Astrophysics and Space Research. His research focuses on the evolution of galaxies and clusters of galaxies, and the role that environment plays in dictating this evolution. 

A shining example of an empathetic and caring advisor, McDonald supports his students, fostering an environment where they can overcome challenges and grow with confidence. One of his students says that “if one of his research or class students is progressing slowly or otherwise struggling, he treats them with respect, care, and understanding, enabling them to maintain confidence and succeed.”

McDonald also goes above and beyond in offering help and guidance, never expecting thanks, praise, or commendation. A student expressed, “he does not need to be asked to advocate for students experiencing personal or academic challenges. He does not need to be asked to improve graduate student education and well-being at MIT. He does not need to be asked to care for students who may otherwise be left behind.”

When asked to describe his advising style, McDonald shared the mantra “we’re humans first, scientists second.” He models his commitment to this idea, prioritizing balance for himself while also ensuring that his students feel happy and fulfilled. “If I’m not doing well, or am unhappy with my own work/life balance, then I’m not going to be a very good or understanding advisor,” McDonald says.

Students are quick to identify McDonald as a dedicated and deeply understanding teacher and mentor. “Mike was consistently engaging, humble, and kind, both bolstering our love of astrophysics and making us feel welcome and supported,” one advisee commended.

On top of weekly meetings, he conducts separate check-ins with his students each semester, not only to track their accomplishments and progress toward their personal goals, but also to evaluate his own mentoring and identify areas for improvement.

McDonald “thinks deeply and often about the long-term trajectory of his advisees, how they will fit into the modern research landscape, and helps them to develop professional and personal support networks that will help them succeed and thrive.”

McDonald feels that projects should be so much fun that they do not feel like work. To this end, he spends a lot of time developing and fleshing out a wide variety of research projects. When he takes on a new student, he presents them with five to 10 possible projects that they could lead, and works with them to find the one that is best matched to the student’s interests and abilities. 

“This is a lot of work on my end — and many of these projects never see the light of day — but I think it leads to better outcomes and happier group members,” McDonald says. One of the most impactful qualities in a mentor and supervisor is how they deal with challenges and failures, both their own and those of others, and this is something McDonald does very effectively.

One nominator sums up McDonald’s character, writing that “Michael McDonald fully embodies the spirit of Committed to Caring as a teacher, advisor, counselor, and role model for the MIT community. He consistently impacts the lives of his students, mentees, and the physics community as a whole, encouraging us to be the best versions of ourselves while striving to be a better mentor, father, and friend.”

Kristala Prather: Meaningful support and departmental impact

Kristala Prather is the Arthur Dehon Little Professor of Chemical Engineering and is the head of the Department of Chemical Engineering. Her research involves the design and assembly of novel pathways for biological synthesis, enhancement of enzyme activity and control of metabolic flux, and bioprocess engineering and design.

Prather has proven to be a dedicated mentor and role model for her students, particularly those from underrepresented backgrounds. One nominator, an immigrant woman of color with no prior exposure to academia before coming to MIT, says that Prather’s guidance has been extremely important for her. Prather has pointed her to resources that she didn't know existed, and helped her navigate U.S. and academic norms that she was not well-versed in.

“As an international student navigating two new cultures (that of the U.S. as well as that of academia), it is easy to feel inadequate, confused, frustrated, or undeserving,” the student stated. Prather’s level of mentorship may not be easy to find, and it is extremely important to the success of all students, especially to marginalized students. 

Prather actively listens to her students’ concerns and helps them to identify their areas of academic improvement with regard to their desired career path. She consistently creates a comfortable space for authentic conversations where mentees feel supported both professionally and personally. Through her deep caring, advisees feel a sense of belonging and worthiness in academia.

“I treat everyone fairly, which is not the same as treating everyone the same,” Prather says. This is Prather’s way of acknowledging the reality that each individual comes as a unique person; different people need different advising approaches. The goal is to get everyone to the same endpoint, irrespective of where they start.

In addition to the meaningful support which Prather provides her students, she has also dedicated extra time to mentoring. One nominator explained that Prather has been known to meet with individual students in the department to check in on their progress and help them navigate academia. She also works closely with the Office of Graduate Education to connect students from disadvantaged backgrounds to resources that will help them succeed. In the department, she is known to be a trustworthy and caring mentor. 

Since much of Prather’s mentoring goes beyond her official duties, this work can easily be overlooked. It is clear that she has deliberately dedicated extra time to helping students, on top of her numerous commitments and official positions both inside and outside of the department. Through their nominations, students called for recognition of Prather’s mentorship, stating that it “has meaningfully impacted so many in the department.”


With a new molecule-based method, physicists peer inside an atom’s nucleus

An alternative to massive particle colliders, the approach could reveal insights into the universe’s starting ingredients.


Physicists at MIT have developed a new way to probe inside an atom’s nucleus, using the atom’s own electrons as “messengers” within a molecule.

In a study appearing today in the journal Science, the physicists precisely measured the energy of electrons whizzing around a radium atom that had been paired with a fluoride atom to make a molecule of radium monofluoride. They used the environment within the molecule as a sort of microscopic particle collider, one that confined the radium atom’s electrons and encouraged them to briefly penetrate the atom’s nucleus.

Typically, experiments to probe the inside of atomic nuclei involve massive, kilometers-long facilities that accelerate beams of electrons to speeds fast enough to collide with and break apart nuclei. The team’s new molecule-based method offers a table-top alternative to directly probe the inside of an atom’s nucleus.

Within molecules of radium monofluoride, the team measured the energies of a radium atom’s electrons as they pinged around inside the molecule. They discerned a slight energy shift and determined that electrons must have briefly penetrated the radium atom’s nucleus and interacted with its contents. As the electrons winged back out, they retained this energy shift, providing a nuclear “message” that could be analyzed to sense the internal structure of the atom’s nucleus.

The team’s method offers a new way to measure the nuclear “magnetic distribution.” In a nucleus, each proton and neutron acts like a small magnet, and they align differently depending on how the nucleus’ protons and neutrons are spread out. The team plans to apply their method to precisely map this property of the radium nucleus for the first time. What they find could help to answer one of the biggest mysteries in cosmology: Why do we see much more matter than antimatter in the universe?

“Our results lay the groundwork for subsequent studies aiming to measure violations of fundamental symmetries at the nuclear level,” says study co-author Ronald Fernando Garcia Ruiz, who is the Thomas A. Franck Associate Professor of Physics at MIT. “This could provide answers to some of the most pressing questions in modern physics.”

The study’s MIT co-authors include Shane Wilkins, Silviu-Marian Udrescu, and Alex Brinson, along with collaborators from multiple institutions including the Collinear Resonance Ionization Spectroscopy Experiment (CRIS) at CERN in Switzerland, where the experiments were performed.

Molecular trap

According to scientists’ best understanding, there must have been almost equal amounts of matter and antimatter when the universe first came into existence. However, the overwhelming majority of what scientists can measure and observe in the universe is made from matter, whose building blocks are the protons and neutrons within atomic nuclei.

This observation is in stark contrast to what our best theory of nature, the Standard Model, predicts, and it is thought that additional sources of fundamental symmetry violation are required to explain the almost complete absence of antimatter in our universe. Such violations could be seen within the nuclei of certain atoms such as radium.

Unlike most atomic nuclei, which are spherical in shape, the radium atom’s nucleus has a more asymmetrical configuration, similar to a pear. Scientists predict that this pear shape could significantly enhance their ability to sense violations of fundamental symmetries, to the extent that such violations may become observable.

“The radium nucleus is predicted to be an amplifier of this symmetry breaking, because its nucleus is asymmetric in charge and mass, which is quite unusual,” says Garcia Ruiz, whose group has focused on developing methods to probe radium nuclei for signs of fundamental symmetry violation.

Peering inside the nucleus of a radium atom to investigate fundamental symmetries is an incredibly tricky exercise.

“Radium is naturally radioactive, with a short lifetime, and we can currently only produce radium monofluoride molecules in tiny quantities,” says study lead author Shane Wilkins, a former postdoc at MIT. “We therefore need incredibly sensitive techniques to be able to measure them.”

The team realized that by placing a radium atom in a molecule, they could contain and amplify the behavior of its electrons.

“When you put this radioactive atom inside of a molecule, the internal electric field that its electrons experience is orders of magnitude larger compared to the fields we can produce and apply in a lab,” explains Silviu-Marian Udrescu PhD ’24, a study co-author. “In a way, the molecule acts like a giant particle collider and gives us a better chance to probe the radium’s nucleus.”

Energy shift

In their new study, the team first paired radium atoms with fluoride atoms to create molecules of radium monofluoride. They found that in this molecule, the radium atom’s electrons were effectively squeezed, increasing the chance for electrons to interact with and briefly penetrate the radium nucleus.

The team then trapped and cooled the molecules and sent them through a system of vacuum chambers, where lasers interacted with the molecules. In this way the researchers were able to precisely measure the energies of electrons inside each molecule.

When they tallied the energies, they found that the electrons appeared to have a slightly different energy compared to what physicists would expect if the electrons had not penetrated the nucleus. Although this energy shift was small — just a millionth of the energy of the laser photon used to excite the molecules — it gave unambiguous evidence of the molecules’ electrons interacting with the protons and neutrons inside the radium nucleus.

“There are many experiments measuring interactions between nuclei and electrons outside the nucleus, and we know what those interactions look like,” Wilkins explains. “When we went to measure these electron energies very precisely, it didn’t quite add up to what we expected assuming they interacted only outside of the nucleus. That told us the difference must be due to electron interactions inside the nucleus.”

“We now have proof that we can sample inside the nucleus,” Garcia Ruiz says. “It’s like being able to measure a battery’s electric field. People can measure its field outside, but to measure inside the battery is far more challenging. And that’s what we can do now.”

Going forward, the team plans to apply the new technique to map the distribution of forces inside the nucleus. Their experiments have so far involved radium nuclei that sit in random orientations inside each molecule at high temperature. Garcia Ruiz and his collaborators would like to be able to cool these molecules and control the orientations of their pear-shaped nuclei such that they can precisely map their contents and hunt for the violation of fundamental symmetries.

“Radium-containing molecules are predicted to be exceptionally sensitive systems in which to search for violations of the fundamental symmetries of nature,” Garcia Ruiz says. “We now have a way to carry out that search.”

This research was supported, in part, by the U.S. Department of Energy. 


Five with MIT ties elected to National Academy of Medicine for 2025

Professors Facundo Batista and Dina Katabi, along with three additional MIT alumni, are honored for their outstanding professional achievement and commitment to service.


On Oct. 20 during its annual meeting, the National Academy of Medicine announced the election of 100 new members, including MIT faculty members Dina Katabi and Facundo Batista, along with three additional MIT alumni.

Election to the National Academy of Medicine (NAM) is considered one of the highest honors in the fields of health and medicine, recognizing individuals who have demonstrated outstanding professional achievement and commitment to service.

Facundo Batista is the associate director and scientific director of the Ragon Institute of MGH, MIT and Harvard, as well as the first Phillip T. and Susan M. Ragon Professor in the MIT Department of Biology. The National Academy of Medicine recognized Batista for “his work unraveling the biology of antibody-producing B cells to better understand how our body’s immune system responds to infectious disease.” More recently, Batista’s research has advanced preclinical vaccine and therapeutic development for globally important diseases including HIV, malaria, and influenza.

Batista earned a PhD from the International School of Advanced Studies and established his lab in 2002 as a member of the Francis Crick Institute (formerly the London Research Institute), simultaneously holding a professorship at Imperial College London. In 2016, he joined the Ragon Institute to pursue a new research program applying his expertise in B cells and antibody responses to vaccine development, and preclinical vaccinology for diseases including SARS-CoV-2 and HIV. Batista is an elected fellow or member of the U.K. Academy of Medical Sciences, the American Academy of Microbiology, the Academia de Ciencias de América Latina, and the European Molecular Biology Organization, and he is chief editor of The EMBO Journal.

Dina Katabi SM ’99, PhD ’03 is the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science at MIT. Her research spans digital health, wireless sensing, mobile computing, machine learning, and computer vision. Katabi’s contributions include efficient communication protocols for the internet, advanced contactless biosensors, and novel AI models that interpret physiological signals. The NAM recognized Katabi for “pioneering digital health technology that enables non-invasive, off-body remote health monitoring via AI and wireless signals, and for developing digital biomarkers for Parkinson’s progression and detection. She has translated this technology to advance objective, sensitive measures of disease trajectory and treatment response in clinical trials.”

Katabi is director of the MIT Center for Wireless Networks and Mobile Computing. She is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), where she leads the Networks at MIT Research Group. Katabi received a bachelor’s degree from the University of Damascus and MS and PhD degrees in computer science from MIT. She is a MacArthur Fellow; a member of the American Academy of Arts and Sciences, National Academy of Sciences, and National Academy of Engineering; and a recipient of the ACM Prize in Computing.

Additional MIT alumni who were elected to the NAM for 2025 are:

Established originally as the Institute of Medicine in 1970 by the National Academy of Sciences, the National Academy of Medicine addresses critical issues in health, science, medicine, and related policy, and inspires positive actions across sectors.

“I am deeply honored to welcome these extraordinary health and medicine leaders and researchers into the National Academy of Medicine,” says NAM President Victor J. Dzau. “Their demonstrated excellence in tackling public health challenges, leading major discoveries, improving health care, advancing health policy, and addressing health equity will critically strengthen our collective ability to tackle the most pressing health challenges of our time.” 


Neural activity helps circuit connections mature into optimal signal transmitters

Scientists identified how circuit connections in fruit flies become tuned to the right size and degree of signal transmission capability. Understanding this could lead to ways to tweak abnormal signal transmission in certain disorders.


Nervous system functions, from motion to perception to cognition, depend on the active zones of neural circuit connections, or “synapses,” sending out the right amount of their chemical signals at the right times. By tracking how synaptic active zones form and mature in fruit flies, researchers at The Picower Institute for Learning and Memory at MIT have revealed a fundamental model for how neural activity during development builds properly working connections.

Understanding how that happens is important, not only for advancing fundamental knowledge about how nervous systems develop, but also because many disorders, including epilepsy, autism, and intellectual disability, can arise from aberrant synaptic transmission, says senior author Troy Littleton, the Menicon Professor in The Picower Institute and MIT’s Department of Biology. The new findings, funded in part by a 2021 grant from the National Institutes of Health, provide insights into how active zones develop the ability to send neurotransmitters across synapses to their circuit targets. That ability is not instant or predestined, the study shows: it can take days to fully mature, and it is regulated by neural activity.

If scientists can fully understand the process, Littleton says, then they can develop molecular strategies to intervene to tweak synaptic transmission when it’s happening too much or too little in disease.

“We’d like to have the levers to push to make synapses stronger or weaker, that’s for sure,” Littleton says. “And so knowing the full range of levers we can tug on to potentially change output would be exciting.”

Littleton Lab research scientist Yuliya Akbergenova led the study published Oct. 14 in the Journal of Neuroscience.

How newborn synapses grow up

In the study, the researchers examined neurons that send the neurotransmitter glutamate across synapses to control muscles in the fly larvae. To study how the active zones in the animals matured, the scientists needed to keep track of their age. That hasn’t been possible before, but Akbergenova overcame the barrier by cleverly engineering the fluorescent protein mMaple, which changes its glow from green to red when zapped with 15 seconds of ultraviolet light, into a component of the glutamate receptors on the receiving side of the synapse. Then, whenever she wanted, she could shine light and all the synapses already formed before that time would glow red, and any new ones that formed subsequently would glow green.

With the ability to track each active zone’s birthday, the authors could then document how active zones developed their ability to increase output over the course of days after birth. The researchers watched as synapses were built over many hours by tagging each of eight kinds of proteins that make up an active zone. At first, the active zones couldn’t transmit anything. Then, as some essential early proteins accumulated, they could send out glutamate spontaneously, but not when evoked by electrical stimulation of their host neuron (simulating how that neuron might be signaled naturally in a circuit). Only after several more proteins arrived did active zones possess the mature structure that lets calcium ions trigger the fusion of glutamate vesicles with the cell membrane for evoked release across the synapse.

Activity matters

Of course, construction does not go on forever. At some point, the fly larva stops building one synapse and then builds new ones further down the line as the neuronal axon expands to keep up with growing muscles. The researchers wondered whether neural activity had a role in driving that process of finishing up one active zone and moving on to build the next.

To find out, they employed two different interventions to block active zones from being able to release glutamate, thereby preventing synaptic activity. Notably, one of the methods they chose was blocking the action of a protein called Synaptotagmin 1. That’s important because mutations that disrupt the protein in humans are associated with severe intellectual disability and autism. Moreover, the researchers tailored the activity-blocking interventions to just one neuron in each larva because blocking activity in all their neurons would have proved lethal.

In neurons where the researchers blocked activity, they observed two consequences: the neurons stopped building new active zones and instead kept making existing active zones larger and larger. It was as if the neuron could tell the active zone wasn’t releasing glutamate and tried to make it work by giving it more protein material to work with. That effort came at the expense of starting construction on new active zones.

“I think that what it’s trying to do is compensate for the loss of activity,” Littleton says.

Testing indicated that the enlarged active zones the neurons built in hopes of restarting activity were functional (or would have been if the researchers weren’t artificially blocking them). This suggested that the neuron likely sensed the lack of glutamate release through a feedback signal from the muscle side of the synapse. To test that, the scientists knocked out a glutamate receptor component in the muscle; when they did, the neurons no longer made their active zones larger.

Littleton says the lab is already looking into the new questions the discoveries raise. In particular: What are the molecular pathways that initiate synapse formation in the first place, and what are the signals that tell an active zone it has finished growing? Finding those answers will bring researchers closer to understanding how to intervene when synaptic active zones aren’t developing properly.

In addition to Littleton and Akbergenova, the paper’s other authors are Jessica Matthias and Sofya Makeyeva.

In addition to the National Institutes of Health, The Freedom Together Foundation provided funding for the study.


A new advising neighborhood takes shape along the Infinite Corridor

The Undergraduate Advising Center’s new home in Building 11 creates a bright, welcoming, and functional destination for MIT undergraduate students.


On any given day, MIT’s famed 825-foot Infinite Corridor serves as a busy, buzzing pedestrian highway, offering campus commuters a quick, if congested, route from point A to B. With the possible exception of MIT Henge twice a year, it doesn’t exactly invite lingering.

Thanks to a recent renovation on the first floor of Building 11, the former location of Student Financial Services, there’s now a compelling reason for students to step off the busy thoroughfare and pause for conversation or respite.

Dubbed by one onlooker as “the spaceport,” the area has been transformed into an airy, multi-functional hub. Nestled inside is the Undergraduate Advising Center (UAC), which launched in 2023 to provide holistic support for students’ personal and academic growth by providing individualized advising for all four years, offering guidance about and connections to MIT resources, and partnering with faculty and departments to ensure a comprehensive advising experience.

Students can now find another key service conveniently located close by: Career Advising and Professional Development has moved into renovated office suites just down the hall, in Building 7.

“It’s just stunning!” marvels Diep Luu, senior associate dean and director of the UAC. “You can’t help but notice the contrast between the historic architecture and the contemporary design. The space is filled with natural light thanks to the floor-to-ceiling windows, and it makes the environment both energizing and comfortable.”

Designed by Merge Architects, the 5,000 square-foot space opens off the Infinite with several informal public spaces for students and community members. These include a series of soaring, vaulted booths with a variety of tables and seating to support multiple kinds of socialization and/or work, a cozy lounge lined with pi wallpaper (carried out to 10,638 digits after 3.14), and the “social stairs” for informal gatherings and workshops. Beyond that, glass doors lead to the UAC office space, which features open workstations, private advising rooms, and conference rooms with Zoom capability.

“We wanted to incorporate as many different kinds of spaces to accommodate as many different kinds of interactions as we could,” explains Kate Trimble, senior associate dean and chief of staff of the Division of Graduate and Undergraduate Education (GUE), who helped guide the renovation project. “After all, the UAC will support all undergraduate students for their entire four-year MIT journey, through a wide variety of experiences, challenges, and celebrations.”

Homing in on the “Boardwalk or Park Place of MIT real estate”

The vision for the new district began to percolate in 2022. At the time, GUE (then known as the Office of the Vice Chancellor, or OVC) was focusing on two separate, key priorities: reconfiguring office space in a post-pandemic, flex-work world; and creating a new undergraduate advising center, in accordance with one of the Task Force 2021 recommendations.

A faculty and staff working group gathered information and ideas from offices and programs that had already implemented “flex-space” strategies, such as Human Resources, IS&T, and the MIT Innovation Headquarters. In thinking about an advising center of the size and scope envisioned, Trimble notes, “we quickly zeroed in on the Building 11 space. It’s such a prominent location. Former Vice Chancellor (and current Vice President for Research) Ian A. Waitz referred to it as the ‘Boardwalk or Park Place of MIT real estate.’ And if you’re thinking about a center that’s going to serve all undergraduates, you really want it to be convenient and centrally located — and boy, that’s a perfect space.”

As plans were made to relocate Student Financial Services to a new home in Building E17, the renovation team engaged undergraduate students and advising staff in the design process through a series of charrette-style workshops and focus groups. Students shared feedback about spaces on campus where they felt most comfortable, as well as those they disliked. From staff, the team learned which design elements would make the space as functional as possible, allowing for the variety of interactions they typically have with students.

The team selected Merge Architects for the project, Trimble says, because “they understood that we were not looking to build something that was an architectural temple, but rather a functional and fun space that meets the needs of our students and staff. They’ve been creative and responsive partners.” She also credits the MIT Campus Construction group and the Office of Campus Planning for their crucial role in the renovation. “I can’t say enough good things about them. They’ve been superb guides through a long and complicated process.”

A more student-centric Infinite Corridor

Construction wrapped up in late summer, and the UAC held an open house for students on Registration Day, Sept. 3. It buzzed with activity as students admired the space, chatted with UAC staff, took photos, and met the office mascot, Winni, a friendly chocolate Labrador retriever.

“Students have been amazed by the transformation,” says Luu. “We wanted a space that encourages community and collaboration, one that feels alive and dynamic, and the early feedback suggests that’s exactly what’s happening. It also gives us a chance to better connect students not only with what the UAC offers, but also with support across the Institute.”

“Last year, the UAC offices were behind these two wooden doors in the Infinite Corridor and you had to know that they were there to get to them,” says junior Caleb Mathewos, who has been a UAC orientation leader and captain over the past two years. “The space is very inviting now. I’ve seen people sitting there and working, or just relaxing between classes. I see my friends every now and then, and I’ll stop by and chat with them. Because it’s so much more open, it makes the UAC feel a lot more accessible to students.”

Senior Calvin Macatantan, who’s been involved with the UAC’s First Generation/Low Income Program since his first year and served as an associate advisor and orientation leader, thinks the new space will make it easier for students — especially first years — to find what they need to navigate MIT. “Before, resources felt scattered across different parts of the Infinite, even though they had similar missions of advising and supporting students. It’s nice that there’s a central, welcoming space where those supports connect, and I think that will make a big difference in how students experience MIT.”

The transformation adds significantly to a trend toward creating more student-centric spaces along the Infinite. In the past few years, MIT has added two new study lounges in Building 3, the DEN and the LODGE, and the Department of Materials Science and Engineering built the DMSE Breakerspace in Building 4. This fall, another office suite along the Infinite will be remodeled into a new tutoring hub.

“It’s wonderful to see the UAC space and the whole advising ‘neighborhood,’ if you will, come to fruition,” says Vice Chancellor for Graduate and Undergraduate Education David L. Darmofal. “The need to strengthen undergraduate advising and the opportunity to do so through an Institute advising hub was an outcome of the Task Force 2021 effort, and it’s taken years of thoughtful reflection by many stakeholders to lay the foundation for such a significant sea change in advising. This space is a tangible, visible commitment to putting students first.”


MIT Schwarzman College of Computing welcomes 11 new faculty for 2025

The faculty members occupy core computing and shared positions, bringing varied backgrounds and expertise to the MIT community.


The MIT Schwarzman College of Computing welcomes 11 new faculty members in core computing and shared positions to the MIT community. They bring varied backgrounds and expertise spanning sustainable design, satellite remote sensing, decision theory, and the development of new algorithms for declarative artificial intelligence programming, among others.

“I warmly welcome this talented group of new faculty members. Their work lies at the forefront of computing and its broader impact in the world,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

College faculty include those with appointments in the Department of Electrical Engineering and Computer Science (EECS) or in the Institute for Data, Systems, and Society (IDSS), which report to both the MIT Schwarzman College of Computing and the School of Engineering. There are also several new faculty members in shared positions between the college and other MIT departments and sections, including Political Science, Linguistics and Philosophy, History, and Architecture.

“Thanks to another successful year of collaborative searches, we have hired six additional faculty in shared positions, bringing the total to 20,” says Huttenlocher.

The new shared faculty include:

Bailey Flanigan is an assistant professor in the Department of Political Science, holding an MIT Schwarzman College of Computing shared position with EECS. Her research combines tools from social choice theory, game theory, algorithms, statistics, and survey methods to advance political methodology and strengthen democratic participation. She is interested in sampling algorithms, opinion measurement, and the design of democratic innovations like deliberative minipublics and participatory budgeting. Flanigan was a postdoc at Harvard University’s Data Science Initiative, and she earned her PhD in computer science from Carnegie Mellon University.

Brian Hedden PhD ’12 is a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with EECS. His research focuses on how we ought to form beliefs and make decisions. His work spans epistemology, decision theory, and ethics, including the ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics such as collective action problems, legal standards of proof, algorithmic fairness, and political polarization. Prior to joining MIT, he was a faculty member at the Australian National University and the University of Sydney, and a junior research fellow at Oxford University. He received his BA from Princeton University and his PhD from MIT, both in philosophy.

Yunha Hwang is an assistant professor in the Department of Biology, holding an MIT Schwarzman College of Computing shared position with EECS. She is also a member of the Laboratory for Information and Decision Systems. Her research interests span machine learning for sustainable biomanufacturing, microbial evolution, and open science. She serves as the co-founder and chief scientist at Tatta Bio, a scientific nonprofit dedicated to advancing genomic AI for biological discovery. She holds a BS in computer science from Stanford University and a PhD in biology from Harvard University.

Ben Lindquist is an assistant professor in the History Section, holding an MIT Schwarzman College of Computing shared position with EECS. Through a historical lens, his work examines the ways that computing has circulated alongside ideas of religion, emotion, and divergent thinking. His book, “The Feeling Machine” (University of Chicago Press, forthcoming), follows the history of synthetic speech to examine how emotion became a subject of computer science. He was a postdoc in the Science in Human Culture Program at Northwestern University and earned his PhD in history from Princeton University.

Mariana Popescu is an assistant professor in the Department of Architecture, holding an MIT Schwarzman College of Computing shared position with EECS. She is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). A computational architect and structural designer, Popescu has a strong interest and experience in innovative ways of approaching the fabrication process and use of materials in construction. Her area of expertise is computational and parametric design, with a focus on digital fabrication and sustainable design. Popescu earned her doctorate at ETH Zurich.

Paris Smaragdis SM ’97, PhD ’01 is a professor in the Music and Theater Arts Section, holding an MIT Schwarzman College of Computing shared position with EECS. His research focus lies at the intersection of signal processing and machine learning, especially as it relates to sound and music. Prior to coming to MIT, he worked as a research scientist at Mitsubishi Electric Research Labs, a senior research scientist at Adobe Research, and an Amazon Scholar with Amazon’s AWS. He spent 15 years as a professor in the Computer Science Department at the University of Illinois Urbana-Champaign, where he spearheaded the design of the CS+Music program, and served as an associate director of the School of Computer and Data Science. He holds a BMus from Berklee College of Music and earned his PhD in perceptual computing from MIT.

Daniel Varon is an assistant professor in the Department of Aeronautics and Astronautics, holding an MIT Schwarzman College of Computing shared position with IDSS. His work focuses on using satellite observations of atmospheric composition to better understand human impacts on the environment and identify opportunities to reduce them. An atmospheric scientist, Varon is particularly interested in greenhouse gases, air pollution, and satellite remote sensing. He holds an MS in applied mathematics and a PhD in atmospheric chemistry, both from Harvard University.

In addition, the School of Engineering has adopted the shared faculty search model to hire its first shared faculty member:

Mark Rau is an assistant professor in the Music and Theater Arts Section, holding a School of Engineering shared position with EECS. He is involved in developing graduate programming focused on music technology. He has an interest in musical acoustics, vibration and acoustic measurement, audio signal processing, and physical modeling synthesis. His work focuses on musical instruments and creative audio effects. He holds an MA in music, science, and technology from Stanford, as well as a BS in physics and BMus in jazz from McGill University. He earned his PhD at Stanford’s Center for Computer Research in Music and Acoustics.

The new core faculty are:

Mitchell Gordon is an assistant professor in EECS. He is also a member of CSAIL. In his research, Gordon designs interactive systems and evaluation approaches that bridge principles of human-computer interaction with the realities of machine learning. His work has won awards at conferences in human-computer interaction and artificial intelligence, including a best paper award at CHI and an oral presentation at NeurIPS. Gordon received a BS from the University of Rochester, and an MS and PhD from Stanford University, all in computer science.

Omar Khattab is an assistant professor in EECS. He is also a member of CSAIL. His work focuses on natural language processing, information retrieval, and AI systems. His research includes developing new algorithms and abstractions for declarative AI programming and for composing retrieval and reasoning. He received his BS from Carnegie Mellon University and his PhD from Stanford University, both in computer science.

Rachit Nigam will join EECS as an assistant professor in January 2026. He will also be a member of CSAIL and the Microsystems Technology Laboratories. He works on programming languages and computer architecture to address the design, verification, and usability challenges of specialized hardware. He was previously a visiting scholar at MIT. Nigam earned an MS and PhD in computer science from Cornell University.


Blending neuroscience, AI, and music to create mental health innovations

Media Lab PhD student Kimaya Lecamwasam researches how music can shape well-being.


Computational neuroscientist and singer/songwriter Kimaya (Kimy) Lecamwasam, who also plays electric bass and guitar, says music has been a core part of her life for as long as she can remember. She grew up in a musical family and played in bands all through high school.

“For most of my life, writing and playing music was the clearest way I had to express myself,” says Lecamwasam. “I was a really shy and anxious kid, and I struggled with speaking up for myself. Over time, composing and performing music became central to both how I communicated and to how I managed my own mental health.”

Along with equipping her with valuable skills and experiences, she credits her passion for music as the catalyst for her interest in neuroscience.

“I got to see firsthand not only the ways that audiences reacted to music, but also how much value music had for musicians,” she says. “That close connection between making music and feeling well is what first pushed me to ask why music has such a powerful hold on us, and eventually led me to study the science behind it.”

Lecamwasam earned a bachelor’s degree in 2021 from Wellesley College, where she studied neuroscience — specifically in the Systems and Computational Neuroscience track — and also music. During her first semester, she took a class in songwriting that she says made her more aware of the connections between music and emotions. While studying at Wellesley, she participated in the MIT Undergraduate Research Opportunities Program for three years. Working in the Department of Brain and Cognitive Sciences lab of Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, she focused primarily on classifying consciousness in anesthetized patients and training brain-computer interface-enabled prosthetics using reinforcement learning.

“I still had a really deep love for music, which I was pursuing in parallel to all of my neuroscience work, but I really wanted to try to find a way to combine both of those things in grad school,” says Lecamwasam. Brown recommended that she look into the graduate programs at the MIT Media Lab within the Program in Media Arts and Sciences (MAS), which turned out to be an ideal fit.

“One thing I really love about where I am is that I get to be both an artist and a scientist,” says Lecamwasam. “That was something that was important to me when I was picking a graduate program. I wanted to make sure that I was going to be able to do work that was really rigorous, validated, and important, but also get to do cool, creative explorations and actually put the research that I was doing into practice in different ways.”

Exploring the physical, mental, and emotional impacts of music

Informed by her years of neuroscience research as an undergraduate and her passion for music, Lecamwasam focused her graduate research on channeling the emotional potency of music into scalable, non-pharmacological mental health tools. Her master’s thesis focused on “pharmamusicology,” looking at how music might positively affect the physiology and psychology of those with anxiety.

The overarching theme of Lecamwasam’s research is exploring the various impacts of music and affective computing — physically, mentally, and emotionally. Now in the third year of her doctoral program in the Opera of the Future group, she is currently investigating the impact of large-scale live music and concert experiences on the mental health and well-being of both audience members and performers. She is also working to clinically validate music listening, composition, and performance as health interventions, in combination with psychotherapy and pharmaceutical interventions.

Her recent work, in collaboration with Professor Anna Huang’s Human-AI Resonance Lab, assesses the emotional resonance of AI-generated music compared to human-composed music; the aim is to identify more ethical applications of emotion-sensitive music generation and recommendation that preserve human creativity and agency, and can also be used as health interventions. She has co-led a wellness and music workshop at the Wellbeing Summit in Bilbao, Spain, and has presented her work at the 2023 CHI Conference on Human Factors in Computing Systems in Hamburg, Germany, and the 2024 Audio Mostly conference in Milan, Italy.

Lecamwasam has collaborated with organizations near and far to implement real-world applications of her research. She worked with Carnegie Hall's Weill Music Institute on its Well-Being Concerts and is currently partnering on a study assessing the impact of lullaby writing on perinatal health with the North Shore Lullaby Project in Massachusetts, an offshoot of Carnegie Hall’s Lullaby Project. Her main international collaboration is with a company called Myndstream, working on projects comparing the emotional resonance of AI-generated music to human-composed music and thinking of clinical and real-world applications. She is also working on a project with the companies PixMob and Empatica (an MIT Media Lab spinoff), centered on assessing the impact of interactive lighting and large-scale live music experiences on emotional resonance in stadium and arena settings.

Building community

“Kimy combines a deep love for — and sophisticated knowledge of — music with scientific curiosity and rigor in ways that represent the Media Lab/MAS spirit at its best,” says Professor Tod Machover, Lecamwasam’s research advisor, Media Lab faculty director, and director of the Opera of the Future group. “She has long believed that music is one of the most powerful and effective ways to create personalized interventions to help stabilize emotional distress and promote empathy and connection. It is this same desire to establish sane, safe, and sustaining environments for work and play that has led Kimy to become one of the most effective and devoted community-builders at the lab.”

Lecamwasam has participated in the SOS (Students Offering Support) program in MAS for a few years, which assists students from a variety of life experiences and backgrounds during the process of applying to the Program in Media Arts and Sciences. She will soon be the first MAS peer mentor as part of a new initiative through which she will establish and coordinate programs including a “buddy system,” pairing incoming master’s students with PhD students as a way to help them transition into graduate student life at MIT. She is also part of the Media Lab’s Studcom, a student-run organization that promotes, facilitates, and creates experiences meant to bring the community together.

“I think everything that I have gotten to do has been so supported by the friends I’ve made in my lab and department, as well as across departments,” says Lecamwasam. “I think everyone is just really excited about the work that they do and so supportive of one another. It makes it so that even when things are challenging or difficult, I’m motivated to do this work and be a part of this community.”


Earthquake damage at deeper depths occurs long after initial activity

While the Earth’s upper crust recovers quickly from seismic activity, new research finds the mid-crust recovers much more slowly, if at all.


Earthquakes often bring to mind images of destruction, of the Earth breaking open and altering landscapes. But after an earthquake, the surrounding region undergoes a period of post-seismic deformation, in which areas that didn’t break experience new stress as a result of the sudden change around them. Once the crust has adjusted to this new stress, it reaches a state of recovery.

Geologists have often thought that this recovery period was a smooth, continuous process. But MIT research published recently in Science has found evidence that while healing occurs quickly at shallow depths — roughly above 10 km — deeper depths recover more slowly, if at all.

“If you were to look before and after in the shallow crust, you wouldn’t see any permanent change. But there’s this very permanent change that persists in the mid-crust,” says Jared Bryan, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author on the paper.

The paper’s other authors include EAPS Professor William Frank and Pascal Audet from the University of Ottawa.

Everything but the quakes

In order to assemble a full understanding of how the crust behaves before, during, and after an earthquake sequence, the researchers looked at seismic data from the 2019 Ridgecrest earthquakes in California. This immature fault zone experienced the largest earthquake in the state in 20 years, and tens of thousands of aftershocks over the following year. The team then removed seismic data created by the sequence and looked only at waves generated by other seismic activity around the world, to see how those waves’ paths through the Earth changed before and after the sequence.

“One person’s signal is another person’s noise,” says Bryan. They also used general ambient noise from sources like ocean waves and traffic that are also picked up by seismometers. Then, using a technique called a receiver function, they were able to see the speed of the waves as they traveled and how it changed due to conditions in the Earth such as rock density and porosity, much in the same way we use sonar to see how acoustic waves change when they interact with objects. With all this information, they were able to construct basic maps of the Earth around the Ridgecrest fault zone before and after the sequence.
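As a rough illustration of the kind of measurement involved, the widely used “stretching” method estimates a relative change in wave speed (dv/v) by finding the time-axis stretch that best re-aligns a later waveform with an earlier reference. The sketch below is a generic, synthetic example of that general approach, not the study’s actual receiver-function pipeline, and all numbers in it are made up:

```python
import numpy as np

def stretch_dvv(reference, current, t, trial_stretches):
    """Pick the time-axis stretch that best aligns `current` with `reference`.

    A uniform change in wave speed shifts arrival times proportionally, so
    the best-fitting stretch factor approximates the relative travel-time
    change (and hence, with opposite sign, the velocity change dv/v).
    """
    return max(
        trial_stretches,
        key=lambda e: np.corrcoef(reference, np.interp(t * (1 + e), t, current))[0, 1],
    )

# Synthetic demo: the "after" waveform is the "before" waveform with all
# arrival times compressed by 1 percent (waves traveling ~1 percent faster).
t = np.linspace(0.0, 10.0, 2001)
before = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)
after = np.sin(2 * np.pi * 1.5 * t * 1.01) * np.exp(-0.2 * t * 1.01)

dvv = stretch_dvv(before, after, t, np.linspace(-0.03, 0.03, 121))
# The recovered stretch is about -0.01, matching the 1 percent compression.
```

In practice, comparisons like this are run on repeated ambient-noise correlations or distant-earthquake arrivals at many stations and depths, which is what allows a before-and-after picture of the crust to be assembled.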

They found that the shallow crust, extending about 10 km into the Earth, recovered over the course of a few months. In contrast, the mid-crust didn’t experience immediate damage, but instead changed over the same timescale on which the shallow crust recovered.

“What was surprising is that the healing in the shallow crust was so quick, and then you have this complementary accumulation occurring, not at the time of the earthquake, but instead over the post-seismic phase,” says Bryan.

Balancing the energy budget

Understanding how recovery plays out at different depths is crucial for determining how energy is spent during different parts of the seismic process: the release of energy as waves, the creation of new fractures, and the elastic storage of energy in the surrounding rock. Together, these terms make up what is known as the energy budget, a useful accounting for understanding how damage accumulates and recovers over time.

What remains unclear is the timescales at which deeper depths recover, if at all. The paper presents two possible scenarios to explain why that might be: one in which the deep crust recovers over a much longer timescale than they observed, or one where it never recovers at all.

“Either of those are not what we expected,” says Frank. “And both of them are interesting.”

Further research will require more observations to build out a more detailed picture to see at what depth the change becomes more pronounced. In addition, Bryan wants to look at other areas, such as more mature faults that experience higher levels of seismic activity, to see if it changes the results.

“We’ll let you know in 1,000 years whether it’s recovered,” says Bryan.


New MIT initiative seeks to transform rare brain disorders research

The Rare Brain Disorders Nexus aims to accelerate the development of novel therapies for a spectrum of uncommon brain diseases.


More than 300 million people worldwide are living with rare disorders — many of which have a genetic cause and affect the brain and nervous system — yet the vast majority of these conditions lack an approved therapy. Because each rare disorder affects fewer than 65 out of every 100,000 people, studying these disorders and creating new treatments for them is especially challenging.

Thanks to a generous philanthropic gift from Ana Méndez ’91 and Rajeev Jayavant ’86, EE ’88, SM ’88, MIT is now poised to fill gaps in this research landscape. By establishing the Rare Brain Disorders Nexus — or RareNet — at MIT’s McGovern Institute for Brain Research, the alumni aim to convene leaders in neuroscience research, clinical medicine, patient advocacy, and industry to streamline the lab-to-clinic pipeline for rare brain disorder treatments.

“Ana and Rajeev’s commitment to MIT will form crucial partnerships to propel the translation of scientific discoveries into promising therapeutics and expand the Institute’s impact on the rare brain disorders community,” says MIT President Sally Kornbluth. “We are deeply grateful for their pivotal role in advancing such critical science and bringing attention to conditions that have long been overlooked.”

Building new coalitions

Several hurdles have slowed the lab-to-clinic pipeline for rare brain disorder research. It is difficult to secure a sufficient number of patients per study, and current research efforts are fragmented, since each study typically focuses on a single disorder (there are more than 7,000 known rare disorders, according to the World Health Organization). Pharmaceutical companies are often reluctant to invest in emerging treatments due to a limited market size and the high costs associated with preparing drugs for commercialization.

Méndez and Jayavant envision that RareNet will finally break down these barriers. “Our hope is that RareNet will allow leaders in the field to come together under a shared framework and ignite scientific breakthroughs across multiple conditions. A discovery for one rare brain disorder could unlock new insights that are relevant to another,” says Jayavant. “By congregating the best minds in the field, we are confident that MIT will create the right scientific climate to produce drug candidates that may benefit a spectrum of uncommon conditions.”

Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor in Neuroscience and associate director of the McGovern Institute, will serve as RareNet’s inaugural faculty director. Feng holds a strong record of advancing studies on therapies for neurodevelopmental disorders, including autism spectrum disorders, Williams syndrome, and uncommon forms of epilepsy. His team’s gene therapy for Phelan-McDermid syndrome, a rare and profound autism spectrum disorder, has been licensed to Jaguar Gene Therapy and is currently undergoing clinical trials. “RareNet pioneers a unique model for biomedical research — one that is reimagining the role academia can play in developing therapeutics,” says Feng.

RareNet plans to deploy two major initiatives: a global consortium and a therapeutic pipeline accelerator. The consortium will form an international network of researchers, clinicians, and patient groups from the outset. It seeks to connect siloed research efforts, secure more patient samples, promote data sharing, and drive a strong sense of trust and goal alignment across the RareNet community. Partnerships within the consortium will support the aim of the therapeutic pipeline accelerator: to de-risk early lab discoveries and expedite their translation to clinic. By fostering more targeted collaborations — especially between academia and industry — the accelerator will prepare potential treatments for clinical use as efficiently as possible.

MIT labs are focusing on four uncommon conditions in the first wave of RareNet projects: Rett syndrome, prion disease, disorders linked to SYNGAP1 mutations, and Sturge-Weber syndrome. The teams are working to develop novel therapies that can slow, halt, or reverse dysfunctions in the brain and nervous system.

These efforts will build new bridges to connect key stakeholders across the rare brain disorders community and disrupt conventional research approaches. “Rajeev and I are motivated to seed powerful collaborations between MIT researchers, clinicians, patients, and industry,” says Méndez. “Guoping Feng clearly understands our goal to create an environment where foundational studies can thrive and seamlessly move toward clinical impact.”

“Patient and caregiver experiences, and our foreseeable impact on their lives, will guide us and remain at the forefront of our work,” Feng adds. “For far too long has the rare brain disorders community been deprived of life-changing treatments — and, importantly, hope. RareNet gives us the opportunity to transform how we study these conditions, and to do so at a moment when it’s needed more than ever.”


Geologists discover the first evidence of 4.5-billion-year-old “proto Earth”

Materials from ancient rocks could reveal conditions in the early solar system that shaped the early Earth and other planets.


Scientists at MIT and elsewhere have discovered extremely rare remnants of “proto Earth,” which formed about 4.5 billion years ago, before a colossal collision irreversibly altered the primitive planet’s composition and produced the Earth as we know today. Their findings, reported today in the journal Nature Geoscience, will help scientists piece together the primordial starting ingredients that forged the early Earth and the rest of the solar system.

Billions of years ago, the early solar system was a swirling disk of gas and dust that eventually clumped and accumulated to form the earliest meteorites, which in turn merged to form the proto Earth and its neighboring planets.

In this earliest phase, Earth was likely rocky and bubbling with lava. Then, less than 100 million years later, a Mars-sized body slammed into the infant planet in a singular “giant impact” event that completely scrambled and melted the planet’s interior, effectively resetting its chemistry. Whatever original material the proto Earth was made from was thought to have been altogether transformed.

But the MIT team’s findings suggest otherwise. The researchers have identified a chemical signature in ancient rocks that is distinct from most other materials found in the Earth today. The signature takes the form of a subtle imbalance in potassium isotopes, discovered in samples of very old and very deep rocks. The team determined that the potassium imbalance could not have been produced by any previous large impacts or by geological processes operating within the Earth today.

The most likely explanation for the samples’ chemical composition is that they must be leftover material from the proto Earth that somehow remained unchanged, even as most of the early planet was impacted and transformed.

“This is maybe the first direct evidence that we’ve preserved the proto Earth materials,” says Nicole Nie, the Paul M. Cook Career Development Assistant Professor of Earth and Planetary Sciences at MIT. “We see a piece of the very ancient Earth, even before the giant impact. This is amazing because we would expect this very early signature to be slowly erased through Earth’s evolution.”

The study’s other authors include Da Wang of Chengdu University of Technology in China, Steven Shirey and Richard Carlson of the Carnegie Institution for Science in Washington, Bradley Peters of ETH Zürich in Switzerland, and James Day of Scripps Institution of Oceanography in California.

A curious anomaly

In 2023, Nie and her colleagues analyzed many of the major meteorites that have been collected from sites around the world and carefully studied. Before impacting the Earth, these meteorites likely formed at various times and locations throughout the solar system, and therefore represent the solar system’s changing conditions over time. When the researchers compared the chemical compositions of these meteorite samples to Earth, they identified among them a “potassium isotopic anomaly.”

Isotopes are slightly different versions of an element that have the same number of protons but a different number of neutrons. Potassium exists in three naturally occurring isotopes, with mass numbers (protons plus neutrons) of 39, 40, and 41. Wherever potassium has been found on Earth, it exists in a characteristic combination of isotopes, with potassium-39 and potassium-41 being overwhelmingly dominant. Potassium-40 is present, but at a vanishingly small percentage in comparison.
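As a back-of-the-envelope illustration of how lopsided this mix is, the split can be tallied from standard reference abundance values (these are textbook numbers, not measurements from the study):

```python
# Approximate natural abundances of potassium's three isotopes (atom fraction).
# Standard reference values; not data from the study described above.
abundances = {
    "K-39": 0.93258,
    "K-40": 0.000117,  # vanishingly small compared to the other two
    "K-41": 0.06730,
}

total = sum(abundances.values())
print(f"total fraction: {total:.5f}")  # very close to 1

# Potassium-40 as a share of all potassium, in parts per million
k40_ppm = abundances["K-40"] / total * 1e6
print(f"K-40 abundance: ~{k40_ppm:.0f} ppm")
```

At roughly a hundred parts per million, potassium-40 is the needle in the haystack; the deficit the team measured is a small shift in that already tiny number.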

Nie and her colleagues discovered that the meteorites they studied showed potassium isotope ratios that differed from those of most materials on Earth. This suggested that any material exhibiting a similar anomaly likely predates Earth’s present composition. In other words, a potassium imbalance would be a strong sign of material from the proto Earth, before the giant impact reset the planet’s chemical composition.

“In that work, we found that different meteorites have different potassium isotopic signatures, and that means potassium can be used as a tracer of Earth’s building blocks,” Nie explains.

“Built different”

In the current study, the team looked for signs of potassium anomalies not in meteorites, but within the Earth. Their samples include rocks, in powder form, from Greenland and Canada, where some of the oldest preserved rocks are found. They also analyzed lava deposits collected from Hawaii, where volcanoes have brought up some of the Earth’s earliest, deepest materials from the mantle (the planet’s thickest layer of rock that separates the crust from the core).

“If this potassium signature is preserved, we would want to look for it in deep time and deep Earth,” Nie says.

The team first dissolved the various powder samples in acid, then carefully isolated any potassium from the rest of the sample and used a special mass spectrometer to measure the ratio of each of potassium’s three isotopes. Remarkably, they identified in the samples an isotopic signature that was different from what’s been found in most materials on Earth.

Specifically, they identified a deficit in the potassium-40 isotope. In most materials on Earth, this isotope is already an insignificant fraction compared to potassium’s other two isotopes. But the researchers were able to discern that their samples contained an even smaller percentage of potassium-40. Detecting this tiny deficit is like spotting a single grain of brown sand in a bucket, rather than a scoop, of yellow sand.

The team found that, indeed, the samples exhibited the potassium-40 deficit, showing that the materials “were built different,” says Nie, compared to most of what we see on Earth today.

But could the samples be rare remnants of the proto Earth? To answer this, the researchers supposed that they were. They reasoned that if the proto Earth were originally made from such potassium-40-deficient materials, then most of this material would have undergone chemical changes — from the giant impact and subsequent, smaller meteorite impacts — that ultimately resulted in the materials with more potassium-40 that we see today.

The team used compositional data from every known meteorite and carried out simulations of how the samples’ potassium-40 deficit would change following impacts by these meteorites and by the giant impact. They also simulated geological processes that the Earth experienced over time, such as the heating and mixing of the mantle. In the end, their simulations produced a composition with a slightly higher fraction of potassium-40 compared to the samples from Canada, Greenland, and Hawaii. More importantly, the simulated compositions matched those of most modern-day materials.
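The core mixing logic behind simulations like these can be sketched in a few lines. Everything below — the function, the 65 ppm starting deficit, the mixing fractions — is a hypothetical placeholder chosen for illustration, not a value from the study:

```python
# Sketch of a two-component isotopic mixing calculation, in the spirit of the
# simulations described above. All numbers are illustrative placeholders.
# Deficits are expressed in parts per million (ppm) relative to the modern
# terrestrial potassium-40 fraction.

def mix_deficit(deficit_a_ppm: float, deficit_b_ppm: float, fraction_b: float) -> float:
    """K-40 deficit of a mixture of reservoir A with a mass fraction of material B.

    Assumes both reservoirs carry similar total potassium concentrations,
    so deficits mix linearly by mass fraction.
    """
    return (1 - fraction_b) * deficit_a_ppm + fraction_b * deficit_b_ppm

# Hypothetical proto-Earth reservoir with a 65 ppm potassium-40 deficit,
# progressively overprinted by impactor material with no deficit.
proto_earth_ppm = 65.0
impactor_ppm = 0.0
for f in (0.0, 0.5, 0.9, 0.99):
    mixed = mix_deficit(proto_earth_ppm, impactor_ppm, f)
    print(f"{f:.0%} impactor material -> {mixed:.1f} ppm deficit remaining")
```

The point of the sketch is the trend: every added dose of post-impact material dilutes the anomaly toward the modern composition, which is why a sample that still carries the full deficit reads as unmixed leftover proto-Earth material.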

The work suggests that materials with a potassium-40 deficit are likely leftover original material from the proto Earth.

Curiously, the samples’ signature isn’t a precise match with any meteorite in geologists’ collections. While the meteorites in the team’s previous work showed potassium anomalies, none exactly matches the deficit seen in the proto Earth samples. This means that whatever meteorites and materials originally formed the proto Earth have yet to be discovered.

“Scientists have been trying to understand Earth’s original chemical composition by combining the compositions of different groups of meteorites,” Nie says. “But our study shows that the current meteorite inventory is not complete, and there is much more to learn about where our planet came from.”

This work was supported, in part, by NASA and MIT.


Gene-Wei Li named associate head of the Department of Biology

The associate professor aims to help the department continue to be a worldwide leader in education, biological sciences, and fundamental research.


Associate Professor Gene-Wei Li has accepted the position of associate head of the MIT Department of Biology, starting in the 2025-26 academic year. 

Li, who has been a member of the department since 2015, brings a history of departmental leadership, service, and research and teaching excellence to his new role. He has received many awards, including a Sloan Research Fellowship (2016), an NSF CAREER Award (2019), Pew and Searle scholarships, and MIT’s Committed to Caring Award (2020). In 2024, he was appointed as a Howard Hughes Medical Institute (HHMI) Investigator.

“I am grateful to Gene-Wei for joining the leadership team,” says department head Amy E. Keating, the Jay A. Stein (1968) Professor of Biology and professor of biological engineering. “Gene will be a key leader in our educational initiatives, both digital and residential, and will be a critical part of keeping our department strong and forward-looking.” 

A great environment to do science

Li says he was inspired to take on the role in part because of the way MIT Biology facilitates career development during every stage — from undergraduate and graduate students to postdocs and junior faculty members, as he was when he started in the department as an assistant professor just 10 years ago. 

“I think we all benefit a lot from our environment, and I think this is a great environment to do science and educate people, and to create a new generation of scientists,” he says. “I want us to keep doing well, and I’m glad to have the opportunity to contribute to this effort.” 

As part of his portfolio as associate department head, Li will continue in the role of scientific director of the Koch Biology Building, Building 68. Over the last year, the previous scientific director, Stephen Bell, Uncas and Helen Whitaker Professor of Biology and HHMI Investigator, has continued to provide support, ensuring a steady transition as Li ramps up his new duties. The building, which opened its doors in 1994, is in need of a slate of updates and repairs.

Although Li will be managing more administrative duties, he has provided a stable foundation for his lab to continue its interdisciplinary work on the quantitative biology of gene expression, parsing the mechanisms by which cells control the levels of their proteins and how this enables cells to perform their functions. His recent work includes developing a method that leverages the AI tool AlphaFold to predict whether protein fragments can recapitulate the native interactions of their full-length counterparts.  

“I’m still very heavily involved, and we have a lab environment where everyone helps each other. It’s a team, and so that helps elevate everyone,” he says. “It’s the same with the whole building: nobody is working by themselves, so the science and administrative parts come together really nicely.” 

Teaching for the future

Li is considering how the department can continue to be a global leader in biological sciences while navigating the uncertainty surrounding academia and funding, as well as the likelihood of reduced staff support and tightening budgets.

“The question is: How do you maintain excellence?” Li says. “That involves recruiting great people and giving them the resources that they need, and that’s going to be a priority within the limitations that we have to work with.” 

Li will also be serving as faculty advisor for the MIT Biology Teaching and Learning Group, headed by Mary Ellen Wiltrout, and will serve on the Department of Biology Digital Learning Committee and the new Open Learning Biology Advisory Committee. Li will serve in the latter role in order to represent the department and work with new faculty member and HHMI Investigator Ron Vale on Institute-level online learning initiatives. Li will also chair the Biology Academic Planning Committee, which will help develop a longer-term outlook on faculty teaching assignments and course offerings. 

Li is looking forward to hearing from faculty and students about the way the Institute teaches, and how it could be improved, both for the students on campus and for the online learners from across the world. 

“There are a lot of things that are changing; what are the core fundamentals that the students need to know, what should we teach them, and how should we teach them?” 

Although the commitment to teaching remains unchanged, there may be big transitions on the horizon. With two young children in school, Li is all too aware that the way that students learn today is very different from what he grew up with, and also very different from how students were learning just five or 10 years ago — writing essays on a computer, researching online, using AI tools, and absorbing information from media like short-form YouTube videos. 

“There’s a lot of appeal to a shorter format, but it’s very different from the lecture-based teaching style that has worked for a long time,” Li says. “I think a challenge we should and will face is figuring out the best way to communicate the core fundamentals, and adapting our teaching styles to the next generation of students.” 

Ultimately, Li is excited about balancing his research goals along with joining the department’s leadership team, and knows he can look to his fellow researchers in Building 68 and beyond for support.

“I’m privileged to be working with a great group of colleagues who are all invested in these efforts,” Li says. “Different people may have different ways of doing things, but we all share the same mission.” 


Immune-informed brain aging research offers new treatment possibilities, speakers say

Speakers at MIT’s Aging Brain Initiative symposium described how immune system factors during aging contribute to Alzheimer’s, Parkinson’s and other conditions. The field is leveraging that knowledge to develop new therapies.


Understanding how interactions between the central nervous system and the immune system contribute to problems of aging, including Alzheimer’s disease, Parkinson’s disease, arthritis, and more, can generate new leads for therapeutic development, speakers said at MIT’s symposium “The Neuro-Immune Axis and the Aging Brain” on Sept. 18.

“The past decade has brought rapid progress in our understanding of how adaptive and innate immune systems impact the pathogenesis of neurodegenerative disorders,” said Picower Professor Li-Huei Tsai, director of The Picower Institute for Learning and Memory and MIT’s Aging Brain Initiative (ABI), in her introduction to the event, which more than 450 people registered to attend. “Together, today’s speakers will trace how the neuro-immune axis shapes brain health and disease … Their work converges on the promise of immunology-informed therapies to slow or prevent neurodegeneration and age-related cognitive decline.”

For instance, keynote speaker Michal Schwartz of the Weizmann Institute in Israel described her decades of pioneering work to understand the neuro-immune “ecosystem.” Immune cells, she said, help the brain heal, and support many of its functions, including its “plasticity,” the ability it has to adapt to and incorporate new information. But Schwartz’s lab also found that an immune signaling cascade can arise with aging that undermines cognitive function. She has leveraged that insight to investigate and develop corrective immunotherapies that improve the brain’s immune response to Alzheimer’s both by rejuvenating the brain’s microglia immune cells and bringing in the help of peripheral immune cells called macrophages. Schwartz has brought the potential therapy to market as the chief science officer of ImmunoBrain, a company testing it in a clinical trial.

In her presentation, Tsai noted recent work from her lab and that of computer science professor and fellow ABI member Manolis Kellis showing that many of the genes associated with Alzheimer’s disease are most strongly expressed in microglia, giving it an expression profile more similar to autoimmune disorders than to many psychiatric ones (where expression of disease-associated genes typically is highest in neurons). The study showed that microglia become “exhausted” over the course of disease progression, losing their cellular identity and becoming harmfully inflammatory.

“Genetic risk, epigenomic instability, and microglia exhaustion really play a central role in Alzheimer’s disease,” Tsai said, adding that her lab is now also looking into how immune T cells, recruited by microglia, may also contribute to Alzheimer’s disease progression.

The body and the brain

The neuro-immune “axis” connects not only the nervous and immune systems, but also extends between the whole body and the brain, with numerous implications for aging. Several speakers focused on the key conduit: the vagus nerve, which runs from the brain to the body’s major organs.

For instance, Sara Prescott, an investigator in the Picower Institute and an MIT assistant professor of biology, presented evidence her lab is amassing that the brain’s communication via vagus nerve terminals in the body’s airways is crucial for managing the body’s defense of respiratory tissues. Given that we inhale about 20,000 times a day, our airways are exposed to many environmental challenges, Prescott noted, and her lab and others are finding that the nervous system interacts directly with immune pathways to mount physiological responses. But vagal reflexes decline in aging, she noted, increasing susceptibility to infection, and so her lab is now working in mouse models to study airway-to-brain neurons throughout the lifespan to better understand how they change with aging.

In his talk, Caltech Professor Sarkis Mazmanian focused on work in his lab linking the gut microbiome to Parkinson’s disease (PD), for instance by promoting alpha-synuclein protein pathology and motor problems in mouse models. His lab hypothesizes that the microbiome can nucleate alpha-synuclein in the gut via a bacterial amyloid protein that may subsequently promote pathology in the brain, potentially via the vagus nerve. Based on its studies, the lab has developed two interventions. One is giving alpha-synuclein overexpressing mice a high-fiber diet to increase short-chain fatty acids in their gut, which actually modulates the activity of microglia in the brain. The high-fiber diet helps relieve motor dysfunction, corrects microglia activity, and reduces protein pathology, he showed. Another is a drug to disrupt the bacterial amyloid in the gut. It prevents alpha synuclein formation in the mouse brain and ameliorates PD-like symptoms. These results are pending publication.

Meanwhile, Kevin Tracey, professor at Hofstra University and Northwell Health, took listeners on a journey up and down the vagus nerve to the spleen, describing how impulses in the nerve regulate immune system emissions of signaling molecules, or “cytokines.” Too great a surge can become harmful, for instance causing the autoimmune disorder rheumatoid arthritis. Tracey described how a newly U.S. Food and Drug Administration-approved pill-sized neck implant to stimulate the vagus nerve helps patients with severe forms of the disease without suppressing their immune system.

The brain’s border

Other speakers discussed opportunities for understanding neuro-immune interactions in aging and disease at the “borders” where the brain’s and the body’s immune systems meet. These areas include the meninges that surround the brain, the choroid plexus (proximate to the ventricles, or open spaces, within the brain), and the interface between brain cells and the circulatory system.

For instance, taking a cue from studies showing that circadian disruptions are a risk factor for Alzheimer’s disease, Harvard Medical School Professor Beth Stevens of Boston Children’s Hospital described new research in her lab that examined how brain immune cells may function differently around the day-night cycle. The project, led by newly minted PhD Helena Barr, found that “border-associated macrophages” — long-lived immune cells residing in the brain’s borders — exhibited circadian rhythms in gene expression and function. Stevens described how these cells are tuned by the circadian clock to “eat” more during the rest phase, a process that may help remove material draining from the brain, including Alzheimer’s disease-associated peptides such as amyloid-beta. So, Stevens hypothesizes, circadian disruptions, for example due to aging or night-shift work, may contribute to disease onset by disrupting the delicate balance in immune-mediated “clean-up” of the brain and its borders.

Following Stevens at the podium, Washington University Professor Marco Colonna traced how various kinds of macrophages, including border macrophages and microglia, develop from the embryonic stage. He described the different gene-expression programs that guide their differentiation into one type or another. One gene he highlighted, for instance, is necessary for border macrophages along the brain’s vasculature to help regulate the waste-clearing cerebrospinal fluid (CSF) flow that Stevens also discussed. Knocking out the gene also impairs blood flow. Importantly, his lab has found that versions of the gene may be somewhat protective against Alzheimer’s, and that regulating expression of the gene could be a therapeutic strategy.

Colonna’s WashU colleague Jonathan Kipnis (a former student of Schwartz) also discussed macrophages that are associated with the particular border between brain tissue and the plumbing alongside the vasculature that carries CSF. The macrophages, his lab showed in 2022, actively govern the flow of CSF. He showed that removing the macrophages let Alzheimer’s proteins accumulate in mice. His lab is continuing to investigate ways in which these specific border macrophages may play roles in disease. In separate studies, he is also looking at how the skull’s bone marrow contributes to the population of immune cells in the brain and may play a role in neurodegeneration.

For all the talk of distant organs and the brain’s borders, neurons themselves were never far from the discussion. Harvard Medical School Professor Isaac Chiu gave them their direct due in a talk focusing on how they participate in their own immune defense, for instance by directly sensing pathogens and giving off inflammation signals upon cell death. He discussed a key molecule in that latter process, which is expressed among neurons all over the brain.

Whether they were looking within the brain, at its border, or throughout the body, speakers showed that age-related nervous system diseases can be better understood, and possibly better treated, by accounting not only for nerve cells but also for their immune system partners.


Riccardo Comin, two MIT alumni named 2025 Moore Experimental Physics Investigators

MIT physicist seeks to use award to study magnetoelectric multiferroics that could lead to energy-efficient storage devices.


MIT associate professor of physics Riccardo Comin has been selected as a 2025 Experimental Physics Investigator by the Gordon and Betty Moore Foundation. Two MIT physics alumni — Gyu-Boong Jo PhD ’10 of Rice University and Ben Jones PhD ’15 of the University of Texas at Arlington — were also among this year’s cohort of 22 honorees.

The prestigious Experimental Physics Investigators (EPI) Initiative recognizes mid-career scientists advancing the frontiers of experimental physics. Each award provides $1.3 million over five years to accelerate breakthroughs and strengthen the experimental physics community.

At MIT, Comin investigates magnetoelectric multiferroics by engineering interfaces between two-dimensional materials and three-dimensional oxide thin films. His research aims to overcome long-standing limitations in spin-charge coupling by moving beyond epitaxial constraints, enabling new interfacial phases and coupling mechanisms. In these systems, Comin’s team explores the coexistence and proximity of magnetic and ferroelectric order, with a focus on achieving strong magnetoelectric coupling. This approach opens new pathways for designing tunable multiferroic systems unconstrained by traditional synthesis methods.

Comin’s research expands the frontier of multiferroics by demonstrating stacking-controlled magnetoelectric coupling at 2D–3D interfaces. This approach enables exploration of fundamental physics in a versatile materials platform and opens new possibilities for spintronics, sensing, and data storage. By removing constraints of epitaxial growth, Comin’s work lays the foundation for microelectronic and spintronic devices with novel functionalities driven by interfacial control of spin and polarization.

Comin’s project, Interfacial MAGnetoElectrics (I-MAGinE), aims to study a new class of artificial magnetoelectric multiferroics at the interfaces between ferroic materials from 2D van der Waals systems and 3D oxide thin films. The team aims to identify and understand novel magnetoelectric effects to demonstrate the viability of stacking-controlled interfacial magnetoelectric coupling. This research could lead to significant contributions in multiferroics, and could pave the way for innovative, energy-efficient storage devices.

“This research has the potential to make significant contributions to the field of multiferroics by demonstrating the viability of stacking-controlled interfacial magnetoelectric coupling,” according to Comin’s proposal. “The findings could pave the way for future applications in spintronics, data storage, and sensing. It offers a significant opportunity to explore fundamental physics questions in a novel materials platform, while laying the ground for future technological applications, including microelectronic and spintronic devices with new functionalities.”

Comin’s group has extensive experience in researching 2D and 3D ferroic materials, including electronically ordered oxide thin films as well as ultrathin van der Waals magnets, ferroelectrics, and multiferroics. The lab is equipped with state-of-the-art tools for material synthesis, including bulk crystal growth of van der Waals materials and pulsed laser deposition, along with comprehensive fabrication and characterization capabilities. Its expertise in magneto-optical probes and advanced magnetic X-ray techniques enables in-depth studies of electronic and magnetic structures, and should contribute significantly to understanding spin-charge coupling in magnetochiral materials.

The coexistence of ferroelectricity and ferromagnetism in a single material, known as multiferroicity, is rare, and strong spin-charge coupling is even rarer due to fundamental chemical and electronic structure incompatibilities.

The few known bulk multiferroics with strong magnetoelectric coupling generally rely on inversion symmetry-breaking spin arrangements, which only emerge at low temperatures, limiting practical applications. While interfacial magnetoelectric multiferroics offer an alternative, achieving efficient spin-charge coupling often requires stringent conditions like epitaxial growth and lattice matching, which limit material combinations. This research proposes to overcome these limitations by using non-epitaxial interfaces of 2D van der Waals materials and 3D oxide thin films.

Unique features of this approach include leveraging the versatility of 2D ferroics for seamless transfer onto any substrate, eliminating lattice matching requirements, and exploring new classes of interfacial magnetoelectric effects unconstrained by traditional thin-film synthesis limitations.

Launched in 2018, the Moore Foundation’s EPI Initiative cultivates collaborative research environments and provides research support to promote the discovery of new ideas and emphasize community building.

“We have seen numerous new connections form and new research directions pursued by both individuals and groups based on conversations at these gatherings,” says Catherine Mader, program officer for the initiative.

The Gordon and Betty Moore Foundation was established to create positive outcomes for future generations. In pursuit of that vision, it advances scientific discovery, environmental conservation, and the special character of the San Francisco Bay Area.


MIT physicists improve the precision of atomic clocks

A new method turns down quantum noise that obscures the “ticking” of atoms, and could enable stable, transportable atomic clocks.


Every time you check the time on your phone, make an online transaction, or use a navigation app, you are depending on the precision of atomic clocks.

An atomic clock keeps time by relying on the “ticks” of atoms as they naturally oscillate at rock-steady frequencies. Today’s atomic clocks operate by tracking cesium atoms, which tick nearly 10 billion times per second. Each of those ticks is precisely tracked using lasers that oscillate in sync, at microwave frequencies.

Scientists are developing next-generation atomic clocks that rely on even faster-ticking atoms such as ytterbium, which can be tracked with lasers at higher, optical frequencies. If they can be kept stable, optical atomic clocks could track even finer intervals of time, up to 100 trillion times per second.

Now, MIT physicists have found a way to improve the stability of optical atomic clocks, by reducing “quantum noise” — a fundamental measurement limitation due to the effects of quantum mechanics, which obscures the atoms’ pure oscillations. In addition, the team discovered that an effect of a clock’s laser on the atoms, previously considered irrelevant, can be used to further stabilize the laser.

The researchers developed a method to harness a laser-induced “global phase” in ytterbium atoms, and have boosted this effect with a quantum-amplification technique. The new approach doubles the precision of an optical atomic clock, enabling it to discern twice as many ticks per second compared to the same setup without the new method. What’s more, they anticipate that the precision of the method should increase steadily with the number of atoms in an atomic clock.

The researchers detail the method, which they call global phase spectroscopy, in a study appearing today in the journal Nature. They envision that the clock-stabilizing technique could one day enable portable optical atomic clocks that can be transported to various locations to measure all manner of phenomena.

“With these clocks, people are trying to detect dark matter and dark energy, and test whether there really are just four fundamental forces, and even to see if these clocks can predict earthquakes,” says study author Vladan Vuletić, the Lester Wolfe Professor of Physics at MIT. “We think our method can help make these clocks transportable and deployable to where they’re needed.”

The paper’s co-authors are Leon Zaporski, Qi Liu, Gustavo Velez, Matthew Radzihovsky, Zeyang Li, Simone Colombo, and Edwin Pedrozo-Peñafiel, who are members of the MIT-Harvard Center for Ultracold Atoms and the MIT Research Laboratory of Electronics.

Ticking time

In 2020, Vuletić and his colleagues demonstrated that an atomic clock could be made more precise by quantumly entangling the clock’s atoms. Quantum entanglement is a phenomenon by which particles can be made to behave in a collective, highly correlated manner. When atoms are quantumly entangled, they redistribute any noise, or uncertainty in measuring the atoms’ oscillations, in a way that reveals a clearer, more measurable “tick.”

In their previous work, the team induced quantum entanglement among several hundred ytterbium atoms that they first cooled and trapped in a cavity formed by two curved mirrors. They sent a laser into the cavity, which bounced thousands of times between the mirrors, interacting with the atoms and causing the ensemble to entangle. They were able to show that quantum entanglement could improve the precision of existing atomic clocks by essentially reducing the noise, or uncertainty between the laser’s and atoms’ tick rates.

At the time, however, they were limited by the ticking instability of the clock’s laser. In 2022, the same team devised a way to further amplify the difference between the laser’s and atoms’ tick rates with “time reversal” — a trick that relies on entangling and de-entangling the atoms to boost the signal acquired in between.

However, in that work the team was still using traditional microwaves, which oscillate at much lower frequencies than the optical frequency standards ytterbium atoms can provide. It was as if they had painstakingly lifted a film of dust off a painting, only to then photograph it with a low-resolution camera.

“When you have atoms that tick 100 trillion times per second, that’s 10,000 times faster than the frequency of microwaves,” Vuletić says. “We didn’t know at the time how to apply these methods to higher-frequency optical clocks that are much harder to keep stable.”

About phase

In their new study, the team found a way to apply their previously developed time-reversal approach to optical atomic clocks. As before, they entangled the ytterbium atoms, then sent in a laser that oscillates near the optical frequency of the entangled atoms.

“The laser ultimately inherits the ticking of the atoms,” says first author Zaporski. “But in order for this inheritance to hold for a long time, the laser has to be quite stable.”

The researchers found they were able to improve the stability of an optical atomic clock by taking advantage of a phenomenon that scientists had assumed was inconsequential to its operation. They realized that when light is sent through entangled atoms, the interaction can cause the atoms to jump up in energy, then settle back down into their original energy state while still carrying a memory of the round trip.

“One might think we’ve done nothing,” Vuletić says. “You get this global phase of the atoms, which is usually considered irrelevant. But this global phase contains information about the laser frequency.”

In other words, they realized that the laser was inducing a measurable change in the atoms, despite bringing them back to the original energy state, and that the magnitude of this change depends on the laser’s frequency.
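In symbols (our shorthand, not the paper's notation), the ensemble returns to its starting energy state but multiplied by an overall phase factor whose size tracks the laser-atom detuning:

```latex
|\psi\rangle \;\longrightarrow\; e^{i\phi}\,|\psi\rangle ,
\qquad
\phi \;\propto\; \delta = \omega_{\text{laser}} - \omega_{\text{atom}}
```

On its own a global phase is unobservable, which is why it was long dismissed as irrelevant; the team's entanglement-based protocol is what turns this phase into a measurable signal.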

“Ultimately, we are looking for the difference of laser frequency and the atomic transition frequency,” explains co-author Liu. “When that difference is small, it gets drowned by quantum noise. Our method amplifies this difference above this quantum noise.”

In their experiments, the team applied this new approach and found that through entanglement they were able to double the precision of their optical atomic clock.

“We saw that we can now resolve nearly twice as small a difference in the optical frequency, or the clock ticking frequency, without running into the quantum noise limit,” Zaporski says. “Although it’s a hard problem in general to run atomic clocks, the technical benefits of our method will make it easier, and we think this can enable stable, transportable atomic clocks.”

This research was supported, in part, by the U.S. Office of Naval Research, the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, the U.S. Department of Energy, the U.S. Office of Science, the National Quantum Information Science Research Centers, and the Quantum Systems Accelerator.


Engineered “natural killer” cells could help fight cancer

A new study identifies genetic modifications that make these immune cells, known as CAR-NK cells, more effective at destroying cancer cells.


One of the newest weapons that scientists have developed against cancer is a type of engineered immune cell known as CAR-NK (natural killer) cells. Similar to CAR-T cells, these cells can be programmed to attack cancer cells.

MIT and Harvard Medical School researchers have now come up with a new way to engineer CAR-NK cells that makes them much less likely to be rejected by the patient’s immune system, which is a common drawback of this type of treatment.

The new advance may also make it easier to develop “off-the-shelf” CAR-NK cells that could be given to patients as soon as they are diagnosed. Traditional approaches to engineering CAR-NK or CAR-T cells usually take several weeks.

“This enables us to do one-step engineering of CAR-NK cells that can avoid rejection by host T cells and other immune cells. And, they kill cancer cells better and they’re safer,” says Jianzhu Chen, an MIT professor of biology, a member of the Koch Institute for Integrative Cancer Research, and one of the senior authors of the study.

In a study of mice with humanized immune systems, the researchers showed that these CAR-NK cells could destroy most cancer cells while evading the host immune system.

Rizwan Romee, an associate professor of medicine at Harvard Medical School and Dana-Farber Cancer Institute, is also a senior author of the paper, which appears today in Nature Communications. The paper’s lead author is Fuguo Liu, a postdoc at the Koch Institute and a research fellow at Dana-Farber.

Evading the immune system

NK cells are a critical part of the body’s natural immune defenses, and their primary responsibility is to locate and kill cancer cells and virus-infected cells. One of their cell-killing strategies, also used by T cells, is a process called degranulation. Through this process, immune cells release a protein called perforin, which can poke holes in another cell to induce cell death.

To create CAR-NK cells to treat cancer patients, doctors first take a blood sample from the patient. NK cells are isolated from the sample and engineered to express a protein called a chimeric antigen receptor (CAR), which can be designed to target specific proteins found on cancer cells.

Then, the cells spend several weeks proliferating until there are enough to transfuse back into the patient. A similar approach is also used to create CAR-T cells. Several CAR-T cell therapies have been approved to treat blood cancers such as lymphoma and leukemia, but CAR-NK treatments are still in clinical trials.

Because it takes so long to grow a population of engineered cells that can be infused into the patient, and those cells may not be as viable as cells that came from a healthy person, researchers are exploring an alternative approach: using NK cells from a healthy donor.

Such cells could be grown in large quantities and would be ready whenever they were needed. However, the drawback to these cells is that the recipient’s immune system may see them as foreign and attack them before they can start killing cancer cells.

In the new study, the MIT team set out to find a way to help NK cells “hide” from a patient’s immune system. Through studies of immune cell interactions, they showed that NK cells could evade a host T-cell response if they did not carry surface proteins called HLA class 1 proteins. These proteins, usually expressed on NK cell surfaces, can trigger T cells to attack if the immune system doesn’t recognize them as “self.”

To take advantage of this, the researchers engineered the cells to express a sequence of siRNA (short interfering RNA) that interferes with the genes for HLA class 1. They also delivered the CAR gene, as well as the gene for either PD-L1 or single-chain HLA-E (SCE). PD-L1 and SCE are proteins that make NK cells more effective by turning up genes that are involved in killing cancer cells.

All of these genes can be carried on a single piece of DNA, known as a construct, making it simple to transform donor NK cells into immune-evasive CAR-NK cells. The researchers used this construct to create CAR-NK cells targeting a protein called CD19, which is often found on cancerous B cells in lymphoma patients.

NK cells unleashed

The researchers tested these CAR-NK cells in mice with a human-like immune system. These mice were also injected with lymphoma cells.

Mice that received CAR-NK cells with the new construct maintained the NK cell population for at least three weeks, and the NK cells were able to nearly eliminate cancer in those mice. In mice that received either NK cells with no genetic modifications or NK cells with only the CAR gene, the host immune cells attacked the donor NK cells. In these mice, the NK cells died out within two weeks, and the cancer spread unchecked.

The researchers also found that these engineered CAR-NK cells were much less likely to induce cytokine release syndrome — a common side effect of immunotherapy treatments, which can cause life-threatening complications.

Because of CAR-NK cells’ potentially better safety profile, Chen anticipates that they could eventually be used in place of CAR-T cells. For any CAR-NK cells that are now in development to target lymphoma or other types of cancer, it should be possible to adapt them by adding the construct developed in this study, he says.

The researchers now hope to run a clinical trial of this approach, working with colleagues at Dana-Farber. They are also working with a local biotech company to test CAR-NK cells to treat lupus, an autoimmune disorder that causes the immune system to attack healthy tissues and organs.

The research was funded, in part, by Skyline Therapeutics, the Koch Institute Frontier Research Program through the Kathy and Curt Marble Cancer Research Fund and the Elisa Rah (2004, 2006) Memorial Fund, the Claudia Adams Barr Foundation, and the Koch Institute Support (core) Grant from the National Cancer Institute.


Laurent Demanet appointed co-director of MIT Center for Computational Science and Engineering

Applied mathematics professor will join fellow co-director Nicolas Hadjiconstantinou in leading the cross-cutting center.


Laurent Demanet, MIT professor of applied mathematics, has been appointed co-director of the MIT Center for Computational Science and Engineering (CCSE), effective Sept. 1.

Demanet, who holds a joint appointment in the departments of Mathematics and Earth, Atmospheric and Planetary Sciences — where he previously served as director of the Earth Resources Laboratory — succeeds Youssef Marzouk, who is now serving as the associate dean of the MIT Schwarzman College of Computing.

Joining co-director Nicolas Hadjiconstantinou, the Quentin Berg (1937) Professor of Mechanical Engineering, Demanet will help lead CCSE, supporting students, faculty, and researchers while fostering a vibrant community of innovation and discovery in computational science and engineering (CSE).

“Laurent’s ability to translate concepts of computational science and engineering into understandable, real-world applications is an invaluable asset to CCSE. His interdisciplinary experience is a benefit to the visibility and impact of CSE research and education. I look forward to working with him,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

“I’m pleased to welcome Laurent into his new role as co-director of CCSE. His work greatly supports the cross-cutting methodology at the heart of the computational science and engineering community. I’m excited for CCSE to have a co-director from the School of Science, and eager to see the center continue to broaden its connections across MIT,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing, department head of Electrical Engineering and Computer Science, and MathWorks Professor.

Established in 2008, CCSE was incorporated into the MIT Schwarzman College of Computing as one of its core academic units in January 2020. An interdisciplinary research and education center dedicated to pioneering applications of computation, CCSE houses faculty, researchers, and students from a range of MIT schools, such as the schools of Engineering, Science, Architecture and Planning, and the MIT Sloan School of Management, as well as other units of the college.

“I look forward to working with Nicolas and the college leadership on raising the profile of CCSE on campus and globally. We will be pursuing a set of initiatives that span from enhancing the visibility of our research and strengthening our CSE PhD program, to expanding professional education offerings and deepening engagement with our alumni and with industry,” says Demanet.

Demanet’s research lies at the intersection of applied mathematics and scientific computing, which he uses to visualize the structures beneath Earth’s surface. His interests also include machine learning, inverse problems, and wave propagation. Through his position as principal investigator of the Imaging and Computing Group, Demanet and his students aim to answer fundamental questions in computational seismic imaging, increasing the quality and accuracy of mapping and the projection of changes in Earth’s geological structures. His work has applications in environmental monitoring, water resources and geothermal energy, and the understanding of seismic hazards, among other areas.

He joined the MIT faculty in 2009. He received an Alfred P. Sloan Research Fellowship and the U.S. Air Force Young Investigator Award in 2011, and a CAREER award from the National Science Foundation in 2012. He also held the Class of 1954 Career Development Professorship from 2013 to 2016. Prior to coming to MIT, Demanet held the Szegö Assistant Professorship at Stanford University. He completed his undergraduate studies in mathematical engineering and theoretical physics at Université de Louvain in Belgium, and earned a PhD in applied and computational mathematics at Caltech, where he was awarded the William P. Carey Prize for best dissertation in the mathematical sciences.


Study sheds light on musicians’ enhanced attention

Brain imaging suggests people with musical training may be better than others at filtering out distracting sounds.


In a world full of competing sounds, we often have to filter out a lot of noise to hear what’s most important. This critical skill may come more easily for people with musical training, according to scientists at MIT’s McGovern Institute for Brain Research, who used brain imaging to follow what happens when people try to focus their attention on certain sounds.

When Cassia Low Manting, a recent MIT postdoc working in the labs of MIT Professor and McGovern Institute PI John Gabrieli and former McGovern Institute PI Dimitrios Pantazis, asked people to focus on a particular melody while another melody played at the same time, individuals with musical backgrounds were, unsurprisingly, better able to follow the target tune. An analysis of study participants’ brain activity suggests this advantage arises because musical training sharpens neural mechanisms that amplify the sounds they want to listen to while turning down distractions. 

“People can hear, understand, and prioritize multiple sounds around them that flow on a moment-to-moment basis,” explains Gabrieli, who is the Grover Hermann Professor of Health Sciences and Technology at MIT. “This study reveals the specific brain mechanisms that successfully process simultaneous sounds on a moment-to-moment basis and promote attention to the most important sounds. It also shows how musical training alters that processing in the mind and brain, offering insight into how experience shapes the way we listen and pay attention.”

The research team, which also included senior author Daniel Lundqvist at the Karolinska Institute in Sweden, reported their open-access findings Sept. 17 in the journal Science Advances. Manting, who is now at the Karolinska Institute, notes that the research is part of an ongoing collaboration between the two institutions.

Overcoming challenges

Participants in the study had vastly different backgrounds when it came to music. Some were professional musicians with deep training and experience, while others struggled to differentiate between the two tunes they were played, despite each one’s distinct pitch. This disparity allowed the researchers to explore how the brain’s capacity for attention might change with experience. “Musicians are very fun to study because their brains have been morphed in ways based on their training,” Manting says. “It’s a nice model to study these training effects.”

Still, the researchers had significant challenges to overcome. It has been hard to study how the brain manages auditory attention because, when researchers use neuroimaging to monitor brain activity, they see the brain’s response to all sounds: those the listener cares most about as well as those the listener is trying to ignore. It is usually difficult to figure out which brain signals are triggered by which sounds.

Manting and her colleagues overcame this challenge with a method called frequency tagging. Rather than playing the melodies in their experiments at a constant volume, the volume of each melody oscillated, rising and falling with a particular frequency. Each melody had its own frequency, creating detectable patterns in the brain signals that responded to it. “When you play these two sounds simultaneously to the subject and you record the brain signal, you can say, this 39-Hertz activity corresponds to the lower-pitch sound and the 43-Hertz activity corresponds specifically to the higher-pitch sound,” Manting explains. “It is very clean and very clear.”
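The tagging logic can be sketched numerically: give each melody its own slow amplitude modulation, and the power at each tag frequency in the recorded signal then indexes the response to that melody alone. A toy Python sketch (the 39 Hz and 43 Hz tags are from the study; the gains, noise level, and simplified “brain signal” model are our illustrative assumptions):

```python
import numpy as np

fs, dur = 500.0, 10.0                  # sample rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)

# Each melody's loudness oscillates at its own "tag" frequency.
env_low = 1 + 0.5 * np.sin(2 * np.pi * 39 * t)    # lower-pitch melody, 39 Hz tag
env_high = 1 + 0.5 * np.sin(2 * np.pi * 43 * t)   # higher-pitch melody, 43 Hz tag

# Toy "brain signal": the attended stream is represented more strongly
# (gains 1.0 vs 0.4 are made up, not from the study), plus sensor noise.
brain = 1.0 * env_low + 0.4 * env_high + rng.normal(0, 1.0, t.size)

# Power spectrum: each melody's response is separable at its tag frequency.
spectrum = np.abs(np.fft.rfft(brain)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f):
    """Spectral power at the bin nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

print(power_at(39) > power_at(43))   # attended (39 Hz) tag dominates
```

The same readout generalizes: comparing power at 39 Hz versus 43 Hz reveals which stream the listener's brain is tracking more strongly, even though both melodies play at once.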

When they paired frequency tagging with magnetoencephalography, a noninvasive method of monitoring brain activity, the team was able to track how their study participants’ brains responded to each of two melodies during their experiments. While the two tunes played, subjects were instructed to follow either the higher-pitched or the lower-pitched melody. When the music stopped, they were asked about the final notes of the target tune: did they rise or did they fall? The researchers could make this task harder by making the two tunes closer together in pitch, as well as by altering the timing of the notes.

Manting used a survey that asked about musical experience to score each participant’s musicality, and this measure had an obvious effect on task performance: The more musical a person was, the more successful they were at following the tune they had been asked to track.

To look for differences in brain activity that might explain this, the research team developed a new machine-learning approach to analyze their data. They used it to tease apart what was happening in the brain as participants focused on the target tune — even, in some cases, when the notes of the distracting tune played at the exact same time.

Top-down versus bottom-up attention

What they found was a clear separation of brain activity associated with two kinds of attention, known as top-down and bottom-up attention. Manting explains that top-down attention is goal-oriented, involving a conscious focus — the kind of attention listeners called on as they followed the target tune. Bottom-up attention, on the other hand, is triggered by the nature of the sound itself. A fire alarm would be expected to trigger this kind of attention, both with its volume and its suddenness. The distracting tune in the team’s experiments triggered activity associated with bottom-up attention — but more so in some people than in others.

“The more musical someone is, the better they are at focusing their top-down selective attention, and the less the effect of bottom-up attention is,” Manting explains.

Manting expects that musicians use their heightened capacity for top-down attention in other situations, as well. For example, they might be better than others at following a conversation in a room filled with background chatter. “I would put my bet on it that there is a high chance that they will be great at zooming into sounds,” she says.

She wonders, however, if one kind of distraction might actually be harder for a musician to filter out: the sound of their own instrument. Manting herself plays both the piano and the Chinese harp, and she says hearing those instruments is “like someone calling my name.” It’s one of many questions about how musical training affects cognition that she plans to explore in her future work.


Matthew Shoulders named head of the Department of Chemistry

A leading researcher in protein folding biochemistry and next-generation protein engineering techniques will advance chemistry research and education.


Matthew D. Shoulders, the Class of 1942 Professor of Chemistry, a MacVicar Faculty Fellow, and an associate member of the Broad Institute of MIT and Harvard, has been named head of the MIT Department of Chemistry, effective Jan. 16, 2026. 

“Matt has made pioneering contributions to the chemistry research community through his research on mechanisms of proteostasis and his development of next-generation techniques to address challenges in biomedicine and agriculture,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “He is also a dedicated educator, beloved by undergraduates and graduates alike. I know the department will be in good hands as we double down on our commitment to world-leading research and education in the face of financial headwinds.”

Shoulders succeeds Troy Van Voorhis, the Robert T. Haslam and Bradley Dewey Professor of Chemistry, who has been at the helm since October 2019.

“I am tremendously grateful to Troy for his leadership the past six years, building a fantastic community here in our department. We face challenges, but also many exciting opportunities, as a department in the years to come,” says Shoulders. “One thing is certain: Chemistry innovations are critical to solving pressing global challenges. Through the research that we do and the scientists we train, our department has a huge role to play in shaping the future.”

Shoulders studies how cells fold proteins, and he develops ​and applies novel protein engineering techniques to challenges in biotechnology. His work across chemistry and biochemistry fields including proteostasis, extracellular matrix biology, virology, evolution, and synthetic biology is yielding not just important insights into topics like how cells build healthy tissues and how proteins evolve, but also influencing approaches to disease therapy and biotechnology development.

“Matt is an outstanding researcher whose work touches on fundamental questions about how the cell machinery directs the synthesis and folding of proteins. His discoveries about how that machinery breaks down as a result of mutations or in response to stress has a fundamental impact on how we think about and treat human diseases,” says Van Voorhis.

In one part of his current research program, Shoulders is studying how protein folding systems in cells — known as chaperones — shape the evolution of their clients. Among other discoveries, his lab has shown that viral pathogens hijack human chaperones to enable their rapid evolution and escape from host immunity. In related recent work, they have discovered that these same chaperones can promote access to malignancy-driving mutations in tumors. Beyond fundamental insights into evolutionary biology, these findings hold potential to open new therapeutic strategies to target cancer and viral infections.

“Matt’s ability to see both the details and the big picture makes him an outstanding researcher and a natural leader for the department,” says Timothy Swager, the John D. MacArthur Professor of Chemistry. “MIT Chemistry can only benefit from his dedication to understanding and addressing the parts and the whole.” 

Shoulders also leads a food security project through the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Shoulders, along with MIT Research Scientist Robbie Wilson, assembled an interdisciplinary team based at MIT to enhance climate resilience in agriculture by improving one of the most inefficient aspects of photosynthesis, the carbon dioxide-fixing plant enzyme RuBisCO. J-WAFS funded this high-risk, high-reward MIT Grand Challenge project in 2023, and it has received further support from federal research agencies and the Grantham Foundation for the Protection of the Environment. 

“Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists, creating a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team is making a concerted effort using state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”

In addition to his research contributions, Shoulders has taught multiple classes for Course V, including 5.54 (Advances in Chemical Biology) and 5.111 (Principles of Chemical Science), along with a number of other key chemistry classes. His contributions to a 5.111 “bootcamp” through the MITx platform served to address gaps in the classroom curriculum by providing online tools to help undergraduate students better grasp the material in the chemistry General Institute Requirement (GIR). His development of Guided Learning Demonstrations to support first-year chemistry courses at MIT has helped bring the lab to the GIR, and also contributed to the popularity of 5.111 courses offered regularly via MITx.

“I have had the pleasure of teaching with Matt on several occasions, and he is a fantastic educator. He is an innovator both inside and outside the classroom and has an unwavering commitment to his students’ success,” says Van Voorhis of Shoulders, who was named a 2022 MacVicar Faculty Fellow, and who received a Committed to Caring award through the Office of Graduate Education.

Shoulders also founded the MIT Homeschool Internship Program for Science and Technology, which brings high school students to campus for paid summer research experiences in labs across the Institute.

He is a founding member of the Department of Chemistry’s Quality of Life Committee and chair for the last six years, helping to improve all aspects of opportunity, professional development, and experience in the department: “countless changes that have helped make MIT a better place for all,” as Van Voorhis notes, including creating a peer mentoring program for graduate students and establishing universal graduate student exit interviews to collect data for department-wide assessment and improvement.

At the Institute level, Shoulders has served on the Committee on Graduate Programs, the Committee on Sexual Misconduct Prevention and Response (in which he co-chaired the provost’s working group on the Faculty and Staff Sexual Misconduct Survey), and the Committee on Assessment of Biohazards and Embryonic Stem Cell Research Oversight, among other roles.

Shoulders graduated summa cum laude from Virginia Tech in 2004, earning a BS in chemistry with a minor in biochemistry. He earned a PhD in chemistry at the University of Wisconsin at Madison in 2009 under Professor Ronald Raines. Following an American Cancer Society Postdoctoral Fellowship at the Scripps Research Institute, working with professors Jeffery Kelly and Luke Wiseman, Shoulders joined the MIT Department of Chemistry faculty as an assistant professor in 2012. Shoulders also serves as an associate member of the Broad Institute and an investigator at the Center for Musculoskeletal Research at Massachusetts General Hospital.

Among his many awards, Shoulders has received an NIH Director's New Innovator Award under the NIH High-Risk, High-Reward Research Program; an NSF CAREER Award; an American Cancer Society Research Scholar Award; the Camille Dreyfus Teacher-Scholar Award; and most recently the Ono Pharma Foundation Breakthrough Science Award.


Chemists create red fluorescent dyes that may enable clearer biomedical imaging

The new dyes are based on boron-containing molecules that were previously too unstable for practical use.


MIT chemists have designed a new type of fluorescent molecule that they hope could be used for applications such as generating clearer images of tumors.

The new dye is based on a borenium ion — a positively charged form of boron that can emit light in the red to near-infrared range. Until recently, these ions have been too unstable to be used for imaging or other biomedical applications.

In a study appearing today in Nature Chemistry, the researchers showed that they could stabilize borenium ions by attaching them to a ligand. This approach allowed them to create borenium-containing films, powders, and crystals, all of which emit and absorb light in the red and near-infrared range.

That is important because near-IR light is easier to see when imaging structures deep within tissues, which could allow for clearer images of tumors and other structures in the body.

“One of the reasons why we focus on red to near-IR is because those types of dyes penetrate the body and tissue much better than light in the UV and visible range. Stability and brightness of those red dyes are the challenges that we tried to overcome in this study,” says Robert Gilliard, the Novartis Professor of Chemistry at MIT and the senior author of the study.

MIT research scientist Chun-Lin Deng is the lead author of the paper. Other authors include Bi Youan (Eric) Tra PhD ’25, former visiting graduate student Xibao Zhang, and graduate student Chonghe Zhang.

Stabilized borenium

Most fluorescent imaging relies on dyes that emit blue or green light. Those imaging agents work well in cells, but they are not as useful in tissue because low levels of blue and green fluorescence produced by the body interfere with the signal. Blue and green light also scatters in tissue, limiting how deeply it can penetrate.

Imaging agents that emit red fluorescence can produce clearer images, but most red dyes are inherently unstable and don’t produce a bright signal because of their low quantum yields (the ratio of fluorescent photons emitted to photons of light absorbed). For many red dyes, the quantum yield is only about 1 percent.
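To make that definition concrete: at equal absorption, a dye's brightness scales linearly with its quantum yield, so the gap between a 1 percent dye and one in the thirties (as reported later in this article) is a factor of about 30. A toy Python sketch with a made-up absorbed-photon count:

```python
# Quantum yield (QY) = fluorescent photons emitted / photons absorbed.
# The photon count is hypothetical; the two QY values echo the article
# (roughly 1 percent for many red dyes, thirties of percent here).
photons_absorbed = 1_000_000

typical_red_dye = 0.01
borenium_dye = 0.30

emitted_typical = photons_absorbed * typical_red_dye
emitted_borenium = photons_absorbed * borenium_dye

print(f"typical red dye: {emitted_typical:,.0f} photons emitted")
print(f"borenium dye:    {emitted_borenium:,.0f} photons emitted")
```

Since absorbed photons that aren't re-emitted are lost as heat, raising the quantum yield is the most direct route to a brighter signal at a given illumination level.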

Among the molecules that can emit near-infrared light are borenium cations — positively charged ions containing an atom of boron attached to three other atoms.

When these molecules were first discovered in the mid-1980s, they were considered “laboratory curiosities,” Gilliard says. These molecules were so unstable that they had to be handled in a sealed container called a glovebox to protect them from exposure to air, which can lead them to break down.

Later, chemists realized they could make these ions more stable by attaching them to molecules called ligands. Working with these more stable ions, Gilliard’s lab discovered in 2019 that they had some unusual properties: Namely, they could respond to changes in temperature by emitting different colors of light.

However, at that point, “there was a substantial problem in that they were still too reactive to be handled in open air,” Gilliard says.

His lab began working on new ways to further stabilize them using ligands known as carbodicarbenes (CDCs), which they reported in a 2022 study. Due to this stabilization, the compounds can now be studied and handled without using a glovebox. They are also resistant to being broken down by light, unlike many previous borenium-based compounds.

In the new study, Gilliard began experimenting with the anions (negatively charged ions) that are a part of the CDC-borenium compounds. Interactions between these anions and the borenium cation generate a phenomenon known as exciton coupling, the researchers discovered. This coupling, they found, shifted the molecules’ emission and absorption properties toward the infrared end of the color spectrum. These molecules also generated a high quantum yield, allowing them to shine more brightly.

“Not only are we in the correct region, but the efficiency of the molecules is also very suitable,” Gilliard says. “We’re up to percentages in the thirties for the quantum yields in the red region, which is considered to be high for that region of the electromagnetic spectrum.”

Potential applications

The researchers also showed that they could convert their borenium-containing compounds into several different states, including solid crystals, films, powders, and colloidal suspensions.

For biomedical imaging, Gilliard envisions that these borenium-containing materials could be encapsulated in polymers, allowing them to be injected into the body to use as an imaging dye. As a first step, his lab plans to work with researchers in the chemistry department at MIT and at the Broad Institute of MIT and Harvard to explore the potential of imaging these materials within cells.

Because of their temperature responsiveness, these materials could also be deployed as temperature sensors, for example, to monitor whether drugs or vaccines have been exposed to temperatures that are too high or low during shipping.

“For any type of application where temperature tracking is important, these types of ‘molecular thermometers’ can be very useful,” Gilliard says.

If incorporated into thin films, these molecules could also be useful as organic light-emitting diodes (OLEDs), particularly in new types of materials such as flexible screens, Gilliard says.

“The very high quantum yields achieved in the near-IR, combined with the excellent environmental stability, make this class of compounds extremely interesting for biological applications,” says Frieder Jaekle, a professor of chemistry at Rutgers University, who was not involved in the study. “Besides the obvious utility in bioimaging, the strong and tunable near-IR emission also makes these new fluorophores very appealing as smart materials for anticounterfeiting, sensors, switches, and advanced optoelectronic devices.”

In addition to exploring possible applications for these dyes, the researchers are now working on extending their color emission further into the near-infrared region, which they hope to achieve by incorporating additional boron atoms. Those extra boron atoms could make the molecules less stable, so the researchers are also working on new types of carbodicarbenes to help stabilize them.

The research was funded by the Arnold and Mabel Beckman Foundation and the National Institutes of Health.


MIT-affiliated physicists win McMillan Award for discovery of exotic electronic state

Jiaqi Cai and Zhengguang Lu independently discovered that electrons can become fractions of themselves.


Last year, MIT physicists reported in the journal Nature that electrons can become fractions of themselves in graphene, an atomically thin form of carbon. This exotic electronic state, called the fractional quantum anomalous Hall effect (FQAHE), could enable more robust forms of quantum computing.

Now two young MIT-affiliated physicists involved in the discovery of FQAHE have been named the 2025 recipients of the McMillan Award from the University of Illinois for their work. Jiaqi Cai and Zhengguang Lu won the award “for the discovery of fractional quantum anomalous Hall physics in 2D moiré materials.”

Cai is currently a Pappalardo Fellow at MIT working with Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics, and collaborating with several other labs at MIT including Long Ju, the Lawrence and Sarah W. Biedenharn Career Development Associate Professor in the MIT Department of Physics. He discovered FQAHE while working in the laboratory of Professor Xiaodong Xu at the University of Washington.

Lu discovered FQAHE while working as a postdoc in Ju's lab and has since become an assistant professor at Florida State University.

The two independent discoveries were made in the same year.
 
“The McMillan award is the highest honor that a young condensed matter physicist can receive,” says Ju. “My colleagues and I in the Condensed Matter Experiment and the Condensed Matter Theory Group are very proud of Zhengguang and Jiaqi.” 

Ju and Jarillo-Herrero are both also affiliated with the Materials Research Laboratory. 

In addition to a monetary prize and a plaque, Lu and Cai will give a colloquium on their work at the University of Illinois this fall.


A simple formula could guide the design of faster-charging, longer-lasting batteries

MIT researchers developed a model that explains lithium intercalation rates in lithium-ion batteries.


At the heart of all lithium-ion batteries is a simple reaction: Lithium ions dissolved in an electrolyte solution “intercalate” or insert themselves into a solid electrode during battery discharge. When they de-intercalate and return to the electrolyte, the battery charges.

This process happens thousands of times throughout the life of a battery. The amount of power that the battery can generate, and how quickly it can charge, depend on how fast this reaction happens. However, little is known about the exact mechanism of this reaction, or the factors that control its rate.

In a new study, MIT researchers have measured lithium intercalation rates in a variety of different battery materials and used that data to develop a new model of how the reaction is controlled. Their model suggests that lithium intercalation is governed by a process known as coupled ion-electron transfer, in which an electron is transferred to the electrode along with a lithium ion.

Insights gleaned from this model could guide the design of more powerful and faster-charging lithium-ion batteries, the researchers say.

“What we hope is enabled by this work is to get the reactions to be faster and more controlled, which can speed up charging and discharging,” says Martin Bazant, the Chevron Professor of Chemical Engineering and a professor of mathematics at MIT.

The new model may also help scientists understand why tweaking electrodes and electrolytes in certain ways leads to increased energy, power, and battery life — a process that has mainly been done by trial and error.

“This is one of these papers where now we began to unify the observations of reaction rates that we see with different materials and interfaces, in one theory of coupled electron and ion transfer for intercalation, building up previous work on reaction rates,” says Yang Shao-Horn, the J.R. East Professor of Engineering at MIT and a professor of mechanical engineering, materials science and engineering, and chemistry.

Shao-Horn and Bazant are the senior authors of the paper, which appears today in Science. The paper’s lead authors are Yirui Zhang PhD ’22, who is now an assistant professor at Rice University; Dimitrios Fraggedakis PhD ’21, who is now an assistant professor at Princeton University; Tao Gao, a former MIT postdoc who is now an assistant professor at the University of Utah; and MIT graduate student Shakul Pathak.

Modeling lithium flow

For many decades, scientists have hypothesized that the rate of lithium intercalation at a lithium-ion battery electrode is determined by how quickly lithium ions can diffuse from the electrolyte into the electrode. This reaction, they believed, was governed by a model known as the Butler-Volmer equation, originally developed almost a century ago to describe the rate of charge transfer during an electrochemical reaction.
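For reference, the standard textbook form of the Butler-Volmer equation (the general relation, not notation taken from the new paper) expresses the net current density at an electrode as a function of the applied overpotential:

```latex
i = i_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right) - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right]
```

Here $i_0$ is the exchange current density, $\eta$ the overpotential, $\alpha_a$ and $\alpha_c$ the anodic and cathodic charge-transfer coefficients, $F$ Faraday’s constant, $R$ the gas constant, and $T$ the temperature. It is deviations from the rates predicted by this relation that the measurements described below were designed to probe.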

However, when researchers have tried to measure lithium intercalation rates, the measurements they obtained were not always consistent with the rates predicted by the Butler-Volmer equation. Furthermore, obtaining consistent measurements across labs has been difficult, with different research teams reporting measurements for the same reaction that varied by a factor of up to 1 billion.

In the new study, the MIT team measured lithium intercalation rates using an electrochemical technique that involves applying repeated, short bursts of voltage to an electrode. They generated these measurements for more than 50 combinations of electrolytes and electrodes, including lithium nickel manganese cobalt oxide, which is commonly used in electric vehicle batteries, and lithium cobalt oxide, which is found in the batteries that power most cell phones, laptops, and other portable electronics.

For these materials, the measured rates are much lower than has previously been reported, and they do not correspond to what would be predicted by the traditional Butler-Volmer model.

The researchers used the data to come up with an alternative theory of how lithium intercalation occurs at the surface of an electrode. This theory is based on the assumption that in order for a lithium ion to enter an electrode, an electron from the electrolyte solution must be transferred to the electrode at the same time.

“The electrochemical step is not lithium insertion, which you might think is the main thing, but it’s actually electron transfer to reduce the solid material that is hosting the lithium,” Bazant says. “Lithium is intercalated at the same time that the electron is transferred, and they facilitate one another.”

This coupled ion-electron transfer (CIET) lowers the energy barrier that must be overcome for the intercalation reaction to occur, making it more likely to happen. The mathematical framework of CIET allowed the researchers to make reaction rate predictions, which were validated by their experiments and substantially different from those made by the Butler-Volmer model.

Faster charging

In this study, the researchers also showed that they could tune intercalation rates by changing the composition of the electrolyte. For example, swapping in different anions can lower the amount of energy needed to transfer the lithium and electron, making the process more efficient.

“Tuning the intercalation kinetics by changing electrolytes offers great opportunities to enhance the reaction rates, alter electrode designs, and therefore enhance the battery power and energy,” Shao-Horn says.

Shao-Horn’s lab and their collaborators have been using automated experiments to make and test thousands of different electrolytes; the resulting data are used to develop machine-learning models that predict electrolytes with enhanced functions.

The findings could also help researchers to design batteries that would charge faster, by speeding up the lithium intercalation reaction. Another goal is reducing the side reactions that can cause battery degradation when electrons are picked off the electrode and dissolve into the electrolyte.

“If you want to do that rationally, not just by trial and error, you need some kind of theoretical framework to know what are the important material parameters that you can play with,” Bazant says. “That’s what this paper tries to provide.”

The research was funded by Shell International Exploration and Production and the Toyota Research Institute through the D3BATT Center for Data-Driven Design of Rechargeable Batteries.


A cysteine-rich diet may promote regeneration of the intestinal lining, study suggests

The findings may offer a new way to help heal tissue damage from radiation or chemotherapy treatment.


A diet rich in the amino acid cysteine may have rejuvenating effects in the small intestine, according to a new study from MIT. This amino acid, the researchers discovered, can turn on an immune signaling pathway that helps stem cells to regrow new intestinal tissue.

This enhanced regeneration may help to heal injuries from radiation, which often occur in patients undergoing radiation therapy for cancer. The research was conducted in mice, but if future research shows similar results in humans, then delivering elevated quantities of cysteine, through diet or supplements, could offer a new strategy to help damaged tissue heal faster, the researchers say.

“The study suggests that if we give these patients a cysteine-rich diet or cysteine supplementation, perhaps we can dampen some of the chemotherapy or radiation-induced injury,” says Omer Yilmaz, director of the MIT Stem Cell Initiative, an associate professor of biology at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research. “The beauty here is we’re not using a synthetic molecule; we’re exploiting a natural dietary compound.”

While previous research has shown that certain types of diets, including low-calorie diets, can enhance intestinal stem cell activity, the new study is the first to identify a single nutrient that can help intestinal cells to regenerate.

Yilmaz is the senior author of the study, which appears today in Nature. Koch Institute postdoc Fangtao Chi is the paper’s lead author.

Boosting regeneration

It is well-established that diet can affect overall health: High-fat diets can lead to obesity, diabetes, and other health problems, while low-calorie diets have been shown to extend lifespans in many species. In recent years, Yilmaz’s lab has investigated how different types of diets influence stem cell regeneration, and found that high-fat diets, as well as short periods of fasting, can enhance stem cell activity in different ways.

“We know that macro diets such as high-sugar diets, high-fat diets, and low-calorie diets have a clear impact on health. But at the granular level, we know much less about how individual nutrients impact stem cell fate decisions, as well as tissue function and overall tissue health,” Yilmaz says.

In their new study, the researchers began by feeding mice a diet high in one of 20 different amino acids, the building blocks of proteins. For each group, they measured how the diet affected intestinal stem cell regeneration. Among these amino acids, cysteine had the most dramatic effects on stem cells and progenitor cells (immature cells that differentiate into adult intestinal cells).

Further studies revealed that cysteine initiates a chain of events leading to the activation of a population of immune cells called CD8 T cells. When cells in the lining of the intestine absorb cysteine from digested food, they convert it into CoA, a cofactor that is released into the mucosal lining of the intestine. There, CD8 T cells absorb CoA, which stimulates them to begin proliferating and producing a cytokine called IL-22.

IL-22 is an important player in the regulation of intestinal stem cell regeneration, but until now, it wasn’t known that CD8 T cells can produce it to boost intestinal stem cells. Once activated, those IL-22-releasing T cells are primed to help combat any kind of injury that could occur within the intestinal lining.

“What’s really exciting here is that feeding mice a cysteine-rich diet leads to the expansion of an immune cell population that we typically don’t associate with IL-22 production and the regulation of intestinal stemness,” Yilmaz says. “What happens in a cysteine-rich diet is that the pool of cells that make IL-22 increases, particularly the CD8 T-cell fraction.”

These T cells tend to congregate within the lining of the intestine, so they are already in position when needed. The researchers found that the stimulation of CD8 T cells occurred primarily in the small intestine, not in any other part of the digestive tract, which they believe is because most of the protein that we consume is absorbed by the small intestine.

Healing the intestine

In this study, the researchers showed that regeneration stimulated by a cysteine-rich diet could help to repair radiation damage to the intestinal lining. Also, in work that has not been published yet, they showed that a high-cysteine diet had a regenerative effect following treatment with a chemotherapy drug called 5-fluorouracil. This drug, which is used to treat colon and pancreatic cancers, can also damage the intestinal lining.

Cysteine is found in many high-protein foods, including meat, dairy products, legumes, and nuts. The body can also synthesize its own cysteine, by converting the amino acid methionine to cysteine — a process that takes place in the liver. However, cysteine produced in the liver is distributed through the entire body and doesn’t lead to a buildup in the small intestine the way that consuming cysteine in the diet does.

“With our high-cysteine diet, the gut is the first place that sees a high amount of cysteine,” Chi says.

Cysteine has been previously shown to have antioxidant effects, which are also beneficial, but this study is the first to demonstrate its effect on intestinal stem cell regeneration. The researchers now hope to study whether it may also help other types of stem cells regenerate new tissues. In one ongoing study, they are investigating whether cysteine might stimulate hair follicle regeneration.

They also plan to further investigate some of the other amino acids that appear to influence stem cell regeneration.

“I think we’re going to uncover multiple new mechanisms for how these amino acids regulate cell fate decisions and gut health in the small intestine and colon,” Yilmaz says.

The research was funded, in part, by the National Institutes of Health, the V Foundation, the Koch Institute Frontier Research Program via the Kathy and Curt Marble Cancer Research Fund, the Bridge Project — a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center, the American Federation for Aging Research, the MIT Stem Cell Initiative, and the Koch Institute Support (core) Grant from the National Cancer Institute.


MIT cognitive scientists reveal why some sentences stand out from others

Sentences that are highly dissimilar from anything we’ve seen before are more likely to be remembered accurately.


“You still had to prove yourself.”

“Every cloud has a blue lining!”

Which of those sentences are you most likely to remember a few minutes from now? If you guessed the second, you’re probably correct.

According to a new study from MIT cognitive scientists, sentences that stick in your mind longer are those that have distinctive meanings, making them stand out from sentences you’ve previously seen. They found that meaning, not any other trait, is the most important feature when it comes to memorability.

“One might have thought that when you remember sentences, maybe it’s all about the visual features of the sentence, but we found that that was not the case. A big contribution of this paper is pinning down that it is the meaning-related space that makes sentences memorable,” says Greta Tuckute PhD ’25, who is now a research fellow at Harvard University’s Kempner Institute.

The findings support the hypothesis that sentences with distinctive meanings — like “Does olive oil work for tanning?” — are stored in brain space that is not cluttered with sentences that mean almost the same thing. Sentences with similar meanings end up densely packed together and are therefore more difficult to recognize confidently later on, the researchers believe.

“When you encode sentences that have a similar meaning, there’s feature overlap in that space. Therefore, a particular sentence you’ve encoded is not linked to a unique set of features, but rather to a whole bunch of features that may overlap with other sentences,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences (BCS), a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Tuckute and Thomas Clark, an MIT graduate student, are the lead authors of the paper, which appears in the Journal of Memory and Language. MIT graduate student Bryan Medina is also an author.

Distinctive sentences

What makes certain things more memorable than others is a longstanding question in cognitive science and neuroscience. In a 2011 study, Aude Oliva, now a senior research scientist at MIT and MIT director of the MIT-IBM Watson AI Lab, showed that not all items are created equal: Some types of images are much easier to remember than others, and people are remarkably consistent in what images they remember best.

In that study, Oliva and her colleagues found that, in general, images with people in them are the most memorable, followed by images of human-scale space and close-ups of objects. Least memorable are natural landscapes.

As a follow-up to that study, Fedorenko and Oliva, along with Ted Gibson, another faculty member in BCS, teamed up to determine if words also vary in their memorability. In a study published earlier this year, co-led by Tuckute and Kyle Mahowald, a former PhD student in BCS, the researchers found that the most memorable words are those that have the most distinctive meanings.

Words are categorized as more distinctive if they have a single meaning and few or no synonyms — for example, words like “pineapple” or “avalanche,” which were found to be very memorable. On the other hand, words that can have multiple meanings, such as “light,” or words that have many synonyms, like “happy,” were more difficult for people to recognize accurately.

In the new study, the researchers expanded their scope to analyze the memorability of sentences. Just like words, some sentences have very distinctive meanings, while others communicate similar information in slightly different ways.

To do the study, the researchers assembled a collection of 2,500 sentences drawn from publicly available databases that compile text from novels, news articles, movie dialogues, and other sources. Each sentence that they chose contained exactly six words.

The researchers then presented a random selection of about 1,000 of these sentences to each study participant, including repeats of some sentences. Each of the 500 participants in the study was asked to press a button when they saw a sentence that they remembered seeing earlier.

The most memorable sentences — the ones where participants accurately and quickly indicated that they had seen them before — included strings such as “Homer Simpson is hungry, very hungry,” and “These mosquitoes are — well, guinea pigs.”

Those memorable sentences overlapped significantly with the sentences judged to have the most distinctive meanings, as estimated in the high-dimensional vector space of a large language model (LLM) known as Sentence BERT. That model generates a vector representation of an entire sentence, which can be used for tasks such as judging similarity in meaning between sentences. It provided the researchers with a distinctness score for each sentence, based on its semantic similarity to the other sentences in the set.
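This kind of embedding-based distinctness scoring can be illustrated with a minimal sketch. The toy vectors below stand in for Sentence BERT embeddings, and the scoring rule (one minus mean cosine similarity to the other items) is an assumption for illustration, not the paper’s exact method:

```python
import numpy as np

def distinctness_scores(embeddings):
    """For each embedding, return 1 minus its mean cosine similarity
    to all the other embeddings: higher = more semantically distinctive."""
    X = np.asarray(embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    sims = X @ X.T                                    # pairwise cosine similarities
    n = len(X)
    mean_other = (sims.sum(axis=1) - 1.0) / (n - 1)   # exclude self-similarity (= 1)
    return 1.0 - mean_other

# Toy "sentence embeddings": three near-duplicates and one outlier.
emb = [[1.0, 0.0], [0.99, 0.1], [0.98, 0.15], [0.0, 1.0]]
scores = distinctness_scores(emb)
print(scores.argmax())  # the outlier (index 3) scores as most distinctive
```

With real sentences, the embeddings would come from a sentence-encoder model rather than being hand-written, but the geometry is the same: items far from the crowd get high distinctness scores.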

The researchers also evaluated the sentences using a model that predicts memorability based on the average memorability of the individual words in the sentence. This model performed fairly well at predicting overall sentence memorability, but not as well as Sentence BERT. This suggests that the meaning of a sentence as a whole — above and beyond the contributions from individual words — determines how memorable it will be, the researchers say.

Noisy memories

While cognitive scientists have long hypothesized that the brain’s memory banks have a limited capacity, the findings of the new study support an alternative hypothesis that would help to explain how the brain can continue forming new memories without losing old ones.

This alternative, known as the noisy representation hypothesis, says that when the brain encodes a new memory, be it an image, a word, or a sentence, it is represented in a noisy way — that is, this representation is not identical to the stimulus, and some information is lost. For example, for an image, you may not encode the exact viewing angle at which an object is shown, and for a sentence, you may not remember the exact construction used.

Under this theory, a new sentence would be encoded in a similar part of the memory space as sentences that carry similar meanings, whether they were encountered recently or sometime across a lifetime of language experience. This jumbling of similar meanings together increases the amount of noise and can make it much harder, later on, to remember the exact sentence you have seen before.

“The representation is gradually going to accumulate some noise. As a result, when you see an image or a sentence for a second time, your accuracy at judging whether you’ve seen it before will be affected, and it’ll be less than 100 percent in most cases,” Clark says.

However, if a sentence has a unique meaning that is encoded in a less densely crowded space, it will be easier to pick out later on.

“Your memory may still be noisy, but your ability to make judgments based on the representations is less affected by that noise because the representation is so distinctive to begin with,” Clark says.
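The crowding effect described above can be illustrated with a toy simulation — a sketch of the general idea, not the study’s actual model or data. Items with many near-neighbors in the representation space are misrecognized more often under encoding noise than a distinctive outlier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "memory space": 50 items with overlapping features (a dense cluster)
# plus one semantically distinctive item (a far-away outlier).
cluster = rng.normal(loc=0.0, scale=0.3, size=(50, 8))
outlier = np.full(8, 3.0)
memory = np.vstack([cluster, outlier])

def recognized(idx, noise_sd, trials=200):
    """Fraction of trials in which a noisy re-encoding of item idx is
    still closest to its own stored trace (a crude recognition proxy)."""
    hits = 0
    for _ in range(trials):
        probe = memory[idx] + rng.normal(scale=noise_sd, size=8)
        hits += np.argmin(np.linalg.norm(memory - probe, axis=1)) == idx
    return hits / trials

r_out = recognized(50, noise_sd=0.5)  # the distinctive outlier
r_in = recognized(0, noise_sd=0.5)    # a member of the crowded cluster
print(r_out, r_in)
```

Under the same noise level, the outlier is recognized far more reliably, mirroring the intuition that distinctive sentences sit in less cluttered regions of memory.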

The researchers now plan to study whether other features of sentences, such as more vivid and descriptive language, might also contribute to making them more memorable, and how the language system may interact with the hippocampal memory structures during the encoding and retrieval of memories.

The research was funded, in part, by the National Institutes of Health, the McGovern Institute, the Department of Brain and Cognitive Sciences, the Simons Center for the Social Brain, and the MIT Quest for Intelligence.


MIT joins in constructing the Giant Magellan Telescope

The major public-private partnership is expected to strengthen MIT research and US leadership in astronomy and engineering.


The following article is adapted from a joint press release issued today by MIT and the Giant Magellan Telescope.

MIT is lending its support to the Giant Magellan Telescope, joining the international consortium to advance the $2.6 billion observatory in Chile. The Institute’s participation, enabled by a transformational gift from philanthropists Phillip (Terry) Ragon ’72 and Susan Ragon, adds to the momentum to construct the Giant Magellan Telescope, whose 25.4-meter aperture will have five times the light-collecting area and up to 200 times the power of existing observatories.
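The jump in light-collecting power follows from simple geometry: a telescope’s collecting area grows with the square of its aperture diameter. A quick back-of-the-envelope check (illustrative filled-aperture values; the Giant Magellan Telescope’s segmented seven-mirror design means its effective area is somewhat less than a filled 25.4 m disk):

```python
import math

def collecting_area(diameter_m):
    """Light-collecting area of a filled circular aperture, in m^2."""
    return math.pi * (diameter_m / 2) ** 2

# Area scales as diameter squared, so modest gains in aperture
# yield large gains in light-gathering power: a 25.4 m filled
# aperture versus a 10 m one.
ratio = collecting_area(25.4) / collecting_area(10.0)
print(round(ratio, 1))  # -> 6.5
```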

“As philanthropists, Terry and Susan have an unerring instinct for finding the big levers: those interventions that truly transform the scientific landscape,” says MIT President Sally Kornbluth. “We saw this with their founding of the Ragon Institute, which pursues daring approaches to harnessing the immune system to prevent and cure human diseases. With today’s landmark gift, the Ragons enable an equally lofty mission to better understand the universe — and we could not be more grateful for their visionary support.”

MIT will be the 16th member of the international consortium advancing the Giant Magellan Telescope and the 10th participant based in the United States. Together, the consortium has invested $1 billion in the observatory — the largest-ever private investment in ground-based astronomy. The Giant Magellan Telescope is already 40 percent under construction, with major components being designed and manufactured across 36 U.S. states.

“MIT is honored to join the consortium and participate in this exceptional scientific endeavor,” says Ian A. Waitz, MIT’s vice president for research. “The Giant Magellan Telescope will bring tremendous new capabilities to MIT astronomy and to U.S. leadership in fundamental science. The construction of this uniquely powerful telescope represents a vital private and public investment in scientific excellence for decades to come.”

MIT brings to the consortium powerful scientific capabilities and a legacy of astronomical excellence. MIT’s departments of Physics and of Earth, Atmospheric and Planetary Sciences, and the MIT Kavli Institute for Astrophysics and Space Research, are internationally recognized for research in exoplanets, cosmology, and environments of extreme gravity, such as black holes and compact binary stars. MIT’s involvement will strengthen the Giant Magellan Telescope’s unique capabilities in high-resolution spectroscopy, adaptive optics, and the search for life beyond Earth. It also deepens a long-standing scientific relationship: MIT is already a partner in the existing twin Magellan Telescopes at Las Campanas Observatory in Chile — one of the most scientifically valuable observing sites on Earth, and the same site where the Giant Magellan Telescope is now under construction.

“Since Galileo’s first spyglass, the world’s largest telescope has doubled in aperture every 40 to 50 years,” says Robert A. Simcoe, director of the MIT Kavli Institute and the Francis L. Friedman Professor of Physics. “Each generation’s leading instruments have resolved important scientific questions of the day and then surprised their builders with new discoveries not yet even imagined, helping humans understand our place in the universe. Together with the Giant Magellan Telescope, MIT is helping to realize our generation’s contribution to this lineage, consistent with our mission to advance the frontier of fundamental science by undertaking the most audacious and advanced engineering challenges.”

Contributing to the national strategy

MIT’s support comes at a pivotal time for the observatory. In June 2025, the National Science Foundation (NSF) advanced the Giant Magellan Telescope into its Final Design Phase, one of the final steps before it becomes eligible for federal construction funding. To demonstrate readiness and a strong commitment to U.S. leadership, the consortium offered to privately fund this phase, which is traditionally supported by the NSF.

MIT’s investment is an integral part of the national strategy to secure U.S. access to the next generation of research facilities known as “extremely large telescopes.” The Giant Magellan Telescope is a core partner in the U.S. Extremely Large Telescope Program, the nation’s top priority in astronomy. The National Academies’ Astro2020 Decadal Survey called the program “absolutely essential if the United States is to maintain a position as a leader in ground-based astronomy.” This long-term strategy also includes the recently commissioned Vera C. Rubin Observatory in Chile. Rubin is scanning the sky to detect rare, fast-changing cosmic events, while the Giant Magellan Telescope will provide the sensitivity, resolution, and spectroscopic instruments needed to study them in detail. Together, these Southern Hemisphere observatories will give U.S. scientists the tools they need to lead 21st-century astrophysics.

“Without direct access to the Giant Magellan Telescope, the U.S. risks falling behind in fundamental astronomy, as Rubin’s most transformational discoveries will be utilized by other nations with access to their own ‘extremely large telescopes’ under development,” says Walter Massey, board chair of the Giant Magellan Telescope.

MIT’s participation brings the United States a step closer to completing the promise of this powerful new observatory on a globally competitive timeline. With federal construction funding, it is expected that the observatory could reach 90 percent completion in less than two years and become operational by the 2030s.

“MIT brings critical expertise and momentum at a time when global leadership in astronomy hangs in the balance,” says Robert Shelton, president of the Giant Magellan Telescope. “With MIT, we are not just adding a partner; we are accelerating a shared vision for the future and reinforcing the United States’ position at the forefront of science.”

Other members of the Giant Magellan Telescope consortium include the University of Arizona, Carnegie Institution for Science, The University of Texas at Austin, Korea Astronomy and Space Science Institute, University of Chicago, São Paulo Research Foundation (FAPESP), Texas A&M University, Northwestern University, Harvard University, Astronomy Australia Ltd., Australian National University, Smithsonian Institution, Weizmann Institute of Science, Academia Sinica Institute of Astronomy and Astrophysics, and Arizona State University.

A boon for astrophysics research and education

Access to the world’s best optical telescopes is a critical resource for MIT researchers. More than 150 individual science programs at MIT have relied on major astronomical observatories in the past three years, engaging faculty, researchers, and students in investigations into the marvels of the universe. Recent research projects have included chemical studies of the universe’s oldest stars, led by Professor Anna Frebel; spectroscopy of stars shredded by dormant black holes, led by Professor Erin Kara; and measurements of a white dwarf teetering on the precipice of a black hole, led by Professor Kevin Burdge. 

“Over many decades, researchers at the MIT Kavli Institute have used unparalleled instruments to discover previously undetected cosmic phenomena from both ground-based observations and spaceflight missions,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics. “I have no doubt our brilliant colleagues will carry on that tradition with the Giant Magellan Telescope, and I can’t wait to see what they will discover next.”

The Giant Magellan Telescope will also provide a platform for advanced R&D in remote sensing, creating opportunities to build custom infrared and optical spectrometers and high-speed imagers to further study our universe.

“One cannot have a leading physics program without a leading astrophysics program. Access to time on the Giant Magellan Telescope will ensure that future generations of MIT researchers will continue to work at the forefront of astrophysical discovery for decades to come,” says Deepto Chakrabarty, head of the MIT Department of Physics, the William A. M. Burden Professor in Astrophysics, and principal investigator at the MIT Kavli Institute. “Our institutional access will help attract and retain top researchers in astrophysics, planetary science, and advanced optics, and will give our PhD students and postdocs unrivaled educational opportunities.”


The first animals on Earth may have been sea sponges, study suggests

MIT researchers traced chemical fossils in ancient rocks to the ancestors of modern-day demosponges.


A team of MIT geochemists has unearthed new evidence in very old rocks suggesting that some of the first animals on Earth were likely ancestors of the modern sea sponge.

In a study appearing today in the Proceedings of the National Academy of Sciences, the researchers report that they have identified “chemical fossils” that may have been left by ancient sponges in rocks that are more than 541 million years old. A chemical fossil is a remnant of a biomolecule that originated from a living organism that has since been buried, transformed, and preserved in sediment, sometimes for hundreds of millions of years.

The newly identified chemical fossils are special types of steranes, which are the geologically stable form of sterols, such as cholesterol, that are found in the cell membranes of complex organisms. The researchers traced these special steranes to a class of sea sponges known as demosponges. Today, demosponges come in a huge variety of sizes and colors, and live throughout the oceans as soft and squishy filter feeders. Their ancient counterparts may have shared similar characteristics.

“We don’t know exactly what these organisms would have looked like back then, but they absolutely would have lived in the ocean, they would have been soft-bodied, and we presume they didn’t have a silica skeleton,” says Roger Summons, the Schlumberger Professor of Geobiology Emeritus in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

The group’s discovery of sponge-specific chemical fossils offers strong evidence that the ancestors of demosponges were among the first animals to evolve, and that they likely did so much earlier than the rest of Earth’s major animal groups.

The study’s authors, including Summons, are lead author and former MIT EAPS Crosby Postdoctoral Fellow Lubna Shawar, who is now a research scientist at Caltech, along with Gordon Love from the University of California at Riverside, Benjamin Uveges of Cornell University, Alex Zumberge of GeoMark Research in Houston, Paco Cárdenas of Uppsala University in Sweden, and José-Luis Giner of the State University of New York College of Environmental Science and Forestry.

Sponges on steroids

The new study builds on findings that the group first reported in 2009. In that study, the team identified the first chemical fossils that appeared to derive from ancient sponges. They analyzed rock samples from an outcrop in Oman and found a surprising abundance of steranes that they determined were the preserved remnants of 30-carbon (C30) sterols — a rare form of steroid that they showed was likely derived from ancient sea sponges.

The steranes were found in rocks that formed during the Ediacaran Period — which spans from roughly 635 million to about 541 million years ago. This period took place just before the Cambrian, when the Earth experienced a sudden and global explosion of complex multicellular life. The team’s discovery suggested that ancient sponges appeared much earlier than most multicellular life, and were possibly among Earth’s first animals.

However, soon after these findings were released, alternative hypotheses swirled to explain the C30 steranes’ origins, including that the chemicals could have been generated by other groups of organisms or by nonliving geological processes.

The team says the new study reinforces their earlier hypothesis that ancient sponges left behind this special chemical record, as they have identified a new chemical fossil in the same Precambrian rocks that is almost certainly biological in origin.

Building evidence

Just as in their previous work, the researchers looked for chemical fossils in rocks that date back to the Ediacaran Period. They acquired samples from drill cores and outcrops in Oman, western India, and Siberia, and analyzed the rocks for signatures of steranes, the geologically stable form of sterols found in all eukaryotes (plants, animals, and any organism with a nucleus and membrane-bound organelles).

“You’re not a eukaryote if you don’t have sterols or comparable membrane lipids,” Summons says.

A sterol’s core structure consists of four fused carbon rings. Additional carbon side chains and other chemical groups can attach to and extend this core, depending on what an organism’s particular genes can produce. In humans, for instance, the sterol cholesterol contains 27 carbon atoms, while the sterols in plants generally have 29 carbon atoms.

“It’s very unusual to find a sterol with 30 carbons,” Shawar says.

The chemical fossil the researchers identified in 2009 was a 30-carbon sterol. What’s more, the team determined that the compound could only have been synthesized by a distinctive enzyme encoded by a gene common to demosponges.

In their new study, the team focused on the chemistry of these compounds and realized the same sponge-derived gene could produce an even rarer sterol, with 31 carbon atoms (C31). When they analyzed their rock samples for C31 steranes, they found them in surprising abundance, along with the aforementioned C30 steranes.

“These special steranes were there all along,” Shawar says. “It took asking the right questions to seek them out and to really understand their meaning and from where they come.”

The researchers also obtained samples of modern-day demosponges and analyzed them for C31 sterols. They found that, indeed, the sterols — biological precursors of the C31 steranes found in rocks — are present in some species of contemporary demosponges. Going a step further, they chemically synthesized eight different C31 sterols in the lab as reference standards to verify their chemical structures. They then processed the molecules in ways that simulate how sterols change when deposited, buried, and pressurized over hundreds of millions of years. Only two of the eight yielded products that exactly matched the C31 steranes found in the ancient rock samples. The presence of those two, and the absence of the other six, indicates that these compounds were not produced by a random nonbiological process.

The findings, reinforced by multiple lines of inquiry, strongly support the idea that the steranes that were found in ancient rocks were indeed produced by living organisms, rather than through geological processes. What’s more, those organisms were likely the ancestors of demosponges, which to this day have retained the ability to produce the same series of compounds.

“It’s a combination of what’s in the rock, what’s in the sponge, and what you can make in a chemistry laboratory,” Summons says. “You’ve got three supportive, mutually agreeing lines of evidence, pointing to these sponges being among the earliest animals on Earth.”

“In this study we show how to authenticate a biomarker, verifying that a signal truly comes from life rather than contamination or non-biological chemistry,” Shawar adds.

Now that the team has shown C30 and C31 sterols are reliable signals of ancient sponges, they plan to look for the chemical fossils in ancient rocks from other regions of the world. They can only tell from the rocks they’ve sampled so far that the sediments, and the sponges, formed some time during the Ediacaran Period. With more samples, they will have a chance to narrow in on when some of the first animals took form.

This research was supported, in part, by the MIT Crosby Fund, the Distinguished Postdoctoral Fellowship program, the Simons Foundation Collaboration on the Origins of Life, and the NASA Exobiology Program. 


How the brain splits up vision without you even noticing

As an object moves across your field of view, the brain seamlessly hands off visual processing from one hemisphere to the other like cell phone towers or relay racers do, a new MIT study shows.


The brain divides vision between its two hemispheres — what’s on your left is processed by your right hemisphere, and vice versa — but your experience with every bike or bird that you see zipping by is seamless. A new study by neuroscientists at The Picower Institute for Learning and Memory at MIT reveals how the brain handles the transition.

“It’s surprising to some people to hear that there’s some independence between the hemispheres, because that doesn’t really correspond to how we perceive reality,” says Earl K. Miller, Picower Professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences. “In our consciousness, everything seems to be unified.”

There are advantages to processing vision separately on either side of the brain, including the ability to keep track of more things at once, Miller and other researchers have found. But neuroscientists have long wanted to understand how perception nonetheless ends up appearing so unified.

Led by Picower Fellow Matthew Broschard and Research Scientist Jefferson Roy, the research team measured neural activity in the brains of animals as they tracked objects crossing their field of view. The results reveal that different frequencies of brain waves encoded and then transferred information from one hemisphere to the other in advance of the crossing, and then held on to the object representation in both hemispheres until after the crossing was complete. The process is analogous to how relay racers hand off a baton, how a child swings from one monkey bar to the next, and how cellphone towers hand off a call from one to the next as a train passenger travels through their area. In each case, both the sender and the receiver actively hold what is being transferred until the handoff is confirmed.

Witnessing the handoff

To conduct the study, published Sept. 19 in the Journal of Neuroscience, the researchers measured both the electrical spiking of individual neurons and the various frequencies of brain waves that emerge from the coordinated activity of many neurons. They studied the dorsolateral and ventrolateral prefrontal cortex in both hemispheres, brain areas associated with executive functions.

The power fluctuations of the wave frequencies in each hemisphere told the researchers a clear story about how the subjects’ brains transferred information from the “sending” to the “receiving” hemisphere whenever a target object crossed the middle of their field of view. In the experiments, the target was accompanied by a distractor object on the opposite side of the screen to confirm that the subjects were consciously paying attention to the target object’s motion, and not just indiscriminately glancing at whatever happened to pop up onto the screen.

The highest-frequency “gamma” waves, which encode sensory information, peaked in both hemispheres when the subjects first looked at the screen and again when the two objects appeared. When a color change signaled which object was the target to track, the gamma increase was only evident in the “sending” hemisphere (on the opposite side as the target object), as expected. Meanwhile, the power of somewhat lower-frequency “beta” waves, which regulate when gamma waves are active, varied inversely with the gamma waves. These sensory encoding dynamics were stronger in the ventrolateral locations compared to the dorsolateral ones.

Meanwhile, two distinct bands of lower-frequency waves showed greater power in the dorsolateral locations at key moments related to achieving the handoff. About a quarter of a second before a target object crossed the middle of the field of view, “alpha” waves ramped up in both hemispheres and then peaked just after the object crossed. Meanwhile, “theta” band waves peaked after the crossing was complete, only in the “receiving” hemisphere (opposite from the target’s new position).
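For readers unfamiliar with these bands: theta, alpha, beta, and gamma conventionally refer to roughly 4–8 Hz, 8–12 Hz, 13–30 Hz, and above about 30 Hz, respectively (the study’s exact band definitions may differ). A minimal sketch of how power in one band can be estimated from a recorded signal, using a simple periodogram:

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Total spectral power of `signal` between lo and hi Hz,
    estimated from the FFT (a simple periodogram)."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

# Toy example: a pure 10 Hz ("alpha") oscillation shows up in the
# alpha band and not in the gamma band.
fs = 1000.0                       # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)       # 2 seconds of samples
sig = np.sin(2 * np.pi * 10 * t)  # 10 Hz sinusoid
alpha = band_power(sig, fs, 8, 12)
gamma = band_power(sig, fs, 30, 100)
print(alpha > gamma)  # True
```

Real analyses typically use more robust spectral estimators (e.g., multitaper or Welch methods), but the band-masking idea is the same.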

Accompanying the pattern of wave peaks, neuron spiking data showed how the brain’s representation of the target’s location traveled. Using decoder software, which interprets what information the spikes represent, the researchers could see the target representation emerge in the sending hemisphere’s ventrolateral location when it was first cued by the color change. Then they could see that as the target neared the middle of the field of view, the receiving hemisphere joined the sending hemisphere in representing the object, so that they both encoded the information during the transfer.

Doing the wave

Taken together, the results showed that after the sending hemisphere initially encoded the target with a ventrolateral interplay of beta and gamma waves, a dorsolateral ramp up of alpha waves caused the receiving hemisphere to anticipate the handoff by mirroring the sending hemisphere’s encoding of the target information. Alpha peaked just after the target crossed the middle of the field of view, and when the handoff was complete, theta peaked in the receiving hemisphere as if to say, “I got it.”

And in trials where the target never crossed the middle of the field of view, these handoff dynamics were not apparent in the measurements.

The study shows that the brain is not simply tracking objects in one hemisphere and then just picking them up anew when they enter the field of view of the other hemisphere.

“These results suggest there are active mechanisms that transfer information between cerebral hemispheres,” the authors wrote. “The brain seems to anticipate the transfer and acknowledge its completion.”

But they also note, based on other studies, that the system of interhemispheric coordination can sometimes appear to break down in certain neurological conditions including schizophrenia, autism, depression, dyslexia, and multiple sclerosis. The new study may lend insight into the specific dynamics needed for it to succeed.

In addition to Broschard, Roy, and Miller, the paper’s other authors are Scott Brincat and Meredith Mahnke.

Funding for the study came from the Office of Naval Research, the National Eye Institute of the National Institutes of Health, The Freedom Together Foundation, and The Picower Institute for Learning and Memory.


MIT engineers develop a magnetic transistor for more energy-efficient electronics

A new device concept opens the door to compact, high-performance transistors with built-in memory.


Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; senior authors Frances Ross, the TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; and others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
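The limit alluded to here is commonly identified as the thermionic “Boltzmann limit” on subthreshold swing, which sets the minimum gate-voltage change needed to switch a conventional silicon channel. A quick sketch of that standard result (my gloss, not the paper’s argument):

```python
import math

def subthreshold_swing_mV_per_decade(T_kelvin: float) -> float:
    """Boltzmann limit on subthreshold swing: SS = (kT/q) * ln(10),
    the smallest gate-voltage change that can alter the drain current
    of a conventional (thermionic) transistor by a factor of 10."""
    k = 1.380649e-23     # Boltzmann constant, J/K
    q = 1.602176634e-19  # elementary charge, C
    return (k * T_kelvin / q) * math.log(10) * 1e3  # millivolts

print(f"{subthreshold_swing_mV_per_decade(300):.1f} mV/decade at room temperature")
# ~60 mV/decade: no conventional silicon transistor can switch more steeply
```

This is why devices that switch by a different mechanism, such as magnetic state, are attractive: they are not bound by this thermionic floor.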

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.


What does the future hold for generative AI?

At the inaugural MIT Generative AI Impact Consortium Symposium, researchers and business leaders discussed potential advancements centered on this powerful technology.


When OpenAI introduced ChatGPT to the world in 2022, it brought generative artificial intelligence into the mainstream and started a snowball effect that led to its rapid integration into industry, scientific research, health care, and the everyday lives of people who use the technology.

What comes next for this powerful but imperfect tool?

With that question in mind, hundreds of researchers, business leaders, educators, and students gathered at MIT’s Kresge Auditorium for the inaugural MIT Generative AI Impact Consortium (MGAIC) Symposium on Sept. 17 to share insights and discuss the potential future of generative AI.

“This is a pivotal moment — generative AI is moving fast. It is our job to make sure that, as the technology keeps advancing, our collective wisdom keeps pace,” said MIT Provost Anantha Chandrakasan to kick off this first symposium of the MGAIC, a consortium of industry leaders and MIT researchers launched in February to harness the power of generative AI for the good of society.

Underscoring the critical need for this collaborative effort, MIT President Sally Kornbluth said that the world is counting on faculty, researchers, and business leaders like those in MGAIC to tackle the technological and ethical challenges of generative AI as the technology advances.

“Part of MIT’s responsibility is to keep these advances coming for the world. … How can we manage the magic [of generative AI] so that all of us can confidently rely on it for critical applications in the real world?” Kornbluth said.

To keynote speaker Yann LeCun, chief AI scientist at Meta, the most exciting and significant advances in generative AI will most likely not come from continued improvements or expansions of large language models like Llama, GPT, and Claude. Through training, these enormous generative models learn patterns in huge datasets to produce new outputs.

Instead, LeCun and others are working on the development of “world models” that learn the same way an infant does — by seeing and interacting with the world around them through sensory input.

“A 4-year-old has seen as much data through vision as the largest LLM. … The world model is going to become the key component of future AI systems,” he said.
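LeCun’s comparison rests on a back-of-envelope estimate of visual data throughput. One version of that arithmetic, with figures that are my own illustrative assumptions rather than numbers from the talk:

```python
# Rough estimate of bytes of visual data reaching a 4-year-old's brain,
# assuming ~2 million optic-nerve fibers carrying ~1 byte/s each and
# ~12 waking hours per day (illustrative assumptions only).
fibers = 2e6                  # optic-nerve fibers, both eyes
bytes_per_fiber_s = 1         # order-of-magnitude data rate per fiber
waking_hours = 4 * 365 * 12   # waking hours over four years
seconds = waking_hours * 3600

total_bytes = fibers * bytes_per_fiber_s * seconds
print(f"{total_bytes:.1e} bytes")
# ~1e14 bytes, comparable to the scale of the largest LLM training sets
```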

A robot with this type of world model could learn to complete a new task on its own with no training. LeCun sees world models as the best approach for companies to make robots smart enough to be generally useful in the real world.

But even if future generative AI systems do get smarter and more human-like through the incorporation of world models, LeCun doesn’t worry about robots escaping from human control.

Scientists and engineers will need to design guardrails to keep future AI systems on track, but as a society, we have already been doing this for millennia by designing rules to align human behavior with the common good, he said.

“We are going to have to design these guardrails, but by construction, the system will not be able to escape those guardrails,” LeCun said.

Keynote speaker Tye Brady, chief technologist at Amazon Robotics, also discussed how generative AI could impact the future of robotics.

For instance, Amazon has already incorporated generative AI technology into many of its warehouses to optimize how robots travel and move material to streamline order processing.

He expects many future innovations will focus on the use of generative AI in collaborative robotics by building machines that allow humans to become more efficient.

“GenAI is probably the most impactful technology I have witnessed throughout my whole robotics career,” he said.

Other presenters and panelists discussed the impacts of generative AI in businesses, from large-scale enterprises like Coca-Cola and Analog Devices to startups like health care AI company Abridge.

Several MIT faculty members also spoke about their latest research projects, including the use of AI to reduce noise in ecological image data, designing new AI systems that mitigate bias and hallucinations, and enabling LLMs to learn more about the visual world.

After a day spent exploring new generative AI technology and discussing its implications for the future, MGAIC faculty co-lead Vivek Farias, the Patrick J. McGovern Professor at MIT Sloan School of Management, said he hoped attendees left with “a sense of possibility, and urgency to make that possibility real.”


Could a primordial black hole’s last burst explain a mysteriously energetic neutrino?

If a new proposal by MIT physicists bears out, the recent detection of a record-setting neutrino could be the first evidence of elusive Hawking radiation.


The last gasp of a primordial black hole may be the source of the highest-energy “ghost particle” detected to date, a new MIT study proposes.

In a paper appearing today in Physical Review Letters, MIT physicists put forth a strong theoretical case that a recently observed, highly energetic neutrino may have been the product of a primordial black hole exploding outside our solar system.

Neutrinos are sometimes referred to as ghost particles, for their invisible yet pervasive nature: They are the most abundant particle type in the universe, yet they leave barely a trace. Scientists recently identified signs of a neutrino with the highest energy ever recorded, but the source of such an unusually powerful particle has yet to be confirmed.

The MIT researchers propose that the mysterious neutrino may have come from the inevitable explosion of a primordial black hole. Primordial black holes (PBHs) are hypothetical black holes that are microscopic versions of the much more massive black holes that lie at the center of most galaxies. PBHs are theorized to have formed in the first moments following the Big Bang. Some scientists believe that primordial black holes could constitute most or all of the dark matter in the universe today.

Like their more massive counterparts, PBHs should leak energy and shrink over their lifetimes, in a process known as Hawking radiation, which was predicted by the physicist Stephen Hawking. The more a black hole radiates, the hotter it gets and the more high-energy particles it releases. This is a runaway process that should produce an incredibly violent explosion of the most energetic particles just before a black hole evaporates away.

The MIT physicists calculate that, if PBHs make up most of the dark matter in the universe, then a small subpopulation of them would be undergoing their final explosions today throughout the Milky Way galaxy. And there should be a non-negligible chance that such an explosion occurred relatively close to our solar system. The explosion would have released a burst of high-energy particles, including neutrinos, one of which could plausibly have struck a detector on Earth.

If such a scenario had indeed occurred, the recent detection of the highest-energy neutrino would represent the first observation of Hawking radiation, which has long been assumed, but has never been directly observed from any black hole. What’s more, the event might indicate that primordial black holes exist and that they make up most of dark matter — a mysterious substance that comprises 85 percent of the total matter in the universe, the nature of which remains unknown.

“It turns out there’s this scenario where everything seems to line up, and not only can we show that most of the dark matter [in this scenario] is made of primordial black holes, but we can also produce these high-energy neutrinos from a fluke nearby PBH explosion,” says study lead author Alexandra Klipfel, a graduate student in MIT’s Department of Physics. “It’s something we can now try to look for and confirm with various experiments.”

The study’s other co-author is David Kaiser, professor of physics and the Germeshausen Professor of the History of Science at MIT.

High-energy tension

In February, scientists at the Cubic Kilometer Neutrino Telescope, or KM3NeT, reported the detection of the highest-energy neutrino recorded to date. KM3NeT is a large-scale underwater neutrino detector located at the bottom of the Mediterranean Sea, where the environment is meant to mute the effects of any particles other than neutrinos.

The scientists operating the detector picked up signatures of a passing neutrino with an energy of over 100 peta-electron-volts. One peta-electron volt is equivalent to the energy of 1 quadrillion electron volts.
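For a sense of scale, a quick unit conversion (my own arithmetic, not from the study):

```python
EV_TO_JOULES = 1.602176634e-19  # one electron volt in joules

energy_eV = 100e15              # 100 peta-electron-volts = 1e17 eV
energy_J = energy_eV * EV_TO_JOULES
print(f"{energy_J:.3f} J")      # 0.016 J
```

That is a macroscopic amount of energy, carried by a single subatomic particle.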

“This is an incredibly high energy, far beyond anything humans are capable of accelerating particles up to,” Klipfel says. “There’s not much consensus on the origin of such high-energy particles.”

Similarly high-energy neutrinos, though not as high as what KM3NeT observed, have been detected by the IceCube Observatory — a neutrino detector embedded deep in the ice at the South Pole. IceCube has detected about half a dozen such neutrinos, whose unusually high energies have also eluded explanation. Whatever their source, the IceCube observations enable scientists to work out a plausible rate at which neutrinos of those energies typically hit Earth. If this rate were correct, however, it would be extremely unlikely to have seen the ultra-high-energy neutrino that KM3NeT recently detected. The two detectors’ discoveries, then, seemed to be what scientists call “in tension.”

Kaiser and Klipfel, who had been working on a separate project involving primordial black holes, wondered: Could a PBH have produced both the KM3NeT neutrino and the handful of IceCube neutrinos, under conditions in which PBHs comprise most of the dark matter in the galaxy? If they could show a chance existed, it would raise an even more exciting possibility — that both observatories observed not only high-energy neutrinos but also the remnants of Hawking radiation.

“Our best chance”

The first step the scientists took in their theoretical analysis was to calculate how many particles would be emitted by an exploding black hole. All black holes should slowly radiate over time. The larger a black hole, the colder it is, and the lower-energy particles it emits as it slowly evaporates. Thus, any particles that are emitted as Hawking radiation from heavy stellar-mass black holes would be near impossible to detect. By the same token, however, much smaller primordial black holes would be very hot and emit high-energy particles in a process that accelerates the closer the black hole gets to disappearing entirely.
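The inverse relationship between mass and temperature described above is captured by the standard Hawking temperature formula, T = ħc³ / (8πGMk_B). A quick numerical sketch using textbook constants (not taken from the paper):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
kB = 1.380649e-23       # Boltzmann constant, J/K
M_SUN = 1.989e30        # solar mass, kg

def hawking_temperature(mass_kg: float) -> float:
    """Hawking temperature in kelvin; heavier black holes are colder."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * kB)

print(hawking_temperature(M_SUN))  # ~6e-8 K: colder than the cosmic background
print(hawking_temperature(1e9))    # ~1e14 K for a billion-kilogram PBH
```

A solar-mass black hole radiates far too coldly to ever detect, while a tiny, nearly evaporated PBH becomes extraordinarily hot, which is the runaway behavior the article describes.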

“We don’t have any hope of detecting Hawking radiation from astrophysical black holes,” Klipfel says. “So if we ever want to see it, the smallest primordial black holes are our best chance.”

The researchers calculated the number and energies of particles that a black hole should emit, given its temperature and shrinking mass. They estimate that in its final nanosecond, once a black hole has shrunk smaller than an atom, it should emit a final burst of particles, including about 10²⁰ neutrinos — roughly a hundred quintillion of the particles — with energies of about 100 peta-electron-volts (around the energy that KM3NeT observed).
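As a rough consistency check — my own back-of-envelope arithmetic, not a figure from the study — the mass-energy carried by such a burst follows from E = mc²: 10²⁰ neutrinos at roughly 100 PeV each correspond to a remnant mass of order tens of kilograms, i.e., a black hole in its very last moments:

```python
# Back-of-envelope check (not from the study): total energy of the final
# neutrino burst, and the equivalent rest mass via E = m * c^2.
EV_TO_J = 1.602176634e-19  # joules per electron-volt
C = 2.99792458e8           # speed of light, m/s

n_neutrinos = 1e20         # neutrinos in the final burst (from the estimate above)
energy_ev = 100e15         # ~100 PeV per neutrino

total_joules = n_neutrinos * energy_ev * EV_TO_J
mass_kg = total_joules / C**2
print(f"Burst energy: {total_joules:.2e} J -> mass equivalent: {mass_kg:.1f} kg")
```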

They used this result to calculate the number of PBH explosions that would have to occur in a galaxy in order to explain the reported IceCube results. They found that, in our region of the Milky Way galaxy, about 1,000 primordial black holes should be exploding per cubic parsec per year. (A parsec is a unit of distance equal to about 3.26 light-years, or roughly 31 trillion kilometers.)

They then calculated the distance at which one such explosion in the Milky Way could have occurred, such that just a handful of the high-energy neutrinos could have reached Earth and produced the recent KM3NeT event. They found that a PBH would have to explode relatively close to our solar system — at a distance of about 2,000 times the distance between Earth and the Sun.
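To put that distance in astronomical context, a short sketch (my own unit conversion, using standard astronomical constants) expresses 2,000 astronomical units in light-years and parsecs:

```python
# Unit-conversion sketch using standard astronomical constants (IAU values).
AU_M = 1.495978707e11      # astronomical unit (Earth-Sun distance), meters
LY_M = 9.4607304725808e15  # light-year, meters
PC_M = 3.0856775814914e16  # parsec, meters

distance_m = 2000 * AU_M   # ~2,000 times the Earth-Sun distance
print(f"{distance_m / LY_M:.3f} light-years")   # a few hundredths of a light-year
print(f"{distance_m / PC_M:.4f} parsecs")       # well under one parsec
print(f"1 parsec = {PC_M / LY_M:.2f} light-years")  # ~3.26
```

On galactic scales this is remarkably close: a small fraction of a parsec, well inside the cubic-parsec volume used in the rate estimate above.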

The particles emitted from such a nearby explosion would radiate in all directions. However, the team found there is a small, 8 percent chance that, once every 14 years, an explosion happens close enough to the solar system for enough ultra-high-energy neutrinos to reach Earth.

“An 8 percent chance is not terribly high, but it’s well within the range for which we should take such chances seriously — all the more so because so far, no other explanation has been found that can account for both the unexplained very-high-energy neutrinos and the even more surprising ultra-high-energy neutrino event,” Kaiser says.

The team’s scenario seems to hold up, at least in theory. Confirming it will require many more detections of particles, including neutrinos at “insanely high energies,” so that scientists can build up better statistics on such rare events.

“In that case, we could use all of our combined experience and instrumentation, to try to measure still-hypothetical Hawking radiation,” Kaiser says. “That would provide the first-of-its-kind evidence for one of the pillars of our understanding of black holes — and could account for these otherwise anomalous high-energy neutrino events as well. That’s a very exciting prospect!”

In tandem, other efforts to detect nearby PBHs could further bolster the hypothesis that these unusual objects make up most or all of the dark matter.

This work was supported, in part, by the National Science Foundation, MIT’s Center for Theoretical Physics – A Leinweber Institute, and the U.S. Department of Energy.