At MIT, a strong spirit of mentorship shapes how students learn, collaborate, and imagine the future. In a time of accelerating change — from breakthroughs in artificial intelligence to the evolving realities of global research and work — guidance for technical challenges and personal growth is more important than ever.
The Committed to Caring (C2C) program recognizes the outstanding professors who extend this dedication beyond the classroom, nurturing resilience, curiosity, and compassion in a new generation of innovators. The latest cohort of C2C honorees exemplify these values, demonstrating the lasting impact that faculty can have on students’ academic and personal journeys.
The Committed to Caring program is a student-driven initiative that has celebrated exceptional mentorship since 2014. In this cycle, 18 MIT professors have been selected as recipients of the C2C award for 2025-27, joining the ranks of nearly 100 previous honorees.
The following faculty members comprise the 2025-27 Committed to Caring cohort:
Since its launch, the C2C program has placed students at the heart of its nomination process. Graduate students across all departments are invited to share letters recognizing faculty whose mentorship has made a lasting impact on their academic and personal journeys. A selection committee, consisting of both graduate students and staff, reviews nominations to identify those who have meaningfully strengthened the graduate community at MIT.
The selection committee this year included: Zoë Wright (Office of Graduate Education, or OGE), Ryan Rideau, Elizabeth Guttenberg (OGE), Beth Marois (OGE), Sharikka Finley-Moise (OGE), Indrani Saha (History, Theory, and Criticism of Art and Architecture, OGE), Chen Liang (graduate student, MIT Sloan School of Management), Jasmine Aloor (grad student, Department of Aeronautics and Astronautics), Leila Hudson (grad student, Department of Electrical Engineering and Computer Science), and Chair Suraiya Baluch (OGE).
“I wanted to be part of this committee after nominating my own professor in the last cycle, and the experience has been incredibly meaningful,” says Aloor. “I was continually amazed by the ways that so many professors show deep care for their students behind the scenes … What stood out to me most was the breadth of ways these faculty members support their students, check in on them, provide mentorship, and cultivate lifelong bonds, despite being successful and pressed for time as leaders at the top Institute in the world.”
Guttenberg agrees, saying, “Even when these gestures appear simple, they leave a profound and lasting impact on students’ lives and help cultivate the thriving academic community we value.”
Nomination letters illustrate how the efforts of these MIT faculty reflect a deep and enduring commitment to their students’ growth, well-being, and sense of purpose. Their advisees praise these educators for their consistent impact beyond lectures and labs, and for fostering inclusion, support, and genuine connection. Their care and guidance cultivate spaces where students are encouraged not only to excel academically, but also to develop confidence, balance, and a clearer vision of their goals.
Liang underlined that the selection experience “has shown me how many faculty at MIT … help students grow into thoughtful, independent researchers and, just as importantly, into fuller versions of themselves in the world.”
In the months ahead, a series of articles will showcase the honorees in pairs, with a reception this April to recognize their lasting impact. By highlighting these faculty, the Committed to Caring program continues to celebrate and strengthen MIT’s culture of mentorship, respect, and collaboration.
Celebrating worm science
Time and again, an unassuming roundworm has illuminated aspects of biology with major consequences for human health.
For decades, scientists with big questions about biology have found answers in a tiny worm. That worm — a millimeter-long creature called Caenorhabditis elegans — has helped researchers uncover fundamental features of how cells and organisms work. The impact of that work is enormous: Discoveries made using C. elegans have been recognized with four Nobel Prizes and have led to the development of new treatments for human disease.
In a perspective piece published in the November 2025 issue of the journal PNAS, 11 biologists including Robert Horvitz, the David H. Koch (1962) Professor of Biology at MIT, celebrate Nobel Prize-winning advances made through research in C. elegans. The authors discuss how that work has led to advances for human health, and highlight how a uniquely collaborative community among worm researchers has fueled the field.
MIT scientists are well represented in that community: The prominent worm biologists who coauthored the PNAS paper include former MIT graduate students Andrew Fire PhD ’83 and Paul Sternberg PhD ’84, now at Stanford University and Caltech, respectively; and two past members of Horvitz’s lab, Victor Ambros ’75, PhD ’79, who is now at the University of Massachusetts Medical School, and former postdoc Gary Ruvkun of Massachusetts General Hospital. Ann Rougvie at the University of Minnesota is the paper’s corresponding author.
“This tiny worm is beautiful — elegant both in its appearance and in its many contributions to our understanding of the biological universe in which we live,” says Horvitz, who in 2002 was awarded the Nobel Prize in Physiology or Medicine, along with colleagues Sydney Brenner and John Sulston, for discoveries that helped explain how genes regulate programmed cell death and organ development.
Early worm discoveries
Those discoveries were among the early successes in C. elegans research, made by pioneering scientists who recognized the power of the microscopic roundworm. C. elegans offers many advantages for researchers: The worms are easy to grow and maintain in labs; their transparent bodies make cells and internal processes readily visible under a microscope; they are cellularly very simple (e.g., they have only 302 nerve cells, compared with about 100 billion in a human); and their genomes can be readily manipulated to study gene function.
Most importantly, many of the molecules and processes that operate in C. elegans have been retained throughout evolution, meaning discoveries made using the worm can have direct relevance to other organisms, including humans.
“Many aspects of biology are ancient and evolutionarily conserved,” says Horvitz, who is also a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research, as well as an investigator at the Howard Hughes Medical Institute. “Such shared mechanisms can be most readily revealed by analyzing organisms that are highly tractable in the laboratory.”
In the 1960s, Brenner, a molecular biologist who was curious about how animals’ nervous systems develop and function, recognized that C. elegans offered unique opportunities to study these processes. Once he began developing the worm into a model for laboratory studies, it did not take long for other biologists to join him to take advantage of the new system.
In the 1970s, the unique features of the worm allowed Sulston to track the transformation of a fertilized egg into an adult animal, tracing the origins of each of the adult worm’s 959 cells. His studies revealed that in every developing worm, cells divide and mature in predictable ways. He also learned that some of the cells created during development do not survive into adulthood, and are instead eliminated by a process termed programmed cell death.
By seeking mutations that perturbed the process of programmed cell death, Horvitz and his colleagues identified key regulators of that process, which is sometimes referred to as apoptosis. These regulators, which both promote and oppose apoptosis, turned out to be vital for programmed cell death across the animal kingdom.
In humans, apoptosis shapes developing organs, refines brain circuits, and optimizes other tissue structures. It also modulates our immune systems and eliminates cells that are in danger of becoming cancerous. The human version of CED-9, the anti-apoptotic regulator that Horvitz’s team discovered in worms, is BCL-2. Researchers have shown that activating apoptotic cell death by blocking BCL-2 is an effective treatment for certain blood cancers. Today, researchers are also exploring new ways of treating immune disorders and neurodegenerative disease by manipulating apoptosis pathways.
Collaborative worm community
Horvitz and his colleagues’ discoveries about apoptosis helped demonstrate that understanding C. elegans biology has direct relevance to human biology and disease. Since then, a vibrant and closely connected community of worm biologists — including many who trained in Horvitz’s lab — has continued to carry out impactful work. In their PNAS article, Horvitz and his coauthors highlight that early work, as well as the Nobel Prize-winning work of:
Horvitz and his coauthors stress that it was not only the worm itself that made these discoveries possible, but also a host of resources that facilitate collaboration within the worm community and enable its scientists to build upon the work of others. Scientists who study C. elegans have embraced this open, collaborative spirit since the field’s earliest days, Horvitz says, citing the Worm Breeder’s Gazette, an early newsletter where scientists shared their observations, methods, and ideas.
Today, scientists who study C. elegans — whether the organism is the centerpiece of their lab or they are looking to supplement studies of other systems — contribute to and rely on online resources like WormAtlas and WormBase, as well as the Caenorhabditis Genetics Center, to share data and genetic tools. Horvitz says these resources have been crucial to his own lab’s work; his team uses them every day.
Just as molecules and processes discovered in C. elegans have pointed researchers toward important pathways in human cells, the worm has also been a vital proving ground for developing methods and approaches later deployed to study more complex organisms. For example, C. elegans, with its 302 neurons, was the first animal for which neuroscientists successfully mapped all of the connections of the nervous system. The resulting wiring diagram, or connectome, has guided countless experiments exploring how neurons work together to process information and control behavior. Informed by both the power and limitations of the C. elegans connectome, scientists are now mapping more complex circuitry, such as the 139,000-neuron brain of the fruit fly, whose connectome was completed in 2024.
C. elegans remains a mainstay of biological research, including in neuroscience. Scientists worldwide are using the worm to explore new questions about neural circuits, neurodegeneration, development, and disease. Horvitz’s lab continues to turn to C. elegans to investigate the genes that control animal development and behavior. His team is now using the worm to explore how animals develop a sense of time and transmit that information to their offspring.
Also at MIT, Steven Flavell’s team in the Department of Brain and Cognitive Sciences and The Picower Institute for Learning and Memory is using the worm to investigate how neural connectivity, activity, and modulation integrate internal states, such as hunger, with sensory information, such as the smell of food, to produce sometimes long-lasting behaviors. (Flavell is Horvitz’s academic grandson, as Flavell trained with one of Horvitz’s postdoctoral trainees.)
As new technologies accelerate the pace of scientific discovery, Horvitz and his colleagues are confident that the humble worm will bring more unexpected insights.
New research may help scientists predict when a humid heat wave will break
As these events become more common at midlatitudes, a phenomenon called an atmospheric inversion will determine how long they last.
A long stretch of humid heat followed by intense thunderstorms is a weather pattern historically seen mostly in and around the tropics. But climate change is making humid heat waves and extreme storms more common in traditionally temperate midlatitude regions such as the midwestern U.S., which has seen episodes of unusually high heat and humidity in recent summers.
Now, MIT scientists have identified a key condition in the atmosphere that determines how hot and humid a midlatitude region can get, and how intense related storms can become. The results may help climate scientists gauge a region’s risk for humid heat waves and extreme storms as the world continues to warm.
In a study appearing this week in the journal Science Advances, the MIT team reports that a region’s maximum humid heat and storm intensity are limited by the strength of an “atmospheric inversion” — a weather condition in which a layer of warm air settles over cooler air.
Inversions are known to act as an atmospheric blanket that traps pollutants at ground level. Now, the MIT researchers have found atmospheric inversions also trap and build up heat and moisture at the surface, particularly in midlatitude regions. The more persistent an inversion, the more heat and humidity a region can accumulate at the surface, which can lead to more oppressive, longer-lasting humid heat waves.
And, when an inversion eventually weakens, the accumulated heat energy is released as convection, which can whip up the hot and humid air into intense thunderstorms and heavy rainfall.
The team says this effect is especially relevant for midlatitude regions, where atmospheric inversions are common. In the U.S., regions to the east of the Rocky Mountains often experience inversions of this kind, with relatively warm air aloft sitting over cooler air near the surface.
As climate change further warms the atmosphere in general, the team suspects that inversions may become more persistent and harder to break. This could mean more frequent humid heat waves and more intense storms for places that are not accustomed to such extreme weather.
“Our analysis shows that the eastern and midwestern regions of the U.S. and the eastern Asian regions may be new hotspots for humid heat in the future climate,” says study author Funing Li, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).
“As the climate warms, theoretically the atmosphere will be able to hold more moisture,” adds co-author and EAPS Assistant Professor Talia Tamarin-Brodsky. “Which is why new regions in the midlatitudes could experience moist heat waves that will cause stress that they weren’t used to before.”
Air energetics
The atmosphere’s layers generally get colder with altitude. In these typical conditions, when a heat wave comes through a region, it warms the air at ground level. Since warm air is lighter than cold air, it will eventually rise, like a hot air balloon, prompting colder air to sink. This rise and fall of air sets off convection, like bubbles in boiling water. When the warm air reaches colder altitudes, its moisture condenses into droplets that rain out, typically as a thunderstorm, which can often relieve a heat wave.
For their new study, Li and Tamarin-Brodsky wondered: What would it take to get air at the surface to convect and ultimately end a heat wave? Put another way: What sets the limit to how hot a region can get before air begins to convect to eventually rain?
The team treated the question as a problem of energy. Heat is energy that can be thought of in two forms: the energy that comes from dry heat (i.e., temperature), and the energy that comes from latent, or moist, heat. The scientists reasoned that, for a given portion or “parcel” of air, there is some amount of moisture that, when condensed, contributes to that air parcel’s total energy. Depending on how much energy an air parcel has, it could start to convect, rise up, and eventually rain out.
“Imagine putting a balloon around a parcel of air and asking, will it stay in the same place, will it go up, or will it sink?” Tamarin-Brodsky says. “It’s not just about warm air that’s lifting. You also have to think about the moisture that’s there. So we consider the energetics of an air parcel while taking into account the moisture in that air. Then we can find the maximum ‘moist energy’ that can accumulate near the surface before the air becomes unstable and convects.”
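The parcel energetics described above can be sketched with the textbook moist static energy formula, MSE = cpT + gz + Lvq, which combines dry heat, altitude, and latent (moist) heat into one energy budget. The snippet below is a minimal illustration of that idea, comparing a humid surface parcel against a warm layer aloft; the constants are standard, but the example temperatures, humidities, and the simple cap comparison are illustrative assumptions, not the authors’ actual calculation.

```python
# Illustrative sketch: moist static energy (MSE) of an air parcel.
# MSE = cp*T + g*z + Lv*q combines dry heat, height, and latent heat.
# Example values are assumptions for demonstration only.

CP = 1004.0   # specific heat of dry air at constant pressure, J/(kg*K)
G = 9.81      # gravitational acceleration, m/s^2
LV = 2.5e6    # latent heat of vaporization of water, J/kg

def moist_static_energy(temp_k, height_m, specific_humidity):
    """Return MSE in J/kg for a parcel with temperature temp_k (K),
    altitude height_m (m), and specific humidity (kg water per kg air)."""
    return CP * temp_k + G * height_m + LV * specific_humidity

# Hot, humid surface parcel: 35 C (308.15 K), q = 20 g/kg, at ground level
surface = moist_static_energy(308.15, 0.0, 0.020)

# Warm inversion layer aloft at 2 km acting as the "cap": 22 C, drier air
cap = moist_static_energy(295.15, 2000.0, 0.010)

print(f"surface parcel MSE: {surface / 1000:.0f} kJ/kg")
print(f"inversion cap MSE:  {cap / 1000:.0f} kJ/kg")
print("parcel can break the cap" if surface > cap
      else "cap holds; heat and moisture keep building")
```

In this toy comparison, the surface parcel only convects once its moist energy exceeds that of the capping layer; a stronger or warmer inversion raises the threshold, letting more heat and humidity accumulate first.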
Heat barrier
As they worked through their analysis, the researchers found that the maximum amount of moist energy, or the highest level of heat and humidity that the air can hold, is set by the presence and strength of an atmospheric inversion. In cases where atmospheric layers are inverted (when a layer of warm or light air settles over colder or heavier, ground-level air), the air has to accumulate more heat and moisture in order for an air parcel to build up enough energy to lift up and break through the inversion layer. The more persistent the inversion is, the hotter and more humid air must get before it can rise up and convect.
Their analysis suggests that an atmospheric inversion can increase a region’s capacity to hold heat and humidity. How high this heat and humidity can get depends on how stable the inversion is. If a blanket of warm air parks over a region without moving, more humid heat can build up than if the blanket is quickly removed. When the air eventually convects, the accumulated heat and moisture will generate stronger, more intense storms.
“This increasing inversion has two effects: more severe humid heat waves, and less frequent but more extreme convective storms,” Tamarin-Brodsky says.
Inversions in the atmosphere form in various ways. At night, the surface that warmed during the day cools by radiating heat to space, making the air in contact with it cooler and denser than the air above. This creates a shallow layer in which temperature increases with height, called a nocturnal inversion. Inversions can also form when a shallow layer of cool marine air moves inland from the ocean and slides beneath warmer air over the land, leaving cool air near the surface and warmer air above. In some cases, persistent inversions can form when air heated over sun-warmed mountains is carried over colder low-lying regions, so that a warm layer aloft caps cooler air near the ground.
“The Great Plains and the Midwest have had many inversions historically due to the Rocky Mountains,” Li says. “The mountains act as an efficient elevated heat source, and westerly winds carry this relatively warm air downstream into the central and midwestern U.S., where it can help create a persistent temperature inversion that caps colder air near the surface.”
“In a future climate for the Midwest, they may experience both more severe thunderstorms and more extreme humid heat waves,” Tamarin-Brodsky says. “Our theory gives an understanding of the limit for humid heat and severe convection for these communities that will be future heat wave and thunderstorm hotspots.”
This research is part of the MIT Climate Grand Challenge on Weather and Climate Extremes. Support was provided by Schmidt Sciences.
MIT in the media: 2025 in review
MIT community members made headlines with key research advances and their efforts to tackle pressing challenges.
“At MIT, innovation ranges from awe-inspiring technology to down-to-Earth creativity,” noted Chronicle during a campus visit this year for an episode of the program. In 2025, MIT researchers made headlines across print publications, podcasts, and video platforms for key scientific advances, from breakthroughs in quantum and artificial intelligence to new efforts aimed at improving pediatric health care and cancer diagnosis.
MIT faculty, researchers, students, alumni, and staff helped demystify new technologies, highlighted the practical, hands-on learning the Institute is known for, and shared what inspires their research with viewers, readers, and listeners around the world. Below is a sampling of news moments to revisit.
Let’s take a closer look at MIT: It’s alarming to see such a complex, important institution subject to the whims of today’s politics
Washington Post columnist George F. Will reflects on MIT and his view of “the damage that can be done to America’s meritocracy by policies motivated by hostility toward institutions vital to it.” Will notes that MIT has an “astonishing economic multiplier effect: MIT graduates have founded companies that have generated almost $1.9 trillion in annual revenue (a sum almost equal to Russia’s GDP) and 4.6 million jobs.”
Full story via The Washington Post
At MIT, groundbreaking ideas blend science and breast cancer detection innovation
Chronicle visited MIT this spring to learn more about how the Institute “nurtures groundbreaking efforts, reminding us that creativity and science thrive together, inspiring future advancements in engineering, medicine, and beyond.”
Full story via Chronicle
New MIT provost looks to build more bridges with CEOs
Provost Anantha Chandrakasan shares his energy and enthusiasm for MIT, and his goals for the Institute.
Full story via The Boston Globe
Five things New England researchers helped develop with federal funding
Professors John Guttag and David Mindell discuss MIT’s long history of developing foundational technologies — including the internet and the first widely used electronic navigation system — with the support of federal funding.
Full story via The Boston Globe
Bostonians of the Year 2025: First responders, university presidents, and others who exemplified courage
President Sally Kornbluth is honored by The Boston Globe as one of the Bostonians of the Year, a list that spotlights individuals across the region who, in choosing the difficult path, “showed us what strength looks like.” Kornbluth was recognized for being one of the “most prominent voices rallying to protect academic freedom.”
Full story via The Boston Globe
Practical education and workforce preparation
College students flock to a new major: AI
MIT’s new Artificial Intelligence and Decision Making major is aimed at teaching students to “develop AI systems and study how technologies like robots interact with humans and the environment.”
Full story via The New York Times
50 colleges with the best ROI
MIT has been named among the top colleges in the country for return on investment. MIT “is need-blind and full-need for undergraduate students. Six out of 10 students receive financial aid, and almost 88% of the Class of 2025 graduated debt-free.”
Full story via Boston 25
Desirée Plata: Chemist, oceanographer, engineer, entrepreneur
Professor Desirée Plata explains that she is most proud of her work as an educator. “The faculty of the world are training the next generation of researchers,” says Plata. “We need a trained workforce. We need patient chemists who want to solve important problems.”
Full story via Chemical & Engineering News
Taking a quantum leap
MIT launches quantum initiative to tackle challenges in science, health care, national security
MIT is “taking a quantum leap” with the launch of the new MIT Quantum Initiative (QMIT). “There isn’t a more important technological field right now than quantum, with its enormous potential for impact on both fundamental research and practical problems,” said President Sally Kornbluth.
Full story via State House News Service
Peter Shor on how quantum tech can help climate
Professor Peter Shor helps disentangle quantum technologies.
Full story via The Quantum Kid
MIT researchers develop device to enable direct communication between multiple quantum processors
MIT researchers made a key advance in the creation of a practical quantum computer.
Full story via Military & Aerospace Electronics
Fortifying national security and aiding disaster response
Nano-material breakthrough could revolutionize night vision
MIT researchers developed “a new way to make large ultrathin infrared sensors that don’t need cryogenic cooling and could radically change night vision for the military.”
Full story via Defense One
MIT researchers develop robot designed to help first-responders in disaster situations
Researchers at MIT engineered SPROUT (Soft Pathfinding Robotic Observation Unit), a robot aimed at assisting first-responders.
Full story via WHDH
MIT scientists make “smart” clothes that warn you when you’re sick
As part of an effort to help keep service members safe, MIT scientists created a programmable fiber that can be stitched into clothing to help monitor the wearer’s health.
Full story via FOX 28
MIT Lincoln Lab develops ocean-mapping technology
MIT Lincoln Laboratory researchers are developing “automated electric vessels to map the ocean floor and improve search and rescue missions.”
Full story via Chronicle
Transformative tech
This MIT scientist is rewiring robots to keep the humanity in tech
Professor Daniela Rus, director of the Computer Science and Artificial Intelligence Lab, discusses her work revolutionizing the field of robotics by bringing “empathy into engineering and proving that responsibility is as radical and as commercially attractive as unguarded innovation.”
Full story via Forbes
Watch this tiny robot somersault through the air like an insect
Professor Kevin Chen designed a tiny, insect-sized aerial microrobot.
Full story via Science
It's actually really hard to make a robot, guys
Professor Pulkit Agrawal delves into his work engineering a simulator that can be used to train robots.
Full story via NPR
Shape-shifting fabrics and programmable materials redefine design at MIT
Associate Professor Skylar Tibbits is embedding intelligence into the materials around us, while Professor Caitlin Mueller and Sandy Curth PhD ’25 are digging into eco-friendly construction.
Full story via Chronicle
Building a healthier future
MIT launches pediatric research hub to address access gaps
The Hood Pediatric Innovation Hub is addressing “underinvestment in pediatric healthcare innovations.”
Full story via Boston Business Journal
Bionic knee helps amputees walk naturally again
Professor Hugh Herr developed a prosthetic that could increase mobility for above-the-knee amputees. “The bionic knee developed by MIT doesn’t just restore function, it redefines it.”
Full story via Fox News
MIT drug hunters are using AI to design completely new antibiotics
Professor James Collins is using AI to develop new compounds to combat antibiotic resistance.
Full story via Fast Company
Innovative once-weekly capsule helps quell schizophrenia symptoms
A new pill from the lab of Associate Professor Giovanni Traverso “can greatly simplify the drug schedule faced by schizophrenia patients.”
Full story via Newsmax
Renewing American manufacturing
US manufacturing is in “pretty bad shape.” MIT hopes to change that.
MIT launched the Initiative for New Manufacturing to help “build the tools and talent to shape a more productive and sustainable future for manufacturing.”
Full story via Manufacturing Dive
Giving US manufacturing a boost
Ben Armstrong of the MIT Industrial Performance Center discusses how to reinvigorate manufacturing in America.
Full story via Marketplace
New England companies are sparking an industrial revolution. Here’s how to harness it.
Professor David Mindell spotlights how “a new wave of industrial companies, many in New England, are leveraging new technologies to create jobs and empower workers.”
Full story via The Boston Globe
Improving aging
My day as an 80-year-old. What an age-simulation suit taught me.
To get a better sense of the experience of aging, Wall Street Journal reporter Amy Dockser Marcus donned the MIT AgeLab’s age-simulation suit and embarked on multiple activities.
Full story via The Wall Street Journal
New mobile robot helps seniors walk safely and prevent falls
A mobile robot created by MIT engineers is designed to help prevent falls. “It's easy to see how something like this could make a big difference for seniors wanting to stay independent.”
Full story via Fox News
The senior population is booming. Caregiving is struggling to keep up
Professor Jonathan Gruber discusses the labor shortages impacting senior care.
Full story via CNBC
Upping our energy resilience
New MIT collaboration with GE Vernova aims to accelerate energy transition
“A great amount of innovation happens in academia. We have a longer view into the future,” says Provost Anantha Chandrakasan of the MIT-GE Vernova Energy and Climate Alliance.
Full story via The Boston Globe
The environmental impacts of generative AI
Noman Bashir, a fellow with MIT’s Climate and Sustainability Consortium, explores the environmental impacts of generative AI.
Full story via Fox 13
Is the clean energy economy doomed?
Professor Christopher Knittel discusses how the U.S. can be in the best position for global energy dominance.
Full story via Marketplace
Advancing American workers
WTH can we do to prevent a second China shock? Professor David Autor explains
Professor David Autor shares his research examining the long-term impact of China entering the World Trade Organization, how the U.S. can protect vital industries from unfair trade practices, and the potential impacts of AI on workers.
Full story via American Enterprise Institute
The fight over robots threatening American jobs
Professor Daron Acemoglu highlights the economic and societal implications of integrating automation in the workforce, advocating for policies aimed at assisting workers.
Full story via Financial Times
Moving toward automation
Research Scientist Eva Ponce of the MIT Center for Transportation and Logistics notes that robotics and AI technologies are “replacing some jobs — particularly more manual tasks including heavy lifting — but have also offered new opportunities within warehouse operations.”
Full story via Financial Times
Planetary defense and out-of-this world exploration
MIT researchers create new asteroid detection methods to help protect Earth
Associate Professor Julien de Wit and Research Scientist Artem Burdanov discuss their work developing a new method to track asteroids that could impact Earth.
Full story via WBZ Radio
What happens to the bodies of NASA astronauts returning to Earth?
Professor Dava Newman speaks about how long-duration stays in space can affect the human body.
Full story via News Nation
Lunar lander Athena is packed and ready to explore the moon. Here’s what’s on board
MIT engineers sent three payloads into space on a course set for the moon’s south polar region.
Full story via USA Today
Scanning the heavens at the Vatican Observatory
Br. Guy Consolmagno '74, SM '75, director of the Vatican Observatory, and graduate student Isabella Macias share their experiences studying astronomy and planetary formation at the Vatican Observatory. “The Vatican has such a deep, rich history of working with astronomers,” says Macias. “It shows that science is not only for global superpowers around the world, but it's for students, it's for humanity.”
Full story via CBS News Sunday Morning
The story of real-life rocket scientists
Professor Kerri Cahoy takes viewers on an out-of-this-world journey into how a college internship inspired her research on space and satellites.
Full story via Bloomberg Television
On the air
While digital currency initiatives expand, we ask: What’s the future of cash?
Neha Narula, director of the MIT Digital Currency Initiative, examines the future of cash as the use of digital currencies expands.
Full story via USA Today
The high stakes of the AI economy
Professor Asu Ozdaglar, head of the Department of Electrical Engineering and Computer Science and deputy dean of the MIT Schwarzman College of Computing, explores AI’s opportunities and risks — and whether it can be regulated without stifling progress.
Full story via Is Business Broken?
The LIGO Lab is pushing the boundaries of gravitational-wave research
Associate Professor Matt Evans explores the future of gravitational wave research and how Cosmic Explorer, the next-generation gravitational wave observatory, will help unearth secrets of the early universe.
Full story via Scientific American
Space junk: The impact of global warming on satellites
Graduate student Will Parker discusses his research examining the impact of climate change on satellites.
Full story via USA Today
Endometriosis is common. Why is getting diagnosed so hard?
Professor Linda Griffith shares her work studying endometriosis and her efforts to improve healthcare for women.
Full story via Science Friday
There’s nothing small about this nanoscale research
Professor Vladimir Bulović takes listeners on a tour of MIT.nano, MIT’s “clean laboratory facility that is critical to nanoscale research, from microelectronics to medical nanotechnology.”
Full story via Scientific American
Marrying science and athletics
The MIT scientist behind the “torpedo bats” that are blowing up baseball
Aaron Leanhardt PhD ’03 went from an MIT graduate student who was part of a research team that “cooled sodium gas to the lowest temperature ever recorded in human history” to inventor of the torpedo baseball bat, “perhaps the most significant development in bat technology in decades.”
Full story via The Wall Street Journal
Engineering athletes redefine routine
After suffering a concussion during her sophomore year, Emiko Pope ’25 was inspired to explore the effectiveness of concussion headbands.
Full story via American Society of Mechanical Engineers
“I missed talking math with people”: why John Urschel left the NFL for MIT
Assistant Professor John Urschel shares his decision to call an audible and leave his NFL career to focus on his love for math at MIT.
Full story via The Guardian
Making a statement, MIT’s football team dons extra head padding for safety
It’s a piece of equipment that may become more widely used as research continues into its effectiveness — including from at least one of the players on the current team.
Full story via GBH Morning Edition
Agricultural efficiency
New MIT breakthrough could save farmers billions on pesticides
MIT engineers developed a system that helps pesticides adhere more effectively to plant leaves, allowing farmers to use fewer chemicals.
Full story via Michigan Farm News
Bug-sized robots could help pollination on future farms
Insect-sized robots crafted by MIT researchers could one day be used to help with farming practices like artificial pollination.
Full story via Reuters
See how MIT researchers harvest water from the air
An ultrasonic device created by MIT engineers can extract clean drinking water from atmospheric moisture.
Full story via CNN
Appreciating art
Meet the engineer using deep learning to restore Renaissance art
Graduate student Alex Kachkine talks about his work applying AI to develop a restoration method for damaged artwork.
Full story via Nature
MIT’s Linde Music Building opens with a free festival
“The extent of art-making on the MIT campus is equal to that of a major city,” says Institute Professor Marcus Thompson. “It’s a miracle that it’s all right here, by people in science and technology who are absorbed in creating a new world and who also value the past, present and future of music and the arts.”
Full story via Cambridge Day
“Remembering the Future” on display at the MIT Museum
The “Remembering the Future” exhibit at the MIT Museum features a sculptural installation that uses “climate data from the last ice age to the present, as well as projected future environments, to create a geometric design.”
Full story via The New York Times
In 2025, MIT maintained its standard of community and research excellence amidst a shift in national priorities regarding the federal funding of higher education. Notably, QS ranked MIT No. 1 in the world for the 14th straight year, while U.S. News ranked MIT No. 2 in the nation for the 5th straight year.
This year, President Sally Kornbluth also added to the Institute’s slate of community-wide strategic initiatives, with new collaborative efforts focused on manufacturing, generative artificial intelligence, and quantum science and engineering. In addition, MIT opened several new buildings and spaces, hosted a campuswide art festival, and continued its tradition of bringing the latest in science and technology to the local community and to the world. Here are some of the top stories from around MIT over the past 12 months.
MIT collaboratives
President Kornbluth announced three new Institute-wide collaborative efforts designed to foster and support alliances that will take on global problems. The Initiative for New Manufacturing (INM) will work toward bolstering industry and creating jobs by driving innovation across vital manufacturing sectors. The MIT Generative AI Impact Consortium (MGAIC), a group of industry leaders and MIT researchers, aims to harness the power of generative artificial intelligence for the good of society. And the MIT Quantum Initiative (QMIT) will leverage quantum breakthroughs to drive the future of scientific and technological progress.
These missions join three announced last year — the Climate Project at MIT, the MIT Human Insight Collaborative (MITHIC), and the MIT Health and Life Sciences Collaborative (MIT HEALS).
Sharing the wonders of science and technology
This year saw the launch of MIT Learn, a dynamic AI-enabled website that hosts nearly 13,000 non-degree learning opportunities, making it easier for learners around the world to discover the courses and resources available on MIT’s various learning platforms.
The Institute also hosted the Cambridge Science Carnival, a hands-on event managed by the MIT Museum that drew approximately 20,000 attendees and featured more than 140 activities, demonstrations, and installations tied to the topics of science, technology, engineering, arts, and mathematics (STEAM).
Commencement
At Commencement, Hank Green urged MIT’s newest graduates to focus their work on the “everyday solvable problems of normal people,” even if it is not always the easiest or most obvious course of action. Green is a popular content creator and YouTuber whose work often focuses on science and STEAM issues, and who co-created the educational media company Complexly.
President Kornbluth challenged graduates to be “ambassadors” for the open-minded inquiry and collaborative work that marks everyday life at MIT.
Top accolades
In January, the White House bestowed national medals of science and technology — the country’s highest awards for scientists and engineers — on four MIT professors and an additional alumnus. Moderna, with deep MIT roots, was also recognized.
As in past years, MIT faculty, staff, and alumni were honored with election to the various national academies: the National Academy of Sciences, the National Academy of Engineering, the National Academy of Medicine, and the National Academy of Inventors.
Faculty member Carlo Ratti served as curator of the Venice Biennale’s 19th International Architecture Exhibition.
Members of MIT Video Productions won a New England Emmy Award for their short film on the art and science of hand-forged knives with master bladesmith Bob Kramer.
And at MIT, Dimitris Bertsimas, vice provost for open learning and a professor of operations research, won this year’s Killian Award, the Institute’s highest faculty honor.
New and refreshed spaces
In the heart of campus, the Edward and Joyce Linde Music Building became fully operational to start off the year. In celebration, the Institute hosted Artfinity, a vibrant multiweek exploration of art and ideas, with more than 80 free performing and visual arts events including a film festival, interactive augmented-reality art installations, a simulated lunar landing, and concerts by both student groups and internationally renowned musicians.
Over the summer, the “Outfinite” — the open space connecting Hockfield Court with Massachusetts Avenue — was officially named the L. Rafael Reif Innovation Corridor in honor of President Emeritus L. Rafael Reif, MIT’s 17th president.
And in October, the Undergraduate Advising Center’s bright new home opened in Building 11 along the Infinite Corridor, bringing a welcoming and functional destination for MIT undergraduate students within the Institute’s Main Group.
Student honors and awards
MIT undergraduates earned an impressive number of prestigious awards in 2025. Exceptional students were honored with Rhodes, Gates Cambridge, and Schwarzman scholarships, among others.
A number of MIT student-athletes also helped secure the Institute’s first NCAA national team championships: Women’s track and field won both the indoor and outdoor national championships, while women’s swimming and diving won a national title as well.
Also for the fifth year in a row, MIT students earned all five top spots at the Putnam Mathematical Competition.
Leadership transitions
Several senior administrative leaders took on new roles in 2025. Anantha Chandrakasan was named provost; Paula Hammond was named dean of the School of Engineering; Richard Locke was named dean of the MIT Sloan School of Management; Gaspare LoDuca was named vice president for information systems and technology and CIO; Evelyn Wang was named vice president for energy and climate; and David Darmofal was named vice chancellor for undergraduate and graduate education.
Additional new leadership transitions include: Ana Bakshi was named executive director of the Martin Trust Center for MIT Entrepreneurship; Fikile Brushett was named director of the David H. Koch School of Chemical Engineering Practice; Laurent Demanet was named co-director of the Center for Computational Science and Engineering; Rohit Karnik was named director of the Abdul Latif Jameel Water and Food Systems Lab; Usha Lee McFarling was named director of the Knight Science Journalism Program; C. Cem Tasan was named director of the Materials Research Laboratory; and Jessika Trancik was named director of the Sociotechnical Systems Research Center.
Remembering those we lost
Among MIT community members who died this year were David Baltimore, Juanita Battle, Harvey Kent Bowen, Stanley Fischer, Frederick Greene, Lee Grodzins, John Joannopoulos, Keith Johnson, Daniel Kleppner, Earle Lomon, Nuno Loureiro, Victor K. McElheny, David Schmittlein, Anthony Sinskey, Peter Temin, Barry Vercoe, Rainer Weiss, Alan Whitney, and Ioannis Yannas.
In case you missed it…
Additional top stories from around the Institute in 2025 include a description of the environmental and sustainability implications of generative AI tech and applications; the story of how an MIT professor introduced hundreds of thousands of students to neuroscience with his classic textbook; a look at how MIT entrepreneurs are using AI; a roundup of new books by MIT faculty and staff; the selection of an MIT alumnus as a NASA astronaut candidate; the signing of an MIT student-athlete by the Los Angeles Dodgers; and behind the scenes with MIT students who cracked a longstanding egg dilemma.
MIT’s top research stories of 2025
Concrete batteries, AI-developed antibiotics, the ozone’s recovery, and a more natural bionic knee were some of the most popular topics on MIT News.
In 2025, MIT’s research community had another prolific year filled with exciting scientific and technological advances. To celebrate the achievements of the past 12 months, MIT News highlights some of our most-read stories from this year.
One of the biggest risk factors for developing liver cancer is a high-fat diet. A new study from MIT reveals how a fatty diet rewires liver cells and makes them more prone to becoming cancerous.
The researchers found that in response to a high-fat diet, mature hepatocytes in the liver revert to an immature, stem-cell-like state. This helps them to survive the stressful conditions created by the high-fat diet, but in the long term, it makes them more likely to become cancerous.
“If cells are forced to deal with a stressor, such as a high-fat diet, over and over again, they will do things that will help them survive, but at the risk of increased susceptibility to tumorigenesis,” says Alex K. Shalek, director of the Institute for Medical Engineering and Sciences (IMES), the J. W. Kieckhefer Professor in IMES and the Department of Chemistry, and a member of the Koch Institute for Integrative Cancer Research at MIT, the Ragon Institute of MGH, MIT, and Harvard, and the Broad Institute of MIT and Harvard.
The researchers also identified several transcription factors that appear to control this reversion, which they believe could make good targets for drugs to help prevent tumor development in high-risk patients.
Shalek; Ömer Yilmaz, an MIT associate professor of biology and a member of the Koch Institute; and Wolfram Goessling, co-director of the Harvard-MIT Program in Health Sciences and Technology, are the senior authors of the study, which appears today in Cell. MIT graduate student Constantine Tzouanas, former MIT postdoc Jessica Shay, and Massachusetts General Brigham postdoc Marc Sherman are the co-first authors of the paper.
Cell reversion
A high-fat diet can lead to inflammation and buildup of fat in the liver, a condition known as steatotic liver disease. This disease, which can also be caused by a wide variety of long-term metabolic stresses such as high alcohol consumption, may lead to liver cirrhosis, liver failure, and eventually cancer.
In the new study, the researchers wanted to figure out just what happens in cells of the liver when exposed to a high-fat diet — in particular, which genes get turned on or off as the liver responds to this long-term stress.
To do that, the researchers fed mice a high-fat diet and performed single-cell RNA-sequencing of their liver cells at key timepoints as liver disease progressed. This allowed them to monitor gene expression changes that occurred as the mice advanced through liver inflammation, to tissue scarring and eventually cancer.
In the early stages of this progression, the researchers found that the high-fat diet prompted hepatocytes, the most abundant cell type in the liver, to turn on genes that help them survive the stressful environment. These include genes that make them more resistant to apoptosis and more likely to proliferate.
At the same time, those cells began to turn off some of the genes that are critical for normal hepatocyte function, including metabolic enzymes and secreted proteins.
“This really looks like a trade-off, prioritizing what’s good for the individual cell to stay alive in a stressful environment, at the expense of what the collective tissue should be doing,” Tzouanas says.
Some of these changes happened right away, while others, including a decline in metabolic enzyme production, shifted more gradually over a longer period. Nearly all of the mice on a high-fat diet ended up developing liver cancer by the end of the study.
When cells are in a more immature state, it appears that they are more likely to become cancerous if a mutation occurs later on, the researchers say.
“These cells have already turned on the same genes that they’re going to need to become cancerous. They’ve already shifted away from the mature identity that would otherwise drag down their ability to proliferate,” Tzouanas says. “Once a cell picks up the wrong mutation, then it’s really off to the races and they’ve already gotten a head start on some of those hallmarks of cancer.”
The researchers also identified several genes that appear to orchestrate the changes that revert hepatocytes to an immature state. While this study was going on, a drug targeting one of these genes (thyroid hormone receptor) was approved to treat a severe form of steatotic liver disease called MASH fibrosis. And, a drug activating an enzyme that they identified (HMGCS2) is now in clinical trials to treat steatotic liver disease.
Another possible target that the new study revealed is a transcription factor called SOX4, which is normally only active during fetal development and in a small number of adult tissues (but not the liver).
Cancer progression
After the researchers identified these changes in mice, they sought to discover if something similar might be happening in human patients with liver disease. To do that, they analyzed data from liver tissue samples removed from patients at different stages of the disease. They also looked at tissue from people who had liver disease but had not yet developed cancer.
Those studies revealed a similar pattern to what the researchers had seen in mice: The expression of genes needed for normal liver function decreased over time, while genes associated with immature states went up. Additionally, the researchers found that they could accurately predict patients’ survival outcomes based on an analysis of their gene expression patterns.
“Patients who had higher expression of these pro-cell-survival genes that are turned on with high-fat diet survived for less time after tumors developed,” Tzouanas says. “And if a patient has lower expression of genes that support the functions that the liver normally performs, they also survive for less time.”
While the mice in this study developed cancer within a year or so, the researchers estimate that in humans, the process likely extends over a longer span, possibly around 20 years. That will vary between individuals depending on their diet and other risk factors such as alcohol consumption or viral infections, which can also promote liver cells’ reversion to an immature state.
The researchers now plan to investigate whether any of the changes that occur in response to a high-fat diet can be reversed by going back to a normal diet, or by taking weight-loss drugs such as GLP-1 agonists. They also hope to study whether any of the transcription factors they identified could make good targets for drugs that could help prevent diseased liver tissue from becoming cancerous.
“We now have all these new molecular targets and a better understanding of what is underlying the biology, which could give us new angles to improve outcomes for patients,” Shalek says.
The research was funded, in part, by a Fannie and John Hertz Foundation Fellowship, a National Science Foundation Graduate Research Fellowship, the National Institutes of Health, and the MIT Stem Cell Initiative through Foundation MIT.
Anything-goes “anyons” may be at the root of surprising quantum experiments
MIT physicists say these quasiparticles may explain how superconductivity and magnetism can coexist in certain materials.
In the past year, two separate experiments in two different materials captured the same confounding scenario: the coexistence of superconductivity and magnetism. Scientists had assumed that these two quantum states are mutually exclusive; the presence of one should inherently destroy the other.
Now, theoretical physicists at MIT have an explanation for how this Jekyll-and-Hyde duality could emerge. In a paper appearing today in the Proceedings of the National Academy of Sciences, the team proposes that under certain conditions, a magnetic material’s electrons could splinter into fractions of themselves to form quasiparticles known as “anyons.” In certain fractions, the quasiparticles should flow together without friction, similar to how regular electrons can pair up to flow in conventional superconductors.
If the team’s scenario is correct, it would introduce an entirely new form of superconductivity — one that persists in the presence of magnetism and involves a supercurrent of exotic anyons rather than everyday electrons.
“Many more experiments are needed before one can declare victory,” says study lead author Senthil Todadri, the William and Emma Rogers Professor of Physics at MIT. “But this theory is very promising and shows that there can be new ways in which the phenomenon of superconductivity can arise.”
What’s more, if the idea of superconducting anyons can be confirmed and controlled in other materials, it could provide a new way to design stable qubits — atomic-scale “bits” that interact quantum mechanically to process information and carry out complex computations far more efficiently than conventional computer bits.
“These theoretical ideas, if they pan out, could make this dream one tiny step within reach,” Todadri says.
The study’s co-author is MIT physics graduate student Zhengyan Darius Shi.
“Anything goes”
Superconductivity and magnetism are macroscopic states that arise from the behavior of electrons. A material is a magnet when electrons in its atomic structure have roughly the same spin, or orbital motion, creating a collective pull in the form of a magnetic field within the material as a whole. A material is a superconductor when electrons passing through it, in the form of an electrical current, can couple up in “Cooper pairs.” In this teamed-up state, electrons can glide through a material without friction, rather than randomly knocking against its atomic latticework.
For decades, it was thought that superconductivity and magnetism should not co-exist; superconductivity is a delicate state, and any magnetic field can easily sever the bonds between Cooper pairs. But earlier this year, two separate experiments proved otherwise. In the first experiment, MIT’s Long Ju and his colleagues discovered superconductivity and magnetism in rhombohedral graphene — a synthesized material made from four or five graphene layers.
“It was electrifying,” says Todadri, who recalls hearing Ju present the results at a conference. “It set the place alive. And it introduced more questions as to how this could be possible.”
Shortly after, a second team reported similar dual states in the semiconducting crystal molybdenum ditelluride (MoTe2). Interestingly, the conditions in which MoTe2 becomes superconductive happen to be the same conditions in which the material exhibits an exotic “fractional quantum anomalous Hall effect,” or FQAH — a phenomenon in which any electron passing through the material should split into fractions of itself. These fractional quasiparticles are known as “anyons.”
Anyons are entirely different from the two main types of particles that make up the universe: bosons and fermions. Bosons are the extroverted particle type, as they prefer to be together and travel in packs. The photon is the classic example of a boson. In contrast, fermions prefer to keep to themselves, and repel each other if they are too near. Electrons, protons, and neutrons are examples of fermions. Together, bosons and fermions are the two major kingdoms of particles that make up matter in the three-dimensional universe.
Anyons, in contrast, exist only in two-dimensional space. This third type of particle was first predicted in the 1980s, and its name was coined by MIT’s Frank Wilczek, who meant it as a tongue-in-cheek reference to the idea that, in terms of the particle’s behavior, “anything goes.”
A few years after anyons were first predicted, physicists such as Robert Laughlin PhD ’79, Wilczek, and others also theorized that, in the presence of magnetism, the quasiparticles should be able to superconduct.
“People knew that magnetism was usually needed to get anyons to superconduct, and they looked for magnetism in many superconducting materials,” Todadri says. “But superconductivity and magnetism typically do not occur together. So then they discarded the idea.”
But with the recent discovery that the two states can, in fact, peacefully coexist in certain materials, and in MoTe2 in particular, Todadri wondered: Could the old theory, and superconducting anyons, be at play?
Moving past frustration
Todadri and Shi set out to answer that question theoretically, building on their own recent work. In their new study, the team worked out the conditions under which superconducting anyons could emerge in a two-dimensional material. To do so, they applied equations of quantum field theory, which describes how interactions at the quantum scale, such as the level of individual anyons, can give rise to macroscopic quantum states, such as superconductivity. The exercise was not an intuitive one, since anyons are known to stubbornly resist moving, let alone superconducting, together.
“When you have anyons in the system, what happens is each anyon may try to move, but it’s frustrated by the presence of other anyons,” Todadri explains. “This frustration happens even if the anyons are extremely far away from each other. And that’s a purely quantum mechanical effect.”
Even so, the team looked for conditions in which anyons might break out of this frustration and move as one macroscopic fluid. Anyons are formed when electrons splinter into fractions of themselves under certain conditions in two-dimensional, single-atom-thin materials, such as MoTe2. Scientists had previously observed that MoTe2 exhibits the FQAH, in which electrons fractionalize, without the help of an external magnetic field.
Todadri and Shi took MoTe2 as a starting point for their theoretical work. They modeled the conditions in which the FQAH phenomenon emerged in MoTe2, and then looked to see how electrons would splinter, and what types of anyons would be produced, as they theoretically increased the number of electrons in the material.
They noted that, depending on the material’s electron density, two types of anyons can form: anyons with either 1/3 or 2/3 the charge of an electron. They then applied equations of quantum field theory to work out how either of the two anyon types would interact, and found that when the anyons are mostly of the 1/3 flavor, they are predictably frustrated, and their movement leads to ordinary metallic conduction. But when anyons are mostly of the 2/3 flavor, this particular fraction encourages the normally stodgy anyons to instead move collectively to form a superconductor, similar to how electrons can pair up and flow in conventional superconductors.
“These anyons break out of their frustration and can move without friction,” Todadri says. “The amazing thing is, this is an entirely different mechanism by which a superconductor can form, but in a way that can be described as Cooper pairs in any other system.”
Their work revealed that superconducting anyons can emerge at certain electron densities. What’s more, they found that when superconducting anyons first emerge, they do so in a totally new pattern of swirling supercurrents that spontaneously appear in random locations throughout the material. This behavior is distinct from conventional superconductors and is an exotic state that experimentalists can look for as a way to confirm the team’s theory. If their theory is correct, it would introduce a new form of superconductivity, through the quantum interactions of anyons.
“If our anyon-based explanation is what is happening in MoTe2, it opens the door to the study of a new kind of quantum matter which may be called ‘anyonic quantum matter,’” Todadri says. “This will be a new chapter in quantum physics.”
This research was supported, in part, by the National Science Foundation.
Prefrontal cortex reaches back into the brain to shape how other regions function
Research illustrates how areas within the brain’s executive control center tailor messages in specific circuits with other brain regions to influence them with information about behavior and feelings.
Vision shapes behavior and, a new study by MIT neuroscientists finds, behavior and internal states shape vision. The new research, published Nov. 25 in Neuron, finds in mice that via specific circuits, the brain’s executive control center, the prefrontal cortex, sends tailored messages to regions governing vision and motion to ensure that their work is shaped by contexts such as the mouse’s level of arousal and whether they are on the move.
“That’s the major conclusion of this paper: There are targeted projections for targeted impact,” says senior author Mriganka Sur, the Paul and Lilah Newton Professor in The Picower Institute for Learning and Memory and MIT’s Department of Brain and Cognitive Sciences.
Neuroscientists, including Sur’s next-door office neighbor at MIT, Earl K. Miller, have long suggested that the prefrontal cortex (PFC) biases the work of regions further back in the cortex. Tracing of anatomical circuits supports this idea. But in the new study, lead author and Sur Lab postdoc Sofie Ährlund-Richter sought to determine whether the PFC is broadcasting a generic signal or customizes the information it conveys for different downstream regions. She also wanted to take a fresh look at which neurons the PFC talks to, and what impact the information has on how those regions function.
Ährlund-Richter and Sur’s team uncovered several new revelations. One was that the two prefrontal areas they focused on, the orbitofrontal cortex (ORB) and the anterior cingulate area (ACA), selectively convey information about arousal and motion to the two downstream regions they studied, the primary visual cortex (VISp) and the primary motor cortex (MOp), to achieve distinct ends. For instance, the more aroused a mouse was, the more ACA prompted VISp to sharpen the focus of visual information it represented, but ORB only chimed in if arousal was very high, and then its input seemed to reduce the sharpness of visual encoding. Ährlund-Richter speculates that as arousal increases, ACA may help the visual cortex focus on resolving what might be salient in what it’s seeing, while ORB might be suppressing focus on unimportant distractors.
“These two PFC subregions are kind of balancing each other,” Ährlund-Richter says. “While one will enhance stimuli that might be more uncertain or more difficult to detect, the other one kind of dampens strong stimuli that might be irrelevant.”
In the study, Ährlund-Richter performed detailed anatomical tracings of the circuits that ACA and ORB forge with VISp and MOp to map their connections. In other experiments, mice were free to run on a wheel as they also watched both structured images or naturalistic movies at varying levels of contrast. Sometimes the mice received little air puffs that made them more aroused. Meanwhile, the neuroscientists tracked the activity of neurons in ACA, ORB, VISp, and MOp. In particular, they eavesdropped on the information flowing through the neural projections (or “axons”) that extended from the prefrontal to the posterior regions.
The anatomical tracings showed that, consistent with some prior studies, the ACA and ORB each connect to many different types of cells in the target regions, not just one cell type. But they do so with distinct geographies. In VISp, for instance, ACA tapped into layer 6, whereas ORB tapped into layer 5.
In their analysis of the transmitted information and neural activity, the scientists could discern several trends. ACA neurons conveyed more visual information than the ORB neurons and were more sensitive to changes in contrast. ACA neurons also scaled with arousal state, while ORB neurons seemed to only care if arousal crossed a high threshold. Meanwhile, when “talking” to MOp, the ACA and ORB each conveyed information about running speed, but with VISp, the regions only conveyed whether the mouse was moving or not. Finally, ACA and ORB also conveyed arousal state and a trickle of visual information to MOp.
To understand what effect this information flow had on visual function, the scientists sometimes blocked the circuits that ACA and ORB forged with VISp to see how that changed what VISp neurons did. That’s how they found that ACA and ORB affected visual encoding in specific and opposite ways, based on the mouse’s arousal level and movement.
“Our data support a model of PFC feedback that is specialized at both the level of PFC subregions and their targets, enabling each region to selectively shape target-specific cortical activity rather than modulating it globally,” the authors wrote in Neuron.
In addition to Sur and Ährlund-Richter, the paper’s other authors are Yuma Osako, Kyle R. Jenks, Emma Odom, Haoyang Huang, and Don B. Arnold.
Funding for the study came from a Wenner-Gren foundations Postdoctoral Fellowship, the National Institutes of Health, and the Freedom Together Foundation.
“Wait, we have the tech skills to build that”
From robotics to apps like “NerdXing,” senior Julianna Schneider is building technologies to solve problems in her community.
Students can take many possible routes through MIT’s curriculum, which can zigzag through different departments, linking classes and disciplines in unexpected ways. With so many options, charting an academic path can be overwhelming, but a new tool called NerdXing is here to help.
The brainchild of senior Julianna Schneider and other students in the MIT Schwarzman College of Computing Undergraduate Advisory Group (UAG), NerdXing lets students search for a class and see all the other classes that past students went on to take, including options that are off the beaten track.
“I hope that NerdXing will democratize course knowledge for everyone,” Schneider says. “I hope that for anyone who's a freshman and maybe hasn't picked their major yet, that they can go to NerdXing and start with a class that they would maybe never consider — and then discover that, ‘Oh wait, this is perfect for this really particular thing I want to study.’”
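The lookup NerdXing performs, surfacing the classes students went on to take after a given one, can be sketched as a simple pass over enrollment histories. Everything below is illustrative: the course numbers, data layout, and function name are invented for this sketch and are not NerdXing's actual implementation or data.

```python
from collections import Counter

# Toy enrollment histories, each in chronological order. Course numbers and
# the data layout are hypothetical; NerdXing's real data model is not public.
enrollments = {
    "student_a": ["6.100A", "6.1010", "6.3900", "9.66"],
    "student_b": ["6.100A", "18.06", "6.3900"],
    "student_c": ["6.100A", "21M.301"],
}

def classes_taken_after(course, records):
    """Count the classes students went on to take after a given class."""
    later = Counter()
    for history in records.values():
        if course in history:
            start = history.index(course) + 1
            later.update(history[start:])
    return later

# Classes most often taken after 6.100A in this toy data set:
print(classes_taken_after("6.100A", enrollments).most_common(2))
# → [('6.3900', 2), ('6.1010', 1)]
```

A real system would also weight by cohort size and surface rare but relevant paths, but the core idea is this kind of aggregation.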
As a student double-majoring in artificial intelligence and decision-making and in mathematics, and doing research in the Biomimetic Robotics Laboratory in the Department of Mechanical Engineering, Schneider knows the benefits of interdisciplinary studies. It’s a part of the reason why she joined the UAG, which advises the MIT Schwarzman College of Computing’s leadership as it advances education and research at the intersections between computing, engineering, the arts, and more.
Through all of her activities, Schneider seeks to make people’s lives better through technology.
“This process of finding a problem in my community and then finding the right technology to solve that — that sort of approach and that framework is what guides all the things I do,” Schneider says. “And even in robotics, the things that I care about are guided by the sort of skills that I think we need to develop to be able to have meaningful applications.”
From Albania to MIT
Before she ever touched a robot or wrote code, Schneider was an accomplished young classical pianist in Albania. When she discovered her passion for robotics at age 13, she applied some of the skills she had learned while playing piano.
“I think on some fundamental level, when I was a pianist, I thought constantly about my motor dynamics as a human being, and how I execute really complex skills but do it over and over again at the top of my ability,” Schneider says. “When it came to robotics, I was building these robotic arms that also had to operate at the top of their ability every time and do really complex tasks. It felt kind of similar to me, like a fun crossover.”
Schneider joined her high school’s robotics team as a middle schooler, and she was so immediately enamored that she ended up taking over most of the coding and building of the team’s robot. She went on to win 14 regional and national awards across the three teams she led throughout middle and high school. It was clear to her that she’d found her calling.
NerdXing wasn’t Schneider’s first experience building new technology. At just 16, she built an app meant to connect English-speaking volunteers from her international school in Tirana, Albania, to local charities that only posted jobs in Albanian. By last year, the platform, called VoluntYOU, had 18 ambassadors across four continents. It has enabled volunteers to give out more than 2,000 burritos in Reno, Nevada; register hundreds of signatures to support women’s rights legislation in Albania; and help with administering Covid-19 vaccines to more than 1,200 individuals a day in Italy.
Schneider says her experience at an international school encouraged her to recognize problems and solutions all around her.
“When I enter a new community and I can immediately be like, ‘Oh wait, if we had this tool, that would be so cool and that would help all these people,’ I think that’s just a derivative of having grown up in a place where you hear about everyone’s super different life experiences,” she says.
Schneider describes NerdXing as a continuation of many of the skills she picked up while building VoluntYOU.
“They were both motivated by seeing a challenge where I thought, ‘Wait, we have the tech skills to build that. This is something that I can envision the solution to.’ And then I wanted to actually go and make that a reality,” Schneider says.
Robotics with a positive impact
At MIT, Schneider started working in the Biomimetic Robotics Laboratory of Professor Sangbae Kim, where she has now participated in three research projects, one of which she’s co-authoring a paper on. She’s part of a team that studies how robots move, including the famous back-flipping mini cheetah, to see how they could complement humans in high-stakes scenarios.
Most of her work has revolved around crafting controllers, including one hybrid-learning and model-based controller that is well-suited to robots with limited onboard computing capacity. It would allow the robot to be used in regions with less access to technology.
“It’s not just doing technology for technology's sake, but because it will bridge out into the world and make a positive difference. I think legged robotics have some of the best potential to actually be a robotic partner to human beings in the scenarios that are most high-stakes,” Schneider says.
Schneider hopes to further robotic capabilities so she can find applications that will service communities around the world. One of her goals is to help create tools that allow a surgeon to operate on a patient a long distance away.
To take a break from academics, Schneider has channeled her love of the arts into MIT’s vibrant social dancing scene. This year, she’s especially excited about country line dancing events where the music comes on and students have to guess the choreography.
“I think it's a really fun way to make friends and to connect with the community,” she says.
Post-COP30, more aggressive policies needed to cap global warming at 1.5 C
Global Change Outlook report for 2025 shows how accelerated action can reduce climate risks and improve sustainability outcomes, while highlighting potential geopolitical hurdles.
The latest United Nations Climate Change Conference (COP30) concluded in November without a roadmap to phase out fossil fuels and without significant progress in strengthening national pledges to reduce climate-altering greenhouse gas emissions. In aggregate, today’s climate policies remain far too unambitious to meet the Paris Agreement’s goal of capping global warming at 1.5 degrees Celsius, setting the world on course to experience more frequent and intense storms, flooding, droughts, wildfires, and other climate impacts. A global policy regime aligned with the 1.5 C target would almost certainly reduce the severity of those impacts.
In the “2025 Global Change Outlook,” researchers at the MIT Center for Sustainability Science and Strategy (CS3) compare the consequences of these two approaches to climate policy through modeled projections of critical natural and societal systems under two scenarios. The Current Trends scenario represents the researchers’ assessment of current measures for reducing greenhouse gas (GHG) emissions; the Accelerated Actions scenario is a credible pathway to stabilizing the climate at a global mean surface temperature of 1.5 C above preindustrial levels, in which countries impose more aggressive GHG emissions-reduction targets.
By quantifying the risks posed by today’s climate policies — and the extent to which accelerated climate action aligned with the 1.5 C goal could reduce them — the “Global Change Outlook” aims to clarify what’s at stake for environments and economies around the world. Here, we summarize the report’s key findings at the global level; regional details can also be accessed in several sections and through MIT CS3’s interactive global visualization tool.
Emerging headwinds for global climate action
Projections under Current Trends show higher GHG emissions than in our previous 2023 outlook, indicating reduced action on GHG emissions mitigation in the upcoming decade. The difference, roughly equivalent to the annual emissions from Brazil or Japan, is driven by current geopolitical events.
Additional analysis in this report indicates that global GHG emissions in 2050 could be 10 percent higher than they would be under Current Trends if regional rivalries triggered by U.S. tariff policy prompt other regions to weaken their climate regulations. In that case, the world would see virtually no emissions reduction in the next 25 years.
Energy and electricity projections
Between 2025 and 2050, global energy consumption rises by 17 percent under Current Trends, with a nearly nine-fold increase in wind and solar. Under Accelerated Actions, global energy consumption declines by 16 percent, with a nearly 13-fold increase in wind and solar, driven by improvements in energy efficiency, wider use of electricity, and demand response. In both Current Trends and Accelerated Actions, global electricity consumption increases substantially (by 90 percent and 100 percent, respectively), with generation from low-carbon sources becoming a dominant source of power, though Accelerated Actions has a much larger share of renewables.
“Achieving long-term climate stabilization goals will require more ambitious policy measures that reduce fossil-fuel dependence and accelerate the energy transition toward low-carbon sources in all regions of the world. Our Accelerated Actions scenario provides a pathway for scaling up global climate ambition,” says MIT CS3 Deputy Director Sergey Paltsev, co-lead author of the report.
Greenhouse gas emissions and climate projections
Under Current Trends, global anthropogenic (human-caused) GHG emissions decline by 10 percent between 2025 and 2050, but start to rise again later in the century; under Accelerated Actions, however, they fall by 60 percent by 2050. Of the two scenarios, only the latter could put the world on track to achieve long-term climate stabilization.
Median projections for global warming reach 1.79, 2.74, and 3.72 degrees C by 2050, 2100, and 2150, respectively, relative to the average global mean surface temperature (GMST) for the years 1850-1900, under Current Trends, and 1.62, 1.56, and 1.50 C under Accelerated Actions. Median projections for global precipitation show increases from 2025 levels of 0.04, 0.11, and 0.18 millimeters per day in 2050, 2100, and 2150 under Current Trends, and 0.03, 0.04, and 0.03 mm/day for those years under Accelerated Actions.
“Our projections demonstrate that aggressive cuts in GHG emissions can lead to substantial reductions in the upward trends of GMST, as well as global precipitation,” says CS3 Deputy Director C. Adam Schlosser, co-lead author of the outlook. “These reductions to both climate warming and acceleration of the global hydrologic cycle lower the risks of damaging impacts, particularly toward the latter half of this century.”
Implications for sustainability
The report’s modeled projections imply significantly different risk levels under the two scenarios for water availability, biodiversity, air quality, human health, economic well-being, and other sustainability indicators.
Among the key findings: Policies that align with Accelerated Actions could yield substantial co-benefits for water availability, biodiversity, air quality, and health. For example, combining Accelerated Actions-aligned climate policies with biodiversity targets, or with air-quality targets, could achieve biodiversity and air quality/health goals more efficiently and cost-effectively than a more siloed approach. The outlook’s analysis of the global economy under Current Trends suggests that decision-makers need to account for climate impacts outside their home region and the resilience of global supply chains.
Finally, CS3’s new data-visualization platform provides efficient, screening-level mapping of current and future climate, socioeconomic, and demographic-related conditions and changes — including global mapping for many of the model outputs featured in this report.
“Our comparison of outcomes under Current Trends and Accelerated Actions scenarios highlights the risks of remaining on the world’s current emissions trajectory and the benefits of pursuing a much more aggressive strategy,” says CS3 Director Noelle Selin, a co-author of the report and a professor in the Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences at MIT. “We hope that our risk-benefit analysis will help inform decision-makers in government, industry, academia, and civil society as they confront sustainability-relevant challenges.”
A “scientific sandbox” lets researchers explore the evolution of vision systems
The AI-powered tool could inform the design of better sensors and cameras for robots or autonomous vehicles.
Why did humans evolve the eyes we have today?
While scientists can’t go back in time to study the environmental pressures that shaped the evolution of the diverse vision systems that exist in nature, a new computational framework developed by MIT researchers allows them to explore this evolution in artificial intelligence agents.
The framework they developed, in which embodied AI agents evolve eyes and learn to see over many generations, is like a “scientific sandbox” that allows researchers to recreate different evolutionary trees. The user does this by changing the structure of the world and the tasks AI agents complete, such as finding food or telling objects apart.
This allows them to study why one animal may have evolved simple, light-sensitive patches as eyes, while another has complex, camera-type eyes.
The researchers’ experiments with this framework showcase how tasks drove eye evolution in the agents. For instance, they found that navigation tasks often led to the evolution of compound eyes with many individual units, like the eyes of insects and crustaceans.
On the other hand, if agents focused on object discrimination, they were more likely to evolve camera-type eyes with irises and retinas.
This framework could enable scientists to probe “what-if” questions about vision systems that are difficult to study experimentally. It could also guide the design of novel sensors and cameras for robots, drones, and wearable devices that balance performance with real-world constraints like energy efficiency and manufacturability.
“While we can never go back and figure out every detail of how evolution took place, in this work we’ve created an environment where we can, in a sense, recreate evolution and probe the environment in all these different ways. This method of doing science opens the door to a lot of possibilities,” says Kushagra Tiwary, a graduate student at the MIT Media Lab and co-lead author of a paper on this research.
He is joined on the paper by co-lead author and fellow graduate student Aaron Young; graduate student Tzofi Klinghoffer; former postdoc Akshat Dave, who is now an assistant professor at Stony Brook University; Tomaso Poggio, the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute, and co-director of the Center for Brains, Minds, and Machines; co-senior authors Brian Cheung, a postdoc in the Center for Brains, Minds, and Machines and an incoming assistant professor at the University of California San Francisco; and Ramesh Raskar, associate professor of media arts and sciences and leader of the Camera Culture Group at MIT; as well as others at Rice University and Lund University. The research appears today in Science Advances.
Building a scientific sandbox
The paper began as a conversation among the researchers about discovering new vision systems that could be useful in different fields, like robotics. To test their “what-if” questions, the researchers decided to use AI to explore the many evolutionary possibilities.
“What-if questions inspired me when I was growing up to study science. With AI, we have a unique opportunity to create these embodied agents that allow us to ask the kinds of questions that would usually be impossible to answer,” Tiwary says.
To build this evolutionary sandbox, the researchers took all the elements of a camera, like the sensors, lenses, apertures, and processors, and converted them into parameters that an embodied AI agent could learn.
They used those building blocks as the starting point for an algorithmic learning mechanism an agent would use as it evolved eyes over time.
“We couldn’t simulate the entire universe atom-by-atom. It was challenging to determine which ingredients we needed, which ingredients we didn’t need, and how to allocate resources over those different elements,” Cheung says.
In their framework, this evolutionary algorithm can choose which elements to evolve based on the constraints of the environment and the task of the agent.
Each environment has a single task, such as navigation, food identification, or prey tracking, designed to mimic real visual tasks animals must overcome to survive. The agents start with a single photoreceptor that looks out at the world and an associated neural network model that processes visual information.
Then, over each agent’s lifetime, it is trained using reinforcement learning, a trial-and-error technique where the agent is rewarded for accomplishing the goal of its task. The environment also incorporates constraints, like a certain number of pixels for an agent’s visual sensors.
“These constraints drive the design process, the same way we have physical constraints in our world, like the physics of light, that have driven the design of our own eyes,” Tiwary says.
Over many generations, agents evolve different elements of vision systems that maximize rewards.
Their framework uses a genetic encoding mechanism to computationally mimic evolution, where individual genes mutate to control an agent’s development.
For instance, morphological genes capture how the agent views the environment and control eye placement; optical genes determine how the eye interacts with light and dictate the number of photoreceptors; and neural genes control the learning capacity of the agents.
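The three gene families above can be pictured as a small mutable genome passed from parent to offspring. This is a hedged sketch only: the field names, mutation rates, and value ranges are invented for illustration and do not reflect the authors' actual genetic encoding.

```python
import random

# A toy genome mirroring the three gene families described in the article:
# morphological (eye placement), optical (photoreceptor count), and neural
# (learning capacity). All fields and mutation rules here are hypothetical.
def mutate(genome, rate=0.3, rng=random):
    child = dict(genome)  # copy so the parent genome is left unchanged
    if rng.random() < rate:  # morphological gene: nudge eye placement
        child["eye_angle_deg"] = (child["eye_angle_deg"] + rng.gauss(0, 10)) % 360
    if rng.random() < rate:  # optical gene: add or remove a photoreceptor
        child["photoreceptors"] = max(1, child["photoreceptors"] + rng.choice([-1, 1]))
    if rng.random() < rate:  # neural gene: grow or shrink network capacity
        child["hidden_units"] = max(4, child["hidden_units"] + rng.choice([-4, 4]))
    return child

# Agents start from a single photoreceptor, as in the paper's setup.
ancestor = {"eye_angle_deg": 0.0, "photoreceptors": 1, "hidden_units": 8}
offspring = [mutate(ancestor) for _ in range(10)]  # one generation
```

In the real framework, selection then keeps the offspring whose eyes earn the most reward on the task, and the cycle repeats over many generations.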
Testing hypotheses
When the researchers set up experiments in this framework, they found that tasks had a major influence on the vision systems the agents evolved.
For instance, agents that were focused on navigation tasks developed eyes designed to maximize spatial awareness through low-resolution sensing, while agents tasked with detecting objects developed eyes focused more on frontal acuity, rather than peripheral vision.
Another experiment indicated that a bigger brain isn’t always better when it comes to processing visual information. Only so much visual information can go into the system at a time, based on physical constraints like the number of photoreceptors in the eyes.
“At some point a bigger brain doesn’t help the agents at all, and in nature that would be a waste of resources,” Cheung says.
In the future, the researchers want to use this simulator to explore the best vision systems for specific applications, which could help scientists develop task-specific sensors and cameras. They also want to integrate LLMs into their framework to make it easier for users to ask “what-if” questions and study additional possibilities.
“There’s a real benefit that comes from asking questions in a more imaginative way. I hope this inspires others to create larger frameworks, where instead of focusing on narrow questions that cover a specific area, they are looking to answer questions with a much wider scope,” Cheung says.
This work was supported, in part, by the Center for Brains, Minds, and Machines and the Defense Advanced Research Projects Agency (DARPA) Mathematics for the Discovery of Algorithms and Architectures (DIAL) program.
New study suggests a way to rejuvenate the immune system
Stimulating the liver to produce some of the signals of the thymus can reverse age-related declines in T-cell populations and enhance response to vaccination.
As people age, their immune system function declines. T cell populations become smaller and can’t react to pathogens as quickly, making people more susceptible to a variety of infections.
To try to overcome that decline, researchers at MIT and the Broad Institute have found a way to temporarily program cells in the liver to improve T-cell function. This reprogramming can compensate for the age-related decline of the thymus, where T cell maturation normally occurs.
Using mRNA to deliver three key factors that usually promote T-cell survival, the researchers were able to rejuvenate the immune systems of mice. Aged mice that received the treatment showed much larger and more diverse T cell populations in response to vaccination, and they also responded better to cancer immunotherapy treatments.
If developed for use in patients, this type of treatment could help people lead healthier lives as they age, the researchers say.
“If we can restore something essential like the immune system, hopefully we can help people stay free of disease for a longer span of their life,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, who has joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering.
Zhang, who is also an investigator at the McGovern Institute for Brain Research at MIT, a core institute member at the Broad Institute of MIT and Harvard, an investigator in the Howard Hughes Medical Institute, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, is the senior author of the new study. Former MIT postdoc Mirco Friedrich is the lead author of the paper, which appears today in Nature.
A temporary factory
The thymus, a small organ located in front of the heart, plays a critical role in T-cell development. Within the thymus, immature T cells go through a checkpoint process that ensures a diverse repertoire of T cells. The thymus also secretes cytokines and growth factors that help T cells to survive.
However, starting in early adulthood, the thymus begins to shrink. This process, known as thymic involution, leads to a decline in the production of new T cells. By the age of approximately 75, the thymus is greatly reduced.
“As we get older, the immune system begins to decline. We wanted to think about how can we maintain this kind of immune protection for a longer period of time, and that's what led us to think about what we can do to boost immunity,” Friedrich says.
Previous work on rejuvenating the immune system has focused on delivering T cell growth factors into the bloodstream, but that can have harmful side effects. Researchers are also exploring the possibility of using transplanted stem cells to help regrow functional tissue in the thymus.
The MIT team took a different approach: They wanted to see if they could create a temporary “factory” in the body that would generate the T-cell-stimulating signals that are normally produced by the thymus.
“Our approach is more of a synthetic approach,” Zhang says. “We're engineering the body to mimic thymic factor secretion.”
For their factory location, they settled on the liver, for several reasons. First, the liver has a high capacity for producing proteins, even in old age. Also, it’s easier to deliver mRNA to the liver than to most other organs of the body. The liver was also an appealing target because all of the body’s circulating blood has to flow through it, including T cells.
To create their factory, the researchers identified three immune cues that are important for T-cell maturation. They encoded these three factors into mRNA sequences that could be delivered by lipid nanoparticles. When injected into the bloodstream, these particles accumulate in the liver and the mRNA is taken up by hepatocytes, which begin to manufacture the proteins encoded by the mRNA.
The factors that the researchers delivered are DLL1, FLT-3, and IL-7, which help immature progenitor T cells mature into fully differentiated T cells.
Immune rejuvenation
Tests in mice revealed a variety of beneficial effects. First, the researchers injected the mRNA particles into 18-month-old mice, equivalent to humans in their 50s. Because mRNA is short-lived, the researchers gave the mice multiple injections over four weeks to maintain steady production of the factors by the liver.
After this treatment, T cell populations showed significant increases in size and function.
The researchers then tested whether the treatment could enhance the animals’ response to vaccination. They vaccinated the mice with ovalbumin, a protein found in egg whites that is commonly used to study how the immune system responds to a specific antigen. In 18-month-old mice that received the mRNA treatment before vaccination, the researchers found that the population of cytotoxic T-cells specific to ovalbumin doubled, compared to mice of the same age that did not receive the mRNA treatment.
The mRNA treatment can also boost the immune system’s response to cancer immunotherapy, the researchers found. They delivered the mRNA treatment to 18-month-old mice, who were then implanted with tumors and treated with a checkpoint inhibitor drug. This drug, which targets the protein PD-L1, is designed to help take the brakes off the immune system and stimulate T cells to attack tumor cells.
Mice that received the treatment showed much higher survival rates and longer lifespans than those that received the checkpoint inhibitor drug but not the mRNA treatment.
The researchers found that all three factors were necessary to induce this immune enhancement; none could achieve all aspects of it on their own. They now plan to study the treatment in other animal models and to identify additional signaling factors that may further enhance immune system function. They also hope to study how the treatment affects other immune cells, including B cells.
Other authors of the paper include Julie Pham, Jiakun Tian, Hongyu Chen, Jiahao Huang, Niklas Kehl, Sophia Liu, Blake Lash, Fei Chen, Xiao Wang, and Rhiannon Macrae.
The research was funded, in part, by the Howard Hughes Medical Institute, the K. Lisa Yang Brain-Body Center, part of the Yang Tan Collective at MIT, Broad Institute Programmable Therapeutics Gift Donors, the Pershing Square Foundation, J. and P. Poitras, and an EMBO Postdoctoral Fellowship.
3 Questions: Using computation to study the world’s best single-celled chemists
Assistant Professor Yunha Hwang utilizes microbial genomes to examine the language of biology. Her appointment reflects MIT’s commitment to exploring the intersection of genetics research and AI.
Today, out of an estimated 1 trillion species on Earth, 99.999 percent are considered microbial — bacteria, archaea, viruses, and single-celled eukaryotes. For much of our planet’s history, microbes ruled the Earth, able to live and thrive in the most extreme of environments. Researchers have only just begun in the last few decades to contend with the diversity of microbes — it’s estimated that less than 1 percent of known genes have laboratory-validated functions. Computational approaches offer researchers the opportunity to strategically parse this truly astounding amount of information.
An environmental microbiologist and computer scientist by training, new MIT faculty member Yunha Hwang is interested in the novel biology revealed by the most diverse and prolific life form on Earth. In a shared faculty position as the Samuel A. Goldblith Career Development Professor in the Department of Biology, as well as an assistant professor in the Department of Electrical Engineering and Computer Science and the MIT Schwarzman College of Computing, Hwang is exploring the intersection of computation and biology.
Q: What drew you to research microbes in extreme environments, and what are the challenges in studying them?
A: Extreme environments are great places to look for interesting biology. I wanted to be an astronaut growing up, and the closest thing to astrobiology is examining extreme environments on Earth. And the only things that live in those extreme environments are microbes. During a sampling expedition that I took part in off the coast of Mexico, we discovered a colorful microbial mat about 2 kilometers underwater that flourished because the bacteria breathed sulfur instead of oxygen — but none of the microbes I was hoping to study would grow in the lab.
The biggest challenge in studying microbes is that a majority of them cannot be cultivated, which means that the only way to study their biology is through a method called metagenomics. My latest work is genomic language modeling. We’re hoping to develop a computational system so we can probe the organism as much as possible “in silico,” just using sequence data. A genomic language model is technically a large language model, except the language is DNA as opposed to human language. It’s trained in a similar way, just in biological language as opposed to English or French. If our objective is to learn the language of biology, we should leverage the diversity of microbial genomes. Even though we have a lot of data, and even as more samples become available, we’ve just scratched the surface of microbial diversity.
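The analogy Hwang draws, a language model whose tokens are DNA rather than words, can be illustrated with a toy example. Real genomic language models are large transformer networks trained on vast collections of microbial genomes; the k-mer tokenizer and bigram counts below are only a minimal, invented stand-in for the idea of learning next-token statistics from sequence data.

```python
from collections import Counter, defaultdict

# Tokenize DNA into overlapping 3-mers and fit next-token (bigram) counts.
# The sequences are made up; a real genomic language model is far larger.
def kmers(seq, k=3):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def fit_bigrams(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        tokens = kmers(seq)
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

model = fit_bigrams(["ATGGCGATGGCA", "ATGGCT"])
# Most likely token to follow "ATG" in this toy corpus:
print(model["ATG"].most_common(1))  # → [('TGG', 3)]
```

The same training recipe used for English text applies unchanged once the tokenizer emits DNA tokens, which is the sense in which the language model is "trained in a similar way, just in biological language."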
Q: Given how diverse microbes are and how little we understand about them, how can studying microbes in silico, using genomic language modeling, advance our understanding of the microbial genome?
A: A genome is many millions of letters. A human cannot possibly look at that and make sense of it. We can program a machine, though, to segment data into pieces that are useful. That’s sort of how bioinformatics works with a single genome. But if you’re looking at a gram of soil, which can contain thousands of unique genomes, that’s just too much data to work with — a human and a computer together are necessary in order to grapple with that data.
During my PhD and master’s degree, we were only just discovering new genomes and new lineages that were so different from anything that had been characterized or grown in the lab. These were things that we just called “microbial dark matter.” When there are a lot of uncharacterized things, that’s where machine learning can be really useful, because we’re just looking for patterns — but that’s not the end goal. What we hope to do is to map these patterns to evolutionary relationships between each genome, each microbe, and each instance of life.
Previously, we’ve been thinking about proteins as a standalone entity — that gets us to a decent degree of information because proteins are related by homology, and therefore things that are evolutionarily related might have a similar function.
What is known about microbiology is that proteins are encoded into genomes, and the context in which a protein is embedded — what regions come before and after — is evolutionarily conserved, especially if there is a functional coupling. This makes total sense because when you have three proteins that need to be expressed together because they form a unit, then you might want them located right next to each other.
What I want to do is incorporate more of that genomic context in the way that we search for and annotate proteins and understand protein function, so that we can go beyond sequence or structural similarity to add contextual information to how we understand proteins and hypothesize about their functions.
Q: How can your research be applied to harnessing the functional potential of microbes?
A: Microbes are possibly the world’s best chemists. Leveraging microbial metabolism and biochemistry will lead to more sustainable and more efficient methods for producing new materials, new therapeutics, and new types of polymers.
But it’s not just about efficiency — microbes are doing chemistry we don’t even know how to think about. Understanding how microbes work, and being able to understand their genomic makeup and their functional capacity, will also be really important as we think about how our world and climate are changing. A majority of carbon sequestration and nutrient cycling is undertaken by microbes; if we don’t understand how a given microbe is able to fix nitrogen or carbon, then we will face difficulties in modeling the nutrient fluxes of the Earth.
On the more therapeutic side, infectious diseases are a real and growing threat. Understanding how microbes behave in diverse environments relative to the rest of our microbiome is really important as we think about the future and combating microbial pathogens.
MIT community members elected to the National Academy of Inventors for 2025
Professors Ahmad Bahai and Kripa Varanasi, plus seven additional MIT alumni, are honored for highly impactful inventions.
The National Academy of Inventors (NAI) has named nine MIT affiliates as members of the 2025 class of NAI Fellows. They include Ahmad Bahai, an MIT professor of the practice in the Department of Electrical Engineering and Computer Science (EECS), and Kripa K. Varanasi, MIT professor in the Department of Mechanical Engineering, as well as seven additional MIT alumni. NAI fellowship is the highest professional distinction awarded solely to inventors.
“NAI Fellows are a driving force within the innovation ecosystem, and their contributions across scientific disciplines are shaping the future of our world,” says Paul R. Sanberg, fellow and president of the National Academy of Inventors. “We are thrilled to welcome this year’s class of fellows to the academy.”
This year’s 169 U.S. fellows represent 127 universities, government agencies, and research institutions across 40 U.S. states. Together, the 2025 class holds more than 5,300 U.S. patents and includes recipients of the Nobel Prize, the National Medal of Science, and the National Medal of Technology and Innovation, as well as members of the national academies of Sciences, Engineering, and Medicine, among others.
Ahmad Bahai is professor of the practice in EECS. He was an adjunct professor at Stanford University from 2017 to 2022 and a professor in residence at the University of California at Berkeley from 2001 to 2010. Bahai has held a number of leadership roles, including director of research labs and chief technology officer of National Semiconductor, technical manager of a research group at Bell Laboratories, and founder of Algorex, a communication and acoustic integrated circuit and system company, which was acquired by National Semiconductor.
Currently, Bahai is the chief technology officer and director of corporate research of Texas Instruments and director of Kilby Labs and corporate research, and is a member of the CHIPS Act Industrial Advisory Committee. Bahai is an IEEE Fellow and an AIMBE Fellow; he has authored over 80 publications in IEEE/IEE journals and holds more than 40 patents related to systems and circuits.
He holds an MS in electrical engineering from Imperial College London and a doctorate degree in electrical engineering from UC Berkeley.
Kripa K. Varanasi SM ’02, PhD ’04, professor of mechanical engineering, is widely recognized for his significant contributions in the fields of interfacial science, thermal fluids, electrochemical systems, advanced materials, and manufacturing. A member of the MIT faculty since 2009, he leads the interdisciplinary Varanasi Research Group, which focuses on understanding physico-chemical and biological phenomena at the interfaces of matter. His group develops innovative surfaces, materials, devices, processes, and associated technologies that improve efficiency and performance across industries, including energy, decarbonization, life sciences, water, agriculture, transportation, and consumer products.
Varanasi has also scaled basic research into practical, market-ready technologies. He has co-founded six companies, including AgZen, Alsym Energy, CoFlo Medical, Dropwise, Infinite Cooling, and LiquiGlide, and his companies have been widely recognized for driving innovation across a range of industries. Throughout his career, Varanasi has been recognized for excellence in research and mentorship. Honors include the National Science Foundation CAREER Award, DARPA Young Faculty Award, SME Outstanding Young Manufacturing Engineer Award, ASME’s Bergles-Rohsenow Heat Transfer Award and Gustus L. Larson Memorial Award, Boston Business Journal’s 40 Under 40, and MIT’s Frank E. Perkins Award for Excellence in Graduate Advising.
Varanasi earned his undergraduate degree in mechanical engineering from the Indian Institute of Technology Madras, and his master’s degree and PhD from MIT. Prior to joining the faculty, he served as lead researcher and project leader at the GE Global Research Center, where he received multiple internal awards for innovation, leadership, and technical excellence. He was recently named faculty director of the Deshpande Center for Technological Innovation.
The seven additional MIT alumni who were elected to the NAI for 2025 include:
The NAI Fellows program was founded in 2012 and has grown to include 2,253 distinguished researchers and innovators, who hold over 86,000 U.S. patents and 20,000 licensed technologies. Collectively, NAI Fellows’ innovations have generated an estimated $3.8 trillion in revenue and 1.4 million jobs.
The 2025 class will be honored and presented with their medals by a senior official of the United States Patent and Trademark Office at the NAI 15th Annual Conference on June 4, 2026, in Los Angeles.
RNA editing study finds many ways for neurons to diversify
Tracking how fruit fly motor neurons edit their RNA, neurobiologists cataloged hundreds of target sites and varying editing rates, finding many edits altered communication- and function-related proteins.
All starting from the same DNA, neurons ultimately take on individual characteristics in the brain and body. Differences in which genes they transcribe into RNA help determine which type of neuron they become, and from there, a new MIT study shows, individual cells edit a selection of sites in those RNA transcripts, each at their own widely varying rates.
The new study surveyed the whole landscape of RNA editing in more than 200 individual cells commonly used as models of fundamental neural biology: tonic and phasic motor neurons of the fruit fly. One of the main findings is that most sites were edited at rates between the “all-or-nothing” extremes many scientists have assumed based on more limited studies in mammals, says senior author Troy Littleton, the Menicon Professor in the MIT departments of Biology and Brain and Cognitive Sciences. The resulting dataset and open-access analyses, recently published in eLife, set the table for discoveries about how RNA editing affects neural function and what enzymes implement those edits.
“We have this ‘alphabet’ now for RNA editing in these neurons,” Littleton says. “We know which genes are edited in these neurons, so we can go in and begin to ask questions as to what is that editing doing to the neuron at the most interesting targets.”
Andres Crane PhD ’24, who earned his doctorate in Littleton’s lab based on this work, is the study’s lead author.
From a genome of about 15,000 genes, Littleton and Crane’s team found, the neurons made hundreds of edits in transcripts from hundreds of genes. For example, the team documented “canonical” edits at 316 sites in 210 genes. Canonical means that the edits were made by the well-studied enzyme ADAR, which is also found in mammals, including humans. Of the 316 edits, 175 occurred in regions that encode the contents of proteins, and analysis suggested that 60 of these are likely to significantly alter the encoded amino acids. The remaining 141 editing sites fell in regions that don’t code for proteins but regulate their production, meaning those edits could change protein levels rather than protein sequences.
The team also found many “non-canonical” edits that ADAR didn’t make. That’s important, Littleton says, because that information could aid in discovering more enzymes involved in RNA editing, potentially across species. That, in turn, could expand the possibilities for future genetic therapies.
“In the future, if we can begin to understand in flies what the enzymes are that make these other non-canonical edits, it would give us broader coverage for thinking about doing things like repairing human genomes where a mutation has broken a protein of interest,” Littleton says.
Moreover, by looking specifically at fly larvae, the team found many edits that were specific to juveniles, versus adults, suggesting potential significance during development. And because they looked at full gene transcripts of individual neurons, the team was also able to find editing targets that had not been cataloged before.
Widely varying rates
Some of the most heavily edited RNAs came from genes that make critical contributions to neural circuit communication, such as those governing neurotransmitter release and the ion channels that regulate neurons’ electrical properties. The study identified 27 sites in 18 genes that were edited more than 90 percent of the time.
Yet neurons sometimes varied quite widely in whether they would edit a site, which suggests that even neurons of the same type can still take on significant degrees of individuality.
“Some neurons displayed ~100 percent editing at certain sites, while others displayed no editing for the same target,” the team wrote in eLife. “Such dramatic differences in editing rate at specific target sites is likely to contribute to the heterogeneous features observed within the same neuronal population.”
On average, any given site was edited about two-thirds of the time, and most sites were edited at rates well between the all-or-nothing extremes.
“The vast majority of editing events we found were somewhere between 20 percent and 70 percent,” Littleton says. “We were seeing mixed ratios of edited and unedited transcripts within a single cell.”
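The per-site editing rate described above is just the fraction of transcripts at a site that carry the edit. A minimal sketch of that computation, using hypothetical read counts (the site names and numbers here are illustrative, not from the study):

```python
def editing_rate(edited_reads, total_reads):
    """Fraction of transcripts at a site carrying the edit."""
    if total_reads == 0:
        raise ValueError("no coverage at this site")
    return edited_reads / total_reads

# Hypothetical (edited, total) read counts for three sites in one cell:
sites = {"siteA": (45, 100), "siteB": (98, 100), "siteC": (20, 100)}
rates = {name: editing_rate(e, t) for name, (e, t) in sites.items()}
# Sites like siteA fall in the intermediate 20-70 percent range the
# study reports, rather than at an all-or-nothing extreme.
```

A site like siteB, above 90 percent, would count among the heavily edited minority, while siteA reflects the mixed ratios of edited and unedited transcripts found within single cells.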
Also, the more a gene was expressed, the less editing it experienced, suggesting that ADAR can only partially keep up with its editing opportunities.
Potential impacts on function
One of the key questions the data enables scientists to ask is what impact RNA edits have on the function of the cells. In a 2023 study, Littleton’s lab began to tackle this question by looking at just two edits they found in the most heavily edited gene: complexin. Complexin’s protein product restrains release of the neurotransmitter glutamate, making it a key regulator of neural circuit communication. They found that by mixing and matching edits, neurons produced up to eight different versions of the protein with significant effects on their glutamate release and synaptic electrical current. But in the new study, the team reports 13 more edits in complexin that are yet to be studied.
Littleton says he’s intrigued by another key protein, called Arc1, that the study shows experienced a non-canonical edit. Arc is a vitally important gene in “synaptic plasticity,” which is the property neurons have of adjusting the strength or presence of their “synapse” circuit connections in response to nervous system activity. Such neural nimbleness is hypothesized to be the basis of how the brain can responsively encode new information in learning and memory. Notably, Arc1 editing fails to occur in fruit flies that model Alzheimer’s disease.
Littleton says the lab is now working hard to understand how the RNA edits they’ve documented affect function in the fly motor neurons.
In addition to Crane and Littleton, the study’s other authors are Michiko Inouye and Suresh Jetti.
The National Institutes of Health, The Freedom Together Foundation, and The Picower Institute for Learning and Memory provided support for the study.
In February, President Sally Kornbluth announced the appointment of Professor Angela Koehler as faculty director of the MIT Health and Life Sciences Collaborative (MIT HEALS), with professors Iain Cheeseman and Katharina Ribbeck as associate directors. Since then, the leadership team has moved quickly to shape HEALS into an ambitious, community-wide platform for catalyzing research, translation, and education at MIT and beyond — at a moment when advances in computation, biology, and engineering are redefining what’s possible in health and the life sciences.
Rooted in MIT’s long-standing strengths in foundational discovery, convergence, and translational science, HEALS is designed to foster connections across disciplines — linking life scientists and engineers with clinicians, computational scientists, humanists, operations researchers, and designers. The initiative builds on a simple premise: that solving today’s most pressing challenges in health and life sciences requires bold thinking, deep collaboration, and sustained investment in people.
“HEALS is an opportunity to rethink how we support talent, unlock scientific ideas, and translate them into impact,” says Koehler, the Charles W. and Jennifer C. Johnson Professor in the Department of Biological Engineering and associate director of the Koch Institute for Integrative Cancer Research. “We’re building on MIT’s best traditions — convergence, experimentation, and entrepreneurship — while opening new channels for interdisciplinary research and community building.”
Koehler says her own path has been shaped by that same belief in convergence. Early collaborations between chemists, engineers, and clinicians convinced her that bringing diverse people together — what she calls “induced proximity” — can spark discoveries that wouldn’t emerge in isolation.
A culture of connection
Since stepping into their roles, the HEALS leadership team has focused on building a collaborative ecosystem that enables researchers to take on bold, interdisciplinary challenges in health and life sciences. Rather than creating a new center or department, their approach emphasizes connecting the MIT community across existing boundaries — disciplinary, institutional, and cultural.
“We want to fund science that wouldn’t otherwise happen — projects that bridge gaps, open new doors, and bring researchers together in ways that are genuinely constructive and collaborative,” says Iain Cheeseman, the Herman and Margaret Sokol Professor of Biology, core member of the Whitehead Institute for Biomedical Research, and associate head of the Department of Biology.
That vision is already taking shape through initiatives like the MIT HEALS seed grants, which support bold new collaborations between MIT principal investigators; the MIT–Mass General Brigham Seed Program, which supports joint research between investigators at MIT and clinicians at MGB; and the Biswas Postdoctoral Fellowship Program, designed to bring top early-career researchers to MIT to pursue cross-cutting work in areas such as computational biology, biomedical engineering, and therapeutic discovery.
The leadership team sees these programs not as endpoints, but as starting points for a broader shift in how MIT supports health and life sciences research.
For Cheeseman, whose lab is working to build on their fundamental discoveries on how human cells function to impact cancer treatment and rare human disease, HEALS represents a way to connect deep biological discovery with the translational insights emerging from MIT’s engineering and clinical communities. He puts it simply: “to me, this is deeply personal, recognizing the limitations that existed for my own work and hoping to unlock these possibilities for researchers across MIT.”
Training the next generation
Ribbeck, a biologist focused on mucus and microbial ecosystems, sees HEALS as a way to train scientists who are as comfortable discussing patient needs as they are conducting experiments at the bench. She emphasizes that preparing the next generation of researchers means equipping them with fluency in areas like clinical language, regulatory processes, and translational pathways — skills many current investigators lack. “Many PIs, although they do clinical research, may not have dedicated support for taking their findings to the next level — how to design a clinical trial, or what regulatory questions need to be addressed — reflecting a broader structural gap in translational training,” she says.
A central focus for the HEALS leadership team is building new models for training researchers to move fluidly between disciplines, institutions, and methods of translation. Ribbeck and Koehler stress the importance of giving students and postdocs hands-on opportunities that connect research with real-world experience. That means expanding programs like the Undergraduate Research Opportunities Program (UROP), the Advanced UROP (SuperUROP), and the MIT New Engineering Education Transformation, and creating new ways for trainees to engage with industry, clinical partners, and entrepreneurship. Trainees are learning at the intersection of engineering, biology, and medicine — and increasingly across disciplines that span economics, design, the social sciences, and the humanities, where students are already creating collaborations that do not yet have formal pathways.
Koehler, drawing from her leadership at the Deshpande Center for Technological Innovation and the Koch Institute, notes that “if we invest in the people, the solutions to problems will naturally arise.” She envisions HEALS as a platform for induced proximity — not just of disciplines, but of people at different career stages, working together in environments that support both risk-taking and mentorship.
“For me, HEALS builds on what I’ve seen work at MIT — bringing people with different skill sets together to tackle challenges in life sciences and medicine,” she says. “It’s about putting community first and empowering the next generation to lead across disciplines.”
A platform for impact
Looking ahead, the HEALS leadership team envisions the collaborative as a durable platform for advancing health and life sciences at MIT. That includes launching flagship events, supporting high-risk, high-reward ideas, and developing partnerships across the biomedical ecosystem in Boston and beyond. As they see it, MIT is uniquely positioned for this moment: More than three-quarters of the Institute’s faculty work in areas that touch health and life sciences, giving HEALS a rare opportunity to bring that breadth together in new configurations and amplify impact across disciplines.
From the earliest conversations, the leaders have heard a clear message from faculty across MIT — a strong appetite for deeper connection, for working across boundaries, and for tackling urgent societal challenges together. That shared sense of momentum is what gave rise to HEALS, and it now drives the team’s focus on building the structures that can support a community that wants to collaborate at scale.
“Faculty across MIT are already reaching out — looking to connect with clinics, collaborate on new challenges, and co-create solutions,” says Koehler. “That hunger for connection is why HEALS was created. Now we have to build the structures that support it.”
Cheeseman adds that this collaborative model is what makes MIT uniquely positioned to lead. “When you bring together people from different fields who are motivated by impact,” he says, “you create the conditions for discoveries that none of us could achieve alone.”
Enabling small language models to solve complex reasoning tasks
The “self-steering” DisCIPL system directs small models to work together on tasks with constraints, like itinerary planning and budgeting.
As language models (LMs) improve at tasks like image generation, trivia questions, and simple math, you might think that human-like reasoning is around the corner. In reality, they still trail us by a wide margin on complex tasks. Try playing Sudoku with one, for instance, where you fill in numbers one through nine in such a way that each appears only once across the columns, rows, and sections of a nine-by-nine grid. Your AI opponent will either fail to fill in boxes on its own or do so inefficiently, although it can verify if you’ve filled yours out correctly.
Whether an LM is trying to solve advanced puzzles, design molecules, or write math proofs, the system struggles to answer open-ended requests that have strict rules to follow. The model is better at telling users how to approach these challenges than attempting them itself. Moreover, hands-on problem-solving requires LMs to consider a wide range of options while following constraints. Small LMs can’t do this reliably on their own; large language models (LLMs) sometimes can, particularly if they’re optimized for reasoning tasks, but they take a while to respond, and they use a lot of computing power.
This predicament led researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to develop a collaborative approach where an LLM does the planning, then divvies up the legwork of that strategy among smaller ones. Their method helps small LMs provide more accurate responses than leading LLMs like OpenAI’s GPT-4o, and approach the precision of top reasoning systems such as o1, while being more efficient than both. Their framework, called “Distributional Constraints by Inference Programming with Language Models” (or “DisCIPL”), has a large model steer smaller “follower” models toward precise responses when writing things like text blurbs, grocery lists with budgets, and travel itineraries.
The inner workings of DisCIPL are much like contracting a company for a particular job. You provide a “boss” model with a request, and it carefully considers how to go about doing that project. Then, the LLM relays these instructions and guidelines in a clear way to smaller models. It corrects follower LMs’ outputs where needed — for example, replacing one model’s phrasing that doesn’t fit in a poem with a better option from another.
The LLM communicates with its followers using a language they all understand — that is, a programming language for controlling LMs called “LLaMPPL.” Developed by MIT's Probabilistic Computing Project in 2023, this language allows users to encode specific rules that steer a model toward a desired result. For example, LLaMPPL can be used to produce error-free code by incorporating the rules of a particular language within its instructions. Directions like “write eight lines of poetry where each line has exactly eight words” are encoded in LLaMPPL, cueing smaller models to contribute to different parts of the answer.
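To make the idea concrete, the eight-words-per-line rule above can be split into a final check and a feasibility check that prunes partial outputs as they grow. The sketch below is a toy illustration of the kind of constraint such a program encodes — it is not the LLaMPPL API:

```python
# Toy illustration of a hard generation constraint of the kind the
# article describes; NOT the actual LLaMPPL API.

def line_constraint(line, words_per_line=8):
    """A completed line satisfies the 'exactly eight words' rule."""
    return len(line.split()) == words_per_line

def still_feasible(partial_line, words_per_line=8):
    """A partial line remains feasible while it has at most eight words."""
    return len(partial_line.split()) <= words_per_line

# Candidate continuations from a follower model would be filtered so
# that only feasible partial lines survive, steering generation toward
# outputs that can still satisfy the constraint.
candidates = ["the quick brown fox jumps over the lazy",
              "the quick brown fox jumps over the lazy dog again"]
feasible = [c for c in candidates if still_feasible(c)]
```

Filtering during generation, rather than checking only the finished output, is what lets small follower models stay on track instead of discarding most of their attempts at the end.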
MIT PhD student Gabriel Grand, who is the lead author on a paper presenting this work, says that DisCIPL allows LMs to guide each other toward the best responses, which improves their overall efficiency. “We’re working toward improving LMs’ inference efficiency, particularly on the many modern applications of these models that involve generating outputs subject to constraints,” adds Grand, who is also a CSAIL researcher. “Language models are consuming more energy as people use them more, which means we need models that can provide accurate answers while using minimal computing power.”
“It's really exciting to see new alternatives to standard language model inference,” says University of California at Berkeley Assistant Professor Alane Suhr, who wasn’t involved in the research. “This work invites new approaches to language modeling and LLMs that significantly reduce inference latency via parallelization, require significantly fewer parameters than current LLMs, and even improve task performance over standard serialized inference. The work also presents opportunities to explore transparency, interpretability, and controllability of model outputs, which is still a huge open problem in the deployment of these technologies.”
An underdog story
You may think that larger-scale LMs are “better” at complex prompts than smaller ones when it comes to accuracy and efficiency. DisCIPL suggests a surprising counterpoint for these tasks: If you can combine the strengths of smaller models instead, you may just see an efficiency bump with similar results.
The researchers note that, in theory, you can plug in dozens of LMs to work together in the DisCIPL framework, regardless of size. In writing and reasoning experiments, they went with GPT-4o as their “planner LM,” which is one of the models that helps ChatGPT generate responses. It brainstormed a plan for several “Llama-3.2-1B” models (smaller systems developed by Meta), in which those LMs filled in each word (or token) of the response.
This collective approach competed against three comparable ones: a follower-only baseline powered by Llama-3.2-1B, GPT-4o working on its own, and the industry-leading o1 reasoning system that helps ChatGPT figure out more complex questions, such as coding requests and math problems.
DisCIPL first demonstrated an ability to write sentences and paragraphs that follow explicit rules. The models were given very specific prompts — for example, writing a sentence that has exactly 18 words, where the fourth word must be “Glasgow,” the eighth must be “in,” and the 11th must be “and.” The system was remarkably adept at handling this request, crafting fluent outputs while achieving accuracy and coherence similar to o1.
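Constraints like that example are trivial to verify programmatically, which is part of what makes them good benchmarks. A minimal checker for the specific prompt described above (an illustration, not code from the paper):

```python
def meets_prompt(sentence):
    """Check the example constraint: exactly 18 words, with word 4
    'Glasgow', word 8 'in', and word 11 'and' (1-indexed)."""
    words = sentence.strip(".").split()
    return (len(words) == 18
            and words[3] == "Glasgow"   # 4th word
            and words[7] == "in"        # 8th word
            and words[10] == "and")     # 11th word
```

Because success is unambiguous — a sentence either satisfies all four conditions or it doesn’t — such tasks cleanly separate systems that can follow hard constraints from those that merely usually do.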
Faster, cheaper, better
This experiment also revealed that key components of DisCIPL were much cheaper than state-of-the-art systems. For instance, whereas existing reasoning models like OpenAI’s o1 perform reasoning in text, DisCIPL “reasons” by writing Python code, which is more compact. In practice, the researchers found that DisCIPL led to 40.1 percent shorter reasoning and 80.2 percent cost savings over o1.
DisCIPL’s efficiency gains stem partly from using small Llama models as followers, which are 1,000 to 10,000 times cheaper per token than comparable reasoning models. This means that DisCIPL is more “scalable” — the researchers were able to run dozens of Llama models in parallel for a fraction of the cost.
Those weren’t the only surprising findings, according to CSAIL researchers. Their system also performed well against o1 on real-world tasks, such as making ingredient lists, planning out a travel itinerary, and writing grant proposals with word limits. Meanwhile, GPT-4o struggled with these requests, and with writing tests, it often couldn’t place keywords in the correct parts of sentences. The follower-only baseline essentially finished in last place across the board, as it had difficulties with following instructions.
“Over the last several years, we’ve seen some impressive results from approaches that use language models to ‘auto-formalize’ problems in math and robotics by representing them with code,” says senior author Jacob Andreas, who is an MIT electrical engineering and computer science associate professor and CSAIL principal investigator. “What I find most exciting about this paper is the fact that we can now use LMs to auto-formalize text generation itself, enabling the same kinds of efficiency gains and guarantees that we’ve seen in these other domains.”
In the future, the researchers plan on expanding this framework into a fully recursive approach, where the same model serves as both the leader and the followers. Grand adds that DisCIPL could be extended to mathematical reasoning tasks, where answers are harder to verify. They also intend to test the system on its ability to meet users’ fuzzy preferences, as opposed to following hard constraints, since preferences can’t be outlined in code so explicitly. Thinking even bigger, the team hopes to use the largest possible models available, although they note that such experiments are computationally expensive.
Grand and Andreas wrote the paper alongside CSAIL principal investigator and MIT Professor Joshua Tenenbaum, as well as MIT Department of Brain and Cognitive Sciences Principal Research Scientist Vikash Mansinghka and Yale University Assistant Professor Alex Lew SM ’20 PhD ’25. CSAIL researchers presented the work at the Conference on Language Modeling in October and IVADO’s “Deploying Autonomous Agents: Lessons, Risks and Real-World Impact” workshop in November.
Their work was supported, in part, by the MIT Quest for Intelligence, Siegel Family Foundation, the MIT-IBM Watson AI Lab, a Sloan Research Fellowship, Intel, the Air Force Office of Scientific Research, the Defense Advanced Research Projects Agency, the Office of Naval Research, and the National Science Foundation.
The School of Science welcomed 11 new faculty members in 2024.
Shaoyun Bai researches symplectic topology, the study of even-dimensional spaces whose properties are reflected by two-dimensional surfaces inside them. He is interested in this area’s interaction with other fields, including algebraic geometry, algebraic topology, geometric topology, and dynamics. He has been developing new tool kits for counting problems from moduli spaces, which have been applied to classical questions, including the Arnold conjecture, periodic points of Hamiltonian maps, higher-rank Casson invariants, enumeration of embedded curves, and topology of symplectic fibrations.
Bai completed his undergraduate studies at Tsinghua University in 2017 and earned his PhD in mathematics from Princeton University in 2022, advised by John Pardon. Bai then held visiting positions at MSRI (now known as Simons Laufer Mathematical Sciences Institute) as a McDuff Postdoctoral Fellow and at the Simons Center for Geometry and Physics, and he was a Ritt Assistant Professor at Columbia University. He joined the MIT Department of Mathematics as an assistant professor in 2024.
Abigail Bodner investigates turbulence in the upper ocean using remote sensing measurements, in-situ ocean observations, numerical simulations, climate models, and machine learning. Her research explores how the small-scale physics of turbulence near the ocean surface impacts the large-scale climate.
Bodner earned a BS and MS from Tel Aviv University studying mathematics and geophysics, atmospheric and planetary sciences. She then went on to Brown University, earning an MS in applied mathematics before completing her PhD studies in 2021 in Earth, environmental, and planetary science. Prior to coming to MIT, Bodner was a Simons Society Junior Fellow at New York University. She is an assistant professor in the Department of Earth, Atmospheric, and Planetary Sciences, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science.
Jacopo Borga is interested in probability theory and its connections to combinatorics, and in mathematical physics. He studies various random combinatorial structures — mathematical objects such as graphs or permutations — and their patterns and behavior at a large scale. This research includes random permutons, meanders, multidimensional constrained Brownian motions, Schramm-Loewner evolutions, and Liouville quantum gravity.
Borga earned bachelor’s and master’s degrees in mathematics from the Università degli Studi di Padova, and a master’s degree in mathematics from Université Sorbonne Paris Cité (USPC), then proceeded to complete a PhD in mathematics at the Institut für Mathematik at the Universität Zürich. Borga was an assistant professor at Stanford University before joining MIT as an assistant professor of mathematics in 2024.
Linlin Fan aims to decipher the neural codes underlying learning and memory and to identify the physical basis of learning and memory. Her research focus is on the learning rules of brain circuits — what kinds of activity trigger the encoding and storing of information — how these learning rules are implemented, and how memories can be inferred from mapping neural functional connectivity patterns. To answer these questions, Fan’s group leverages high-precision, all-optical technologies to map and control the electrical activity of neurons within the brain.
Fan earned her PhD at Harvard University after undergraduate studies at Peking University in China. She joined the MIT Department of Brain and Cognitive Sciences as the Samuel A. Goldblith Career Development Professor of Applied Biology, and the Picower Institute for Learning and Memory as an investigator in January 2024. Previously, Fan worked as a postdoc at Stanford University.
Whitney Henry investigates ferroptosis, a type of cell death dependent on iron, to uncover how oxidative stress, metabolism, and immune signaling intersect to shape cell fate decisions. Her research has defined key lipid metabolic and iron homeostatic programs that regulate ferroptosis susceptibility. By uncovering the molecular factors influencing ferroptosis susceptibility, investigating its effects on the tumor microenvironment, and developing innovative methods to manipulate ferroptosis resistance in living organisms, Henry’s lab aims to gain a comprehensive understanding of the therapeutic potential of ferroptosis, especially to target highly metastatic, therapy-resistant cancer cells.
Henry received her bachelor's degree in biology with a minor in chemistry from Grambling State University and her PhD from Harvard University. Following her doctoral studies, she worked at the Whitehead Institute for Biomedical Research and was supported by fellowships from the Jane Coffin Childs Memorial Fund for Medical Research and the Ludwig Center at MIT. Henry joined the MIT faculty in 2024 as an assistant professor in the Department of Biology and a member of the Koch Institute for Integrative Cancer Research, and was recently named the Robert A. Swanson (1969) Career Development Professor of Life Sciences and an HHMI Freeman Hrabowski Scholar.
Gian Michele Innocenti is an experimental physicist who probes new regimes of quantum chromodynamics (QCD) through collisions of ultrarelativistic heavy ions at the Large Hadron Collider. He has developed advanced analysis techniques and data-acquisition strategies that enable novel measurements of open heavy-flavor and jet production in hadronic and ultraperipheral heavy-ion collisions, shedding light on the properties of high-temperature QCD matter and parton dynamics in Lorentz-contracted nuclei. He leads the MIT Pixel𝜑 program, which exploits CMOS MAPS technology to build a high-precision tracking detector for the ePIC experiment at the Electron–Ion Collider.
Innocenti received his PhD in particle and nuclear physics from the University of Turin in Italy in early 2014. He then joined the MIT heavy-ion group in the Laboratory for Nuclear Science in 2014 as a postdoc, followed by a staff research physicist position at CERN in 2018. Innocenti joined the MIT Department of Physics as an assistant professor in January 2024.
Mathematician Christoph Kehle's research interests lie at the intersection of analysis, geometry, and partial differential equations. In particular, he focuses on the Einstein field equations of general relativity and our current understanding of gravitation, which describe how matter and energy shape spacetime. His work addresses the Strong Cosmic Censorship conjecture, singularities in black hole interiors, and the dynamics of extremal black holes.
Prior to joining MIT, Kehle was a junior fellow at ETH Zürich and a member at the Institute for Advanced Study in Princeton. He earned his bachelor’s and master’s degrees at Ludwig Maximilian University and Technical University of Munich, and his PhD in 2020 from the University of Cambridge. Kehle joined the Department of Mathematics as an assistant professor in July 2024.
Aleksandr Logunov is a mathematician specializing in harmonic analysis and geometric analysis. He has developed novel techniques for studying the zeros of solutions to partial differential equations and has resolved several long-standing problems, including Yau’s conjecture, Nadirashvili’s conjecture, and Landis’ conjectures.
Logunov earned his PhD in 2015 from St. Petersburg State University. He then spent two years as a postdoc at Tel Aviv University, followed by a year as a member of the Institute for Advanced Study in Princeton. In 2018, he joined Princeton University as an assistant professor. In 2020, he spent a semester at Tel Aviv University as an IAS Outstanding Fellow, and in 2021, he was appointed full professor at the University of Geneva. Logunov joined MIT as a full professor in the Department of Mathematics in January 2024.
Lyle Nelson is a sedimentary geologist studying the co-evolution of life and surface environments across pivotal transitions in Earth history, especially during significant ecological change — such as extinction events and the emergence of new clades — and during major shifts in ocean chemistry and climate. Studying sedimentary rocks that were tectonically uplifted and are now exposed in mountain belts around the world, Nelson’s group aims to answer questions such as how the reorganization of continents influenced the carbon cycle and climate, the causes and effects of ancient ice ages, and what factors drove the evolution of early life forms and the rapid diversification of animals during the Cambrian period.
Nelson earned a bachelor’s degree in earth and planetary sciences from Harvard University in 2015 and then worked as an exploration geologist before completing his PhD at Johns Hopkins University in 2022. Prior to coming to MIT, he was an assistant professor in the Department of Earth Sciences at Carleton University in Ontario, Canada. Nelson joined the EAPS faculty in 2024.
Protein evolution is the process by which proteins change over time through mechanisms such as mutation or natural selection. Biologist Sergey Ovchinnikov uses phylogenetic inference, protein structure prediction/determination, protein design, deep learning, energy-based models, and differentiable programming to tackle evolutionary questions at environmental, organismal, genomic, structural, and molecular scales, with the aim of developing a unified model of protein evolution.
Ovchinnikov received his BS in micro/molecular biology from Portland State University in 2010 and his PhD in molecular and cellular biology from the University of Washington in 2017. He was next a John Harvard Distinguished Science Fellow at Harvard University until 2023. Ovchinnikov joined MIT as an assistant professor of biology in January 2024.
Shu-Heng Shao explores the structural aspects of quantum field theories and lattice systems. Recently, his research has centered on generalized symmetries and anomalies, with a particular focus on a novel type of symmetry without an inverse, referred to as non-invertible symmetries. These new symmetries have been identified in various quantum systems, including the Ising model, Yang-Mills theories, lattice gauge theories, and the Standard Model. They lead to new constraints on renormalization group flows, new conservation laws, and new organizing principles in classifying phases of quantum matter.
Shao obtained his BS in physics from National Taiwan University in 2010, and his PhD in physics from Harvard University in 2016. He was then a five-year long-term member at the Institute for Advanced Study in Princeton before he moved to the Yang Institute for Theoretical Physics at Stony Brook University as an assistant professor in 2021. In 2024, he joined the MIT faculty as an assistant professor of physics.
MIT study shows how vision can be rebooted in adults with amblyopia
Temporarily anesthetizing the retina briefly reverts the activity of the visual system to that observed in early development and enables growth of responses to the amblyopic (“lazy”) eye.
In the vision disorder amblyopia (commonly known as “lazy eye”), impaired vision in one eye during development causes neural connections in the brain’s visual system to shift toward supporting the other eye, leaving the amblyopic eye less capable even after the original impairment is corrected. Current interventions are only effective during infancy and early childhood, while the neural connections are still being formed.
Now a study in mice by neuroscientists in The Picower Institute for Learning and Memory at MIT shows that if the retina of the amblyopic eye is temporarily and reversibly anesthetized just for a couple of days, the brain’s visual response to the eye can be restored, even in adulthood.
The open-access findings, published Nov. 25 in Cell Reports, may improve the clinical potential of the idea of temporarily anesthetizing a retina to restore the strength of the amblyopic eye’s neural connections.
In 2021, the lab of Picower Professor Mark Bear and collaborators showed that anesthetizing the non-amblyopic eye could improve vision in the amblyopic one — an approach analogous to the childhood treatment of patching the unimpaired eye. Those 2021 findings have now been replicated in adults of multiple species. But the new evidence on how inactivation works suggests that the proposed treatment also could be effective when applied directly to the amblyopic eye, Bear says, though a key next step will be to again show that it works in additional species and, ultimately, people.
“If it does, it’s a pretty substantial step forward, because it would be reassuring to know that vision in the good eye would not have to be interrupted by treatment,” says Bear, a faculty member in MIT’s Department of Brain and Cognitive Sciences. “The amblyopic eye, which is not doing much, could be inactivated and ‘brought back to life’ instead. Still, I think that especially with any invasive treatment, it’s extremely important to confirm the results in higher species with visual systems closer to our own.”
Madison Echavarri-Leet PhD ’25, whose doctoral thesis included this research, is the lead author of the study, which also demonstrates the underlying process in the brain that makes the potential treatment work.
A beneficial burst
Bear’s lab has been studying the science underlying amblyopia for decades, for instance by working to understand the molecular mechanisms that enable neural circuits to change their connections in response to visual experience or deprivation. The research has produced ideas about how to address amblyopia in adulthood. In a 2016 study with collaborators at Dalhousie University, they showed that temporarily anesthetizing both retinas could restore vision loss in amblyopia. Then, five years later, they published the study showing that anesthetizing just the non-amblyopic eye produced visual recovery for the amblyopic eye.
Throughout that time, the lab weighed multiple hypotheses to explain how retinal inactivation works its magic. Lingering in the lab’s archive of results, Bear says, was an unexplored finding in the lateral geniculate nucleus (LGN), the relay station that passes information from the eyes to the visual cortex, where vision is processed: back in 2008, they had found that blocking inputs from a retina to neurons in the LGN caused those neurons to fire synchronous “bursts” of electrical signals to downstream neurons in the visual cortex. Similar patterns of activity occur in the visual system before birth and guide early synaptic development.
The new study tested whether those bursts might have a role in the potential amblyopia treatments the lab was reporting. To get started, Leet and Bear’s team used a single injection of tetrodotoxin (TTX) to anesthetize retinas in the lab animals. They found that the bursting occurred not only in LGN neurons that received input from the anesthetized eye, but also in LGN neurons that received input from the unaffected eye.
From there, they showed that the bursting response depended on a particular “T-type” calcium channel in the LGN neurons. This was important because it gave the scientists a way to turn the bursting off, and thereby to test whether doing so prevented TTX from having a therapeutic effect in mice with amblyopia.
Sure enough, when the researchers genetically knocked out the channels and disrupted the bursting, they found that anesthetizing the non-amblyopic eye could no longer help amblyopic mice. That showed the bursting is necessary for the treatment to work.
Aiding amblyopia
Given their finding that bursting occurs when either retina is anesthetized, the scientists hypothesized it might be enough to just do it in the amblyopic eye. To test this, they ran an experiment in which some mice modeling amblyopia received TTX in their amblyopic eye and some did not. The injection took the retina offline for two days. After a week, the scientists then measured activity in neurons in the visual cortex to calculate a ratio of input from each eye. They found that the ratio was much more even in mice that received the treatment versus those left untreated, indicating that after the amblyopic eye was anesthetized, its input in the brain rose to be at parity with input from the non-amblyopic one.
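The kind of eye-input ratio described above can be illustrated with a toy calculation (the index and the numbers here are hypothetical illustrations, not values from the study):

```python
# Hypothetical eye-bias index (not the study's actual metric):
# +1 means cortical neurons are driven only by the non-amblyopic
# (fellow) eye, 0 means balanced input from both eyes.
def eye_bias(resp_fellow, resp_amblyopic):
    return (resp_fellow - resp_amblyopic) / (resp_fellow + resp_amblyopic)

# Illustrative responses: untreated mice stay strongly biased toward
# the fellow eye; after anesthetizing the amblyopic eye, input from
# that eye rises toward parity.
untreated = eye_bias(0.9, 0.3)
treated = eye_bias(0.6, 0.55)
print(untreated, treated)
```

A ratio near zero after treatment is what "at parity" means in the paragraph above.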
Further testing is needed, Bear notes, but the team wrote in the study that the results were encouraging.
“We are cautiously optimistic that these findings may lead to a new treatment approach for human amblyopia, particularly given the discovery that silencing the amblyopic eye is effective,” the scientists wrote.
In addition to Leet and Bear, the paper’s authors are Tushar Chauhan, Teresa Cramer, and Ming-fai Fong.
The National Institutes of Health, the Swiss National Science Foundation, the Severin Hacker Vision Research Fund, and the Freedom Together Foundation supported the study.
When it comes to language, context matters
MIT researchers identified three cognitive skills that we use to infer what someone really means.
In everyday conversation, it’s critical to understand not just the words that are spoken, but the context in which they are said. If it’s pouring rain and someone remarks on the “lovely weather,” you won’t understand their meaning unless you realize that they’re being sarcastic.
Making inferences about what someone really means when it doesn’t match the literal meaning of their words is a skill known as pragmatic language ability. This includes not only interpreting sarcasm but also understanding metaphors and white lies, among many other conversational subtleties.
“Pragmatics is trying to reason about why somebody might say something, and what is the message they’re trying to convey given that they put it in this particular way,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.
New research from Fedorenko and her colleagues has revealed that these abilities can be grouped together based on what types of inferences they require. In a study of 800 people, the researchers identified three clusters of pragmatic skills that are based on the same kinds of inferences and may have similar underlying neural processes.
One of these clusters includes inferences that are based on our knowledge of social conventions and rules. Another depends on knowledge of how the physical world works, while the last requires the ability to interpret differences in tone, which can indicate emphasis or emotion.
Fedorenko and Edward Gibson, an MIT professor of brain and cognitive sciences, are the senior authors of the study, which appears today in the Proceedings of the National Academy of Sciences. The paper’s lead authors are Sammy Floyd, a former MIT postdoc who is now an assistant professor of psychology at Sarah Lawrence College, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor of cognitive science at Carleton University.
The importance of context
Much past research on how people understand language has focused on processing the literal meanings of words and how they fit together. To really understand what someone is saying, however, we need to interpret those meanings based on context.
“Language is about getting meanings across, and that often requires taking into account many different kinds of information — such as the social context, the visual context, or the present topic of the conversation,” Fedorenko says.
As one example, the phrase “people are leaving” can mean different things depending on the context, Gibson points out. If it’s late at night and someone asks you how a party is going, you may say “people are leaving,” to convey that the party is ending and everyone’s going home.
“However, if it’s early, and I say ‘people are leaving,’ then the implication is that the party isn’t very good,” Gibson says. “When you say a sentence, there’s a literal meaning to it, but how you interpret that literal meaning depends on the context.”
About 10 years ago, with support from the Simons Center for the Social Brain at MIT, Fedorenko and Gibson decided to explore whether it might be possible to precisely distinguish the types of processing that go into pragmatic language skills.
One way that neuroscientists can approach a question like this is to use functional magnetic resonance imaging (fMRI) to scan the brains of participants as they perform different tasks. This allows them to link brain activity in different locations to different functions. However, the tasks that the researchers designed for this study didn’t easily lend themselves to being performed in a scanner, so they took an alternative approach.
This approach, known as “individual differences,” involves studying a large number of people as they perform a variety of tasks. This technique allows researchers to determine whether the same underlying brain processes may be responsible for performance on different tasks.
To do this, the researchers evaluate whether each participant tends to perform similarly on certain groups of tasks. For example, some people might perform well on tasks that require an understanding of social conventions, such as interpreting indirect requests and irony. The same people might do only so-so on tasks that require understanding how the physical world works, and poorly on tasks that require distinguishing meanings based on changes in intonation — the melody of speech. This would suggest that separate brain processes are being recruited for each set of tasks.
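The individual-differences logic above can be sketched with simulated data (a hypothetical illustration, not the study's actual analysis or tasks): if a few latent abilities drive performance, tasks tapping the same ability correlate strongly across participants, while tasks tapping different abilities do not — and that correlation structure is what reveals the clusters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # simulated participants

# Three hypothetical latent abilities per participant
social = rng.normal(size=n)
world = rng.normal(size=n)
intonation = rng.normal(size=n)

# Each task score loads mostly on one latent ability, plus noise
# (task names are illustrative, not the study's battery)
task_abilities = [social, social, world, world, intonation, intonation]
scores = np.column_stack(
    [a + 0.5 * rng.normal(size=n) for a in task_abilities]
)

# Task-by-task correlation matrix: tasks sharing a latent ability
# correlate strongly (~0.8 here); unrelated tasks hover near zero
corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr, 2))
```

Grouping tasks by blocks of high correlation in such a matrix recovers the underlying abilities, which is the essence of the clustering the researchers performed on real participants.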
The first phase of the study was led by Jouravlev, who assembled existing tasks that require pragmatic skills and created many more, for a total of 20. These included tasks that require people to understand humor and sarcasm, as well as tasks where changes in intonation can affect the meaning of a sentence. For example, someone who says “I wanted blue and black socks,” with emphasis on the word “black,” is implying that the black socks were forgotten.
“People really find ways to communicate creatively and indirectly and non-literally, and this battery of tasks captures that,” Floyd says.
Components of pragmatic ability
The researchers recruited study participants from an online crowdsourcing platform to perform the tasks, which took about eight hours to complete. From this first set of 400 participants, the researchers found that the tasks formed three clusters, related to social context, general knowledge of the world, and intonation. To test the robustness of the findings, the researchers continued the study with another set of 400 participants, with this second half run by Floyd after Jouravlev had left MIT.
With the second set of participants, the researchers found that tasks clustered into the same three groups. They also confirmed that differences in general intelligence, or in auditory processing ability (which is important for the processing of intonation), did not affect the outcomes that they observed.
In future work, the researchers hope to use brain imaging to explore whether the pragmatic components they identified are correlated with activity in different brain regions. Previous work has found that brain imaging often mirrors the distinctions identified in individual difference studies, but can also help link the relevant abilities to specific neural systems, such as the core language system or the theory of mind system.
This set of tests could also be used to study people with autism, who sometimes have difficulty understanding certain social cues. Such studies could determine more precisely the nature and extent of these difficulties. Another possibility could be studying people who were raised in different cultures, which may have different norms around speaking directly or indirectly.
“In Russian, which happens to be my native language, people are more direct. So perhaps there might be some differences in how native speakers of Russian process indirect requests compared to speakers of English,” Jouravlev says.
The research was funded by the Simons Center for the Social Brain at MIT, the National Institutes of Health, and the National Science Foundation.
Too sick to socialize: How the brain and immune system promote staying in bed
MIT researchers discover how an immune system molecule triggers neurons to shut down social behavior in mice modeling infection.
“I just can’t make it tonight. You have fun without me.” Across much of the animal kingdom, when infection strikes, social contact shuts down. A new study details how the immune and central nervous systems implement this sickness behavior.
It makes perfect sense that when we’re battling an infection, we lose our desire to be around others. That protects others from getting sick and lets us get much-needed rest. What hasn’t been as clear is how this behavior change happens.
In new research published Nov. 25 in Cell, scientists at MIT’s Picower Institute for Learning and Memory and collaborators used multiple methods to demonstrate causally that when the immune system cytokine interleukin-1 beta (IL-1β) reaches IL-1 receptor 1 (IL-1R1) on neurons in a brain region called the dorsal raphe nucleus, it activates their connections with the intermediate lateral septum to shut down social behavior.
“Our findings show that social isolation following immune challenge is self-imposed and driven by an active neural process, rather than a secondary consequence of physiological symptoms of sickness, such as lethargy,” says study co-senior author Gloria Choi, associate professor in MIT’s Department of Brain and Cognitive Sciences and a member of the Picower Institute.
Jun Huh, Harvard Medical School associate professor of immunology, is the paper’s co-senior author. The lead author is Liu Yang, a research scientist in Choi’s lab.
A molecule and its receptor
Choi and Huh’s long collaboration has identified other cytokines that affect social behavior by latching on to their receptors in the brain, so in this study their team hypothesized that the same kind of dynamic might cause social withdrawal during infection. But which cytokine? And what brain circuits might be affected?
To get started, Yang and her colleagues injected 21 different cytokines into the brains of mice, one by one, to see if any triggered social withdrawal the same way that giving mice LPS (lipopolysaccharide, a standard way of simulating bacterial infection) did. Only IL-1β injection fully recapitulated the same social withdrawal behavior as LPS. That said, IL-1β also made the mice more sluggish.
IL-1β affects cells when it hooks up with the IL-1R1, so the team next went looking across the brain for where the receptor is expressed. They identified several regions and examined individual neurons in each. The dorsal raphe nucleus (DRN) stood out among regions, both because it is known to modulate social behavior and because it is situated next to the cerebral aqueduct, which would give it plenty of exposure to incoming cytokines in cerebrospinal fluid. The experiments identified populations of DRN neurons that express IL-1R1, including many involved in making the crucial neuromodulatory chemical serotonin.
From there, Yang and the team demonstrated that IL-1β activates those neurons, and that activating the neurons promotes social withdrawal. Moreover, they showed that inhibiting that neural activity prevented social withdrawal in mice treated with IL-1β, and they showed that shutting down the IL-1R1 in the DRN neurons also prevented social withdrawal behavior after IL-1β injection or LPS exposure. Notably, these experiments did not change the lethargy that followed IL-1β or LPS, helping to demonstrate that social withdrawal and lethargy occur through different means.
“Our findings implicate IL-1β as a primary effector driving social withdrawal during systemic immune activation,” the researchers wrote in Cell.
Tracing the circuit
With the DRN identified as the site where neurons receiving IL-1β drove social withdrawal, the next question was which circuit carried that behavior change. The team traced where the neurons make their circuit projections and found several regions that have a known role in social behavior. Using optogenetics, a technology that engineers cells to become controllable with flashes of light, the scientists were able to activate the DRN neurons’ connections with each downstream region. Only activating the DRN’s connections with the intermediate lateral septum caused the social withdrawal behaviors seen with IL-1β injection or LPS exposure.
In a final test, they replicated their results by exposing some mice to salmonella.
“Collectively, these results reveal a role for IL-1R1-expressing DRN neurons in mediating social withdrawal in response to IL-1β during systemic immune challenge,” the researchers wrote.
Although the study revealed the cytokine, neurons, and circuit responsible for social withdrawal in mice in detail and with demonstrations of causality, the results still inspire new questions. One is whether IL-1R1 neurons affect other sickness behaviors. Another is whether serotonin has a role in social withdrawal or other sickness behaviors.
In addition to Yang, Choi, and Huh, the paper’s other authors are Matias Andina, Mario Witkowski, Hunter King, and Ian Wickersham.
Funding for the research came from the National Institute of Mental Health, the National Research Foundation of Korea, the Denis A. and Eugene W. Chinery Fund for Neurodevelopmental Research, the Jeongho Kim Neurodevelopmental Research Fund, Perry Ha, the Simons Center for the Social Brain, the Simons Foundation Autism Research Initiative, The Picower Institute for Learning and Memory, and The Freedom Together Foundation.
When it comes to brain function, neurons get a lot of the glory. But healthy brains depend on the cooperation of many kinds of cells. The most abundant of the brain’s non-neuronal cells are astrocytes, star-shaped cells with a lot of responsibilities. Astrocytes help shape neural circuits, participate in information processing, and provide nutrient and metabolic support to neurons. Individual cells can take on new roles throughout their lifetimes, and at any given time, the astrocytes in one part of the brain will look and behave differently than the astrocytes somewhere else.
After an extensive analysis by researchers at MIT, neuroscientists now have an atlas detailing astrocytes’ dynamic diversity. Its maps depict the regional specialization of astrocytes across the brains of both mice and marmosets — two powerful models for neuroscience research — and show how their populations shift as brains develop, mature, and age.
The open-access study, reported in the Nov. 20 issue of the journal Neuron, was led by Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT. This work was supported by the Hock E. Tan and K. Lisa Yang Center for Autism Research, part of the Yang Tan Collective at MIT, and the National Institutes of Health’s BRAIN Initiative.
“It’s really important for us to pay attention to non-neuronal cells’ role in health and disease,” says Feng, who is also the associate director of the McGovern Institute for Brain Research and the director of the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT. And indeed, these cells — once seen as mere supporting players — have gained more of the spotlight in recent years. Astrocytes are known to play vital roles in the brain’s development and function, and their dysfunction seems to contribute to many psychiatric disorders and neurodegenerative diseases. “But compared to neurons, we know a lot less — especially during development,” Feng adds.
Probing the unknown
Feng and Margaret Schroeder, a former graduate student in his lab, thought it was important to understand astrocyte diversity across three axes: space, time, and species. They knew from earlier work in the lab, done in collaboration with Steve McCarroll’s lab at Harvard University and led by Fenna Krienen in his group, that in adult animals, different parts of the brain have distinctive sets of astrocytes.
“The natural question was, how early in development do we think this regional patterning of astrocytes starts?” Schroeder says.
To find out, she and her colleagues collected brain cells from mice and marmosets at six stages of life, spanning embryonic development to old age. For each animal, they sampled cells from four different brain regions: the prefrontal cortex, the motor cortex, the striatum, and the thalamus.
Then, working with Krienen, who is now an assistant professor at Princeton University, they analyzed the molecular contents of those cells, creating a profile of genetic activity for each one. That profile was based on the mRNA copies of genes found inside the cell, which are known collectively as the cell’s transcriptome. Determining which genes a cell is using, and how active those genes are, gives researchers insight into a cell’s function and is one way of defining its identity.
Dynamic diversity
After assessing the transcriptomes of about 1.4 million brain cells, the group focused in on the astrocytes, analyzing and comparing their patterns of gene expression. At every life stage, from before birth to old age, the team found regional specialization: astrocytes from different brain regions had similar patterns of gene expression, which were distinct from those of astrocytes in other brain regions.
This regional specialization was also apparent in the distinct shapes of astrocytes in different parts of the brain, which the team was able to see with expansion microscopy, a high-resolution imaging method developed by McGovern colleague Edward Boyden that reveals fine cellular features.
Notably, the astrocytes in each region changed as animals matured. “When we looked at our late embryonic time point, the astrocytes were already regionally patterned. But when we compare that to the adult profiles, they had completely shifted again,” Schroeder says. “So there’s something happening over postnatal development.” The most dramatic changes the team detected occurred between birth and early adolescence, a period during which brains rapidly rewire as animals begin to interact with the world and learn from their experiences.
Feng and Schroeder suspect that the changes they observed may be driven by the neural circuits that are sculpted and refined as the brain matures. “What we think they’re doing is kind of adapting to their local neuronal niche,” Schroeder says. “The types of genes that they are up-regulating and changing during development point to their interaction with neurons.” Feng adds that astrocytes may change their genetic programs in response to nearby neurons, or alternatively, they might help direct the development or function of local circuits as they adopt identities best suited to support particular neurons.
Both mouse and marmoset brains exhibited regional specialization of astrocytes and changes in those populations over time. But when the researchers looked at the specific genes whose activity defined various astrocyte populations, the data from the two species diverged. Schroeder calls this a note of caution for scientists who study astrocytes in animal models, and adds that the new atlas will help researchers assess the potential relevance of findings across species.
Beyond astrocytes
With a new understanding of astrocyte diversity, Feng says his team will pay close attention to how these cells are impacted by the disease-related genes they study and how those effects change during development. He also notes that the gene expression data in the atlas can be used to predict interactions between astrocytes and neurons. “This will really guide future experiments: how these cells’ interactions can shift with changes in the neurons or changes in the astrocytes,” he says.
The Feng lab is eager for other researchers to take advantage of the massive amounts of data they generated as they produced their atlas. Schroeder points out that the team analyzed the transcriptomes of all kinds of cells in the brain regions they studied, not just astrocytes. They are sharing their findings so researchers can use them to understand when and where specific genes are used in the brain, or dig in more deeply to further explore the brain’s cellular diversity.
When companies “go green,” air quality impacts can vary dramatically
Cutting air travel and purchasing renewable energy can lead to different effects on overall air quality, even while achieving the same CO2 reduction, new research shows.
Many organizations are taking actions to shrink their carbon footprint, such as purchasing electricity from renewable sources or reducing air travel.
Both actions would cut greenhouse gas emissions, but which offers greater societal benefits?
In a first step toward answering that question, MIT researchers found that even if each activity reduces the same amount of carbon dioxide emissions, the broader air quality impacts can be quite different.
They used a multifaceted modeling approach to quantify the air quality impacts of each activity, using data from three organizations. Their results indicate that air travel causes about three times more damage to air quality than comparable electricity purchases.
Exposure to major air pollutants, including ground-level ozone and fine particulate matter, can lead to cardiovascular and respiratory disease, and even premature death.
In addition, air quality impacts can vary dramatically across regions, because each decarbonization action influences pollution at a different scale. For example, for organizations in the northeast U.S., the air quality impacts of energy use are felt within the region, while the impacts of air travel are felt globally, because the associated pollutants are emitted at higher altitudes.
Ultimately, the researchers hope this work highlights how organizations can prioritize climate actions to provide the greatest near-term benefits to people’s health.
“If we are trying to get to net zero emissions, that trajectory could have very different implications for a lot of other things we care about, like air quality and health impacts. Here we’ve shown that, for the same net zero goal, you can have even more societal benefits if you figure out a smart way to structure your reductions,” says Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS); director of the Center for Sustainability Science and Strategy; and senior author of the study.
Selin is joined on the paper by lead author Yuang (Albert) Chen, an MIT graduate student; Florian Allroggen, a research scientist in the MIT Department of Aeronautics and Astronautics; Sebastian D. Eastham, an associate professor in the Department of Aeronautics at Imperial College London; Evan Gibney, an MIT graduate student; and William Clark, the Harvey Brooks Research Professor of International Science at Harvard University. The research was published Friday in Environmental Research Letters.
A quantification quandary
Climate scientists often focus on the air quality benefits of national or regional policies because the aggregate impacts are more straightforward to model.
Organizations’ efforts to “go green” are much harder to quantify because they exist within larger societal systems and are impacted by these national policies.
To tackle this challenging problem, the MIT researchers used data from two universities and one company in the greater Boston area. They studied whether organizational actions that remove the same amount of CO2 from the atmosphere would have an equivalent benefit on improving air quality.
“From a climate standpoint, CO2 has a global impact because it mixes through the atmosphere, no matter where it is emitted. But air quality impacts are driven by co-pollutants that act locally, so where those emissions occur really matters,” Chen says.
For instance, burning fossil fuels leads to emissions of nitrogen oxides and sulfur dioxide along with CO2. These co-pollutants react with chemicals in the atmosphere to form fine particulate matter and ground-level ozone, which is a primary component of smog.
Different fossil fuels cause varying amounts of co-pollutant emissions. In addition, local factors like weather and existing emissions affect the formation of smog and fine particulate matter. The impacts of these pollutants also depend on the local population distribution and overall health.
“You can’t just assume that all CO2-reduction strategies will have equivalent near-term impacts on sustainability. You have to consider all the other emissions that go along with that CO2,” Selin says.
The researchers used a systems-level approach that involved connecting multiple models. They fed the organizational energy consumption and flight data into this systems-level model to examine local and regional air quality impacts.
Their approach incorporated many interconnected elements, such as power plant emissions data, statistical linkages between air quality and mortality outcomes, and aviation emissions associated with specific flight routes. They fed those data into an atmospheric chemistry transport model to calculate air quality and climate impacts for each activity.
The sheer breadth of the system created many challenges.
“We had to do multiple sensitivity analyses to make sure the overall pipeline was working,” Chen says.
Analyzing air quality
Finally, the researchers monetized the air quality impacts so they could be compared with climate impacts in a consistent way. Based on prior literature, the monetized climate impacts of CO2 emissions are about $170 per ton (expressed in 2015 dollars), representing the financial cost of damages caused by climate change.
Using the same method to monetize air quality impacts, the researchers calculated that the damages associated with electricity purchases are an additional $88 per ton of CO2, while the damages from air travel are an additional $265 per ton.
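As a back-of-the-envelope check, the per-ton figures above can be combined in a short script. The dollar values come from the article; the function and variable names are illustrative, not from the study's own model:

```python
# Per-ton damage figures quoted in the article (2015 dollars).
CLIMATE_DAMAGE_PER_TON = 170.0  # $/ton CO2, climate damages (prior literature)

# Additional air quality damages per ton of CO2, by activity.
AIR_QUALITY_DAMAGE = {
    "electricity": 88.0,   # $/ton CO2
    "air_travel": 265.0,   # $/ton CO2
}

def total_damage_per_ton(activity: str) -> float:
    """Climate plus air quality damages for one ton of CO2 from an activity."""
    return CLIMATE_DAMAGE_PER_TON + AIR_QUALITY_DAMAGE[activity]

for activity in AIR_QUALITY_DAMAGE:
    print(f"{activity}: ${total_damage_per_ton(activity):.0f} per ton CO2")

# Air travel's air-quality damages are roughly 3x those of electricity,
# consistent with the "three times more damage" result above.
ratio = AIR_QUALITY_DAMAGE["air_travel"] / AIR_QUALITY_DAMAGE["electricity"]
print(f"air quality damage ratio: {ratio:.1f}")
```

This makes the key point concrete: two tons of CO2 that are identical for the climate can carry very different total societal costs depending on the activity that emitted them.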
This highlights how the air quality impacts of a ton of emitted CO2 depend strongly on where and how the emissions are produced.
“A real surprise was how much aviation impacted places that were really far from these organizations. Not only were flights more damaging, but the pattern of damage, in terms of who is harmed by air pollution from that activity, is very different than who is harmed by energy systems,” Selin says.
Most airplane emissions occur at high altitudes, where differences in atmospheric chemistry and transport can amplify their air quality impacts. These emissions are also carried across continents by atmospheric winds, affecting people thousands of miles from their source.
Nations like India and China face outsized air quality impacts from such emissions due to the higher level of existing ground-level emissions, which exacerbates the formation of fine particulate matter and smog.
The researchers also conducted a deeper analysis of short-haul flights. Their results showed that regional flights have a relatively larger impact on local air quality than longer domestic flights.
“If an organization is thinking about how to benefit the neighborhoods in their backyard, then reducing short-haul flights could be a strategy with real benefits,” Selin says.
Even in electricity purchases, the researchers found that location matters.
For instance, the fine particulate matter emissions from power plants attributable to one university fall over a densely populated region, while those attributable to the corporation fall over less populated areas.
Due to these population differences, the university’s emissions resulted in 16 percent more estimated premature deaths than those of the corporation, even though the climate impacts are identical.
“These results show that, if organizations want to achieve net zero emissions while promoting sustainability, which unit of CO2 gets removed first really matters a lot,” Chen says.
In the future, the researchers want to quantify the air quality and climate impacts of train travel, to see whether replacing short-haul flights with train trips could provide benefits.
They also want to explore the air quality impacts of other drivers of energy use in the U.S., such as data centers.
This research was funded, in part, by Biogen, Inc., the Italian Ministry for Environment, Land, and Sea, and the MIT Center for Sustainability Science and Strategy.
Alternate proteins from the same gene contribute differently to health and rare disease
New findings may help researchers identify genetic mutations that contribute to rare diseases, by studying when and how single genes produce multiple versions of proteins.
Around 25 million Americans have rare genetic diseases, and many of them struggle with not only a lack of effective treatments, but also a lack of good information about their disease. Clinicians may not know what causes a patient’s symptoms or how their disease will progress, or even have a clear diagnosis. Researchers have looked to the human genome for answers, and many disease-causing genetic mutations have been identified, but as many as 70 percent of patients still lack a clear genetic explanation.
In a paper published in Molecular Cell on Nov. 7, Whitehead Institute for Biomedical Research member Iain Cheeseman, graduate student Jimmy Ly, and colleagues propose that researchers and clinicians may be able to get more information from patients’ genomes by looking at them in a different way.
The common wisdom is that each gene codes for one protein. Someone studying whether a patient has a mutation or version of a gene that contributes to their disease will therefore look for mutations that affect the “known” protein product of that gene. However, Cheeseman and others are finding that the majority of genes code for more than one protein. That means that a mutation that might seem insignificant because it does not appear to affect the known protein could nonetheless alter a different protein made by the same gene. Now, Cheeseman and Ly have shown that mutations affecting one or multiple proteins from the same gene can contribute differently to disease.
In their paper, the researchers first share what they have learned about how cells make use of the ability to generate different versions of proteins from the same gene. Then, they examine how mutations that affect these proteins contribute to disease. Through a collaboration with co-author Mark Fleming, the pathologist-in-chief at Boston Children’s Hospital, they provide two case studies of patients with atypical presentations of a rare anemia linked to mutations that selectively affect only one of two proteins produced by the gene implicated in the disease.
“We hope this work demonstrates the importance of considering whether a gene of interest makes multiple versions of a protein, and what the role of each version is in health and disease,” Ly says. “This information could lead to better understanding of the biology of disease, better diagnostics, and perhaps one day to tailored therapies to treat these diseases.”
Cells have several ways to make different versions of a protein, but the variation that Cheeseman and Ly study happens during protein production from genetic code. Cellular machines build each protein according to the instructions within a genetic sequence that begins at a “start codon” and ends at a “stop codon.” However, some genetic sequences contain more than one start codon, many of them hiding in plain sight. If the cellular machinery skips the first start codon and detects a second one, it may build a shorter version of the protein. In other cases, the machinery may detect a section that closely resembles a start codon at a point earlier in the sequence than its typical starting place, and build a longer version of the protein.
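This alternative-start mechanism can be illustrated with a toy translation sketch. The sequence, the abbreviated codon table, and the function below are invented for illustration and are not taken from the study:

```python
# Minimal codon table: just enough amino acids for the example sequence.
CODON_TABLE = {
    "ATG": "M", "GCT": "A", "AAA": "K", "TGG": "W",
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}

def translate_from(seq: str, start: int) -> str:
    """Translate codons beginning at `start` until a stop codon is reached."""
    protein = []
    for i in range(start, len(seq) - 2, 3):
        amino_acid = CODON_TABLE.get(seq[i:i + 3], "?")
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

# This toy mRNA has two in-frame ATG start codons: beginning at the first
# yields a longer protein; skipping to the second yields a shorter variant
# that shares the same C-terminal end.
mrna = "ATGGCTATGAAATGGTAA"
starts = [i for i in range(0, len(mrna) - 2, 3) if mrna[i:i + 3] == "ATG"]
isoforms = [translate_from(mrna, s) for s in starts]
print(isoforms)  # ['MAMKW', 'MKW']
```

The real cellular decision of which start codon the ribosome uses is far more nuanced (and near-cognate starts need not be ATG), but the sketch shows how a single sequence can legitimately encode proteins of different lengths.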
These events may sound like mistakes: the cell’s machinery accidentally creating the wrong version of the correct protein. To the contrary, protein production from these alternate starting places is an important feature of cell biology that exists across species. When Ly traced when certain genes evolved to produce multiple proteins, he found that this is a common, robust process that has been preserved throughout evolutionary history for millions of years.
Ly shows that one function this serves is to send versions of a protein to different parts of the cell. Many proteins contain ZIP code-like sequences that tell the cell’s machinery where to deliver them so the proteins can do their jobs. Ly found many examples in which longer and shorter versions of the same protein contained different ZIP codes and ended up in different places within the cell.
In particular, Ly found many cases in which one version of a protein ended up in mitochondria, structures that provide energy to cells, while another version ended up elsewhere. Because of the mitochondria’s role in the essential process of energy production, mutations to mitochondrial genes are often implicated in disease.
Ly wondered what would happen when a disease-causing mutation eliminates one version of a protein but leaves the other intact, causing the protein to only reach one of its two intended destinations. He looked through a database containing genetic information from people with rare diseases to see if such cases existed, and found that they did. In fact, there may be tens of thousands of such cases. However, without access to the people, Ly had no way of knowing what the consequences of this were in terms of symptoms and severity of disease.
Meanwhile, Cheeseman, who is also a professor of biology at MIT, had begun working with Boston Children’s Hospital to foster collaborations between Whitehead Institute and the hospital’s researchers and clinicians to accelerate the pathway from research discovery to clinical application. Through these efforts, Cheeseman and Ly met Fleming.
One group of Fleming’s patients have a type of anemia called SIFD — sideroblastic anemia with B-cell immunodeficiency, periodic fevers, and developmental delay — that is caused by mutations to the TRNT1 gene. TRNT1 is one of the genes Ly had identified as producing a mitochondrial version of its protein and another version that ends up elsewhere: in the nucleus.
Fleming shared anonymized patient data with Ly, and Ly found two cases of interest in the genetic data. Most of the patients had mutations that impaired both versions of the protein, but one patient had a mutation that eliminated only the mitochondrial version of the protein, while another patient had a mutation that eliminated only the nuclear version.
When Ly shared his results, Fleming revealed that both of those patients had very atypical presentations of SIFD, supporting Ly’s hypothesis that mutations affecting different versions of a protein would have different consequences. The patient who only had the mitochondrial version was anemic, but developmentally normal. The patient missing the mitochondrial version of the protein did not have developmental delays or chronic anemia, but did have other immune symptoms, and was not correctly diagnosed until his 50s. There are likely other factors contributing to each patient’s exact presentation of the disease, but Ly’s work begins to unravel the mystery of their atypical symptoms.
Cheeseman and Ly want to make more clinicians aware of the prevalence of genes coding for more than one protein, so they know to check for mutations affecting any of the protein versions that could contribute to disease. For example, several TRNT1 mutations that only eliminate the shorter version of the protein are not flagged as disease-causing by current assessment tools. Cheeseman lab researchers, including Ly and graduate student Matteo Di Bernardo, are now developing a new assessment tool for clinicians, called SwissIsoform, that will identify relevant mutations that affect specific protein versions, including mutations that would otherwise be missed.
“Jimmy and Iain’s work will globally support genetic disease variant interpretation and help with connecting genetic differences to variation in disease symptoms,” Fleming says. “In fact, we have recently identified two other patients with mutations affecting only the mitochondrial versions of two other proteins, who similarly have milder symptoms than patients with mutations that affect both versions.”
Long term, the researchers hope that their discoveries could aid in understanding the molecular basis of disease and in developing new gene therapies: Once researchers understand what has gone wrong within a cell to cause disease, they are better equipped to devise a solution. More immediately, the researchers hope that their work will make a difference by providing better information to clinicians and people with rare diseases.
“As a basic researcher who doesn’t typically interact with patients, there’s something very satisfying about knowing that the work you are doing is helping specific people,” Cheeseman says. “As my lab transitions to this new focus, I’ve heard many stories from people trying to navigate a rare disease and just get answers, and that has been really motivating to us, as we work to provide new insights into the disease biology.”
With every step we take, our brains are already thinking about the next one. If a bump in the terrain or a minor misstep has thrown us off balance, our stride may need to be altered to prevent a fall. Our two-legged posture makes maintaining stability particularly complex, which our brains solve in part by continually monitoring our bodies and adjusting where we place our feet.
Now, scientists at MIT have determined that animals with very different bodies likely use a shared strategy to balance themselves when they walk.
Nidhi Seethapathi, the Frederick A. and Carole J. Middleton Career Development Assistant Professor in Brain and Cognitive Sciences and Electrical Engineering and Computer Science at MIT, and K. Lisa Yang ICoN Center Fellow Antoine De Comite found that humans, mice, and fruit flies all use an error-correction process to guide foot placement and maintain stability while walking. Their findings, published Oct. 21 in the journal PNAS, could inform future studies exploring how the brain achieves stability during locomotion — bridging the gap between animal models and human balance.
Corrective action
The brain must integrate information to keep us upright when we walk or run. Our steps must be continually adjusted according to the terrain, our desired speed, and our body’s current velocity and position in space.
“We rely on a combination of vestibular, proprioceptive, and visual information to build an estimate of our body’s state, determining if we are about to fall. Once we know the body’s state, we can decide which corrective actions to take,” explains Seethapathi, who is also an associate investigator at the McGovern Institute for Brain Research.
While humans are known to adjust where they place their feet to correct for errors, it is not known whether animals whose bodies are more stable do this, too.
To find out, Seethapathi and De Comite, who is a postdoc in Seethapathi’s and Guoping Feng's lab at the McGovern Institute, turned to locomotion data from mice, fruit flies, and humans shared by other labs, enabling an analysis across species that is otherwise challenging. Importantly, Seethapathi notes, all the animals they studied were walking in everyday natural environments, such as around a room — not on a treadmill or over unusual terrain.
Even in these ordinary circumstances, missteps and minor imbalances are common, and the team’s analysis showed that these errors predicted where all of the animals placed their feet in subsequent steps, regardless of whether they had two, four, or six legs.
One foot in front of another
By tracking the animals’ bodies and the step-by-step placement of their feet, Seethapathi and De Comite were able to find a measure of error that informs each animal’s next step. “By taking this comparative approach, we’ve forced ourselves to come up with a definition of error that generalizes across species,” Seethapathi says. “An animal moves with an expected body state for a particular speed. If it deviates from that ideal state, that deviation — at any given moment — is the error.”
“It was surprising to find similarities across these three species, which, at first sight, look very different,” says De Comite. “The methods themselves are surprising because we now have a pipeline to analyze foot placement and locomotion stability in any legged species, which could lead to similar analyses in even more species in the future.”
The team’s data suggest that in all of the species in the study, placement of the feet is guided both by an error-correction process and the speed at which an animal is traveling. Steps tend to lengthen and feet spend less time on the ground as animals pick up their pace, while the width of each step seems to change largely to compensate for body-state errors.
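As a rough illustration of how such an error-correction rule can be quantified, one can fit a linear map from body-state error to step placement on synthetic data. This is only a sketch of the general idea, with invented numbers, not the study's actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic body-state errors at each step: deviations of (position,
# velocity) from the speed-appropriate average state.
n_steps = 500
errors = rng.normal(size=(n_steps, 2))

# Suppose step width responds linearly to both error components, with
# some noise; the gains here are arbitrary illustrative values.
true_gains = np.array([0.6, 0.3])
step_width = errors @ true_gains + 0.05 * rng.normal(size=n_steps)

# A least-squares fit recovers the error-to-foot-placement gains,
# analogous to asking how strongly past errors predict the next step.
gains, *_ = np.linalg.lstsq(errors, step_width, rcond=None)
print("estimated gains:", gains)  # close to [0.6, 0.3]
```

If foot placement were not guided by body-state error, the fitted gains would hover near zero; large, consistent gains across species are the kind of signature the comparative analysis looks for.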
Now, Seethapathi says, we can look forward to future studies to explore how the dual control systems might be generated and integrated in the brain to keep moving bodies stable.
Studying how brains help animals move stably may also guide the development of more-targeted strategies to help people improve their balance and, ultimately, prevent falls.
“In elderly individuals and individuals with sensorimotor disorders, minimizing fall risk is one of the major functional targets of rehabilitation,” says Seethapathi. “A fundamental understanding of the error correction process that helps us remain stable will provide insight into why this process falls short in populations with neural deficits,” she says.
MIT chemists synthesize a fungal compound that holds promise for treating brain cancer
Preliminary studies find derivatives of the compound, known as verticillin A, can kill some types of glioma cells.
For the first time, MIT chemists have synthesized a fungal compound known as verticillin A, which was discovered more than 50 years ago and has shown potential as an anticancer agent.
The compound has a complex structure that made it more difficult to synthesize than related compounds, even though it differs from them by only a couple of atoms.
“We have a much better appreciation for how those subtle structural changes can significantly increase the synthetic challenge,” says Mohammad Movassaghi, an MIT professor of chemistry. “Now we have the technology where we can not only access them for the first time, more than 50 years after they were isolated, but also we can make many designed variants, which can enable further detailed studies.”
In tests in human cancer cells, a derivative of verticillin A showed particular promise against a type of pediatric brain cancer called diffuse midline glioma. More tests will be needed to evaluate its potential for clinical use, the researchers say.
Movassaghi and Jun Qi, an associate professor of medicine at Dana-Farber Cancer Institute/Boston Children’s Cancer and Blood Disorders Center and Harvard Medical School, are the senior authors of the study, which appears today in the Journal of the American Chemical Society. Walker Knauss PhD ’24 is the lead author of the paper. Xiuqi Wang, a medicinal chemist and chemical biologist at Dana-Farber, and Mariella Filbin, research director in the Pediatric Neurology-Oncology Program at Dana-Farber/Boston Children’s Cancer and Blood Disorders Center, are also authors of the study.
A complex synthesis
Researchers first reported the isolation of verticillin A from fungi, which use it for protection against pathogens, in 1970. Verticillin A and related fungal compounds have drawn interest for their potential anticancer and antimicrobial activity, but their complexity has made them difficult to synthesize.
In 2009, Movassaghi’s lab reported the synthesis of (+)-11,11'-dideoxyverticillin A, a fungal compound similar to verticillin A. That molecule has 10 rings and eight stereogenic centers, or carbon atoms that have four different chemical groups attached to them. These groups have to be attached in a way that ensures they have the correct orientation, or stereochemistry, with respect to the rest of the molecule.
Once that synthesis was achieved, however, synthesis of verticillin A remained challenging, even though the only difference between verticillin A and (+)-11,11'-dideoxyverticillin A is the presence of two oxygen atoms.
“Those two oxygens greatly limit the window of opportunity that you have in terms of doing chemical transformations,” Movassaghi says. “It makes the compound so much more fragile, so much more sensitive, so that even though we had had years of methodological advances, the compound continued to pose a challenge for us.”
Both of the verticillin A compounds consist of two identical fragments that must be joined together to form a molecule called a dimer. To create (+)-11,11'-dideoxyverticillin A, the researchers had performed the dimerization reaction near the end of the synthesis, then added four critical carbon-sulfur bonds.
Yet when trying to synthesize verticillin A, the researchers found that waiting to add those carbon-sulfur bonds at the end did not result in the correct stereochemistry. As a result, the researchers had to rethink their approach and ended up creating a very different synthetic sequence.
“What we learned was the timing of the events is absolutely critical. We had to significantly change the order of the bond-forming events,” Movassaghi says.
The verticillin A synthesis begins with an amino acid derivative known as beta-hydroxytryptophan, and then step-by-step, the researchers add a variety of chemical functional groups, including alcohols, ketones, and amides, in a way that ensures the correct stereochemistry.
A functional group containing two carbon-sulfur bonds and a disulfide bond was introduced early on to help control the stereochemistry of the molecule, but the sensitive disulfides had to be “masked” and protected as a pair of sulfides to prevent them from breaking down during subsequent chemical reactions. The disulfide-containing groups were then regenerated after the dimerization reaction.
“This particular dimerization really stands out in terms of the complexity of the substrates that we’re bringing together, which have such a dense array of functional groups and stereochemistry,” Movassaghi says.
The overall synthesis requires 16 steps from the beta-hydroxytryptophan starting material to verticillin A.
Killing cancer cells
Once the researchers had successfully completed the synthesis, they were also able to tweak it to generate derivatives of verticillin A. Researchers at Dana-Farber then tested these compounds against several types of diffuse midline glioma (DMG), a rare brain tumor that has few treatment options.
The researchers found that the DMG cell lines most susceptible to these compounds were those that have high levels of a protein called EZHIP. This protein, which plays a role in the methylation of DNA, has been previously identified as a potential drug target for DMG.
“Identifying the potential targets of these compounds will play a critical role in further understanding their mechanism of action, and more importantly, will help optimize the compounds from the Movassaghi lab to be more target specific for novel therapy development,” Qi says.
The verticillin derivatives appear to interact with EZHIP in a way that increases DNA methylation, which induces the cancer cells to undergo programmed cell death. The compounds that were most successful at killing these cells were N-sulfonylated (+)-11,11'-dideoxyverticillin A and N-sulfonylated verticillin A. N-sulfonylation — the addition of a functional group containing sulfur and oxygen — makes the molecules more stable.
“The natural product itself is not the most potent, but it’s the natural product synthesis that brought us to a point where we can make these derivatives and study them,” Movassaghi says.
The Dana-Farber team is now working on further validating the mechanism of action of the verticillin derivatives, and they also hope to begin testing the compounds in animal models of pediatric brain cancers.
“Natural compounds have been valuable resources for drug discovery, and we will fully evaluate the therapeutic potential of these molecules by integrating our expertise in chemistry, chemical biology, cancer biology, and patient care. We have also profiled our lead molecules in more than 800 cancer cell lines, and will be able to understand their functions more broadly in other cancers,” Qi says.
The research was funded by the National Institute of General Medical Sciences, the Ependymoma Research Foundation, and the Curing Kids Cancer Foundation.
Inaugural UROP mixer draws hundreds of students eager to gain research experience
The Institute will commit up to $1 million in new funding to increase the supply of UROPs.
More than 600 undergraduate students crowded into the Stratton Student Center on Oct. 28 for MIT’s first-ever Institute-wide Undergraduate Research Opportunities Program (UROP) mixer.
“At MIT, we believe in the transformative power of learning by doing, and there’s no better example than UROP,” says MIT President Sally Kornbluth, who attended the mixer with Provost Anantha Chandrakasan and Chancellor Melissa Nobles. “The energy at the inaugural UROP mixer was exhilarating, and I’m delighted that students now have this easy way to explore different paths to the frontiers of research.”
The event gave students the chance to explore internships and undergraduate research opportunities — in fields ranging from artificial intelligence to the life sciences to the arts, and beyond — all in one place, with approximately 150 researchers from labs available to discuss the projects and answer questions in real time. The offices of the Chancellor and Provost co-hosted the event, which the UROP office helped coordinate.
First-year student Isabell Luo recently began a UROP project in the Living Matter lab led by Professor Rafael Gómez-Bombarelli, where she is benchmarking machine-learned interatomic potentials that simulate chemical reactions at the molecular level and exploring fine-tuning strategies to improve their accuracy. She’s passionate about AI and machine learning, eco-friendly design, and entrepreneurship, and was attending the UROP mixer to find more “real-world” projects to work on.
“I’m trying to dip my toes into different areas, which is why I’m at the mixer,” said Luo. “On the internet it would be so hard to find the right opportunities. It’s nice to have a physical space and speak to people from so many disciplines.”
More than nine out of every 10 members of MIT’s class of 2025 took part in a UROP before graduating. In recent years, approximately 3,200 undergraduates have participated in a UROP project each year. To meet the strong demand for UROPs, the Institute will commit up to $1 million in funding this year to create more of them. The funding will come from MIT’s schools and Office of the Provost.
“UROPs have become an indispensable part of the MIT undergraduate education, providing hands-on experience that really helps students learn new ways to problem-solve and innovate,” says Chandrakasan. “I was thrilled to see so many students at the mixer — it was a testament to their willingness to roll up their sleeves and get to work on really tough challenges.”
Arielle Berman, a postdoc in the Raman Lab, was looking to recruit an undergraduate researcher for a project on sensor integration for muscle actuators for biohybrid robots — robots that include living parts. She spoke about how her own research experience as an undergraduate had shaped her career.
“It’s a really important event because we’re able to expose undergraduates to research,” says Berman. “I’m the first PhD in my family, so I wasn’t aware that research existed, or could be a career. Working in a research lab as an undergraduate student changed my life trajectory, and I’m happy to pass it forward and help students have experiences they wouldn’t have otherwise.”
The event drew students with interests as varied as the projects available. First-year Nate Black, who plans to major in mechanical engineering, said, “I just wanted something to develop my interest in 3D printing and additive manufacturing.” First-year Akpandu Ekezie, who expects to major in Course 6-5 (Electrical Engineering with Computing), was interested in photonic circuits. “I’m looking mainly for EE-related things that are more hands-on,” he explained. “I want to get more physical experience.”
Nobles has a message for students considering a UROP project: Just go for it. “There’s a UROP for every student, regardless of experience,” she says. “Find something that excites you and give it a try.” She encourages students who weren’t able to attend the mixer, as well as those who did attend but still have questions, to get in touch with the UROP office.
First-year students Ruby Mykkanen and Aditi Deshpande attended the mixer together. Both were searching for UROP projects they could work on during Independent Activities Period in January. Deshpande also noted that the mixer was helpful for understanding “what research is being done at MIT.”
Said Mykkanen, “It’s fun to have it all in one place!”
Scientists get a first look at the innermost region of a white dwarf system
X-ray observations reveal surprising features of the dying star’s most energetic environment.
Some 200 light years from Earth, the core of a dead star is circling a larger star in a macabre cosmic dance. The dead star is a type of white dwarf that exerts a powerful magnetic field as it pulls material from the larger star into a swirling, accreting disk. The spiraling pair is what’s known as an “intermediate polar” — a type of star system that gives off a complex pattern of intense radiation, including X-rays, as gas from the larger star falls onto the other one.
Now, MIT astronomers have used an X-ray telescope in space to identify key features in the system’s innermost region — an extremely energetic environment that has been inaccessible to most telescopes until now. In an open-access study published in the Astrophysical Journal, the team reports using NASA’s Imaging X-ray Polarimetry Explorer (IXPE) to observe the intermediate polar, known as EX Hydrae.
The team found a surprisingly high degree of X-ray polarization, which describes the direction of an X-ray wave’s electric field, as well as an unexpected direction of polarization in the X-rays coming from EX Hydrae. From these measurements, the researchers traced the X-rays back to their source in the system’s innermost region, close to the surface of the white dwarf.
What’s more, they determined that the system’s X-rays were emitted from a column of white-hot material that the white dwarf was pulling in from its companion star. They estimate that this column is about 2,000 miles high — about half the radius of the white dwarf itself and much taller than what physicists had predicted for such a system. They also determined that the X-rays are reflected off the white dwarf’s surface before scattering into space — an effect that physicists suspected but hadn’t confirmed until now.
The team’s results demonstrate that X-ray polarimetry can be an effective way to study extreme stellar environments such as the most energetic regions of an accreting white dwarf.
“We showed that X-ray polarimetry can be used to make detailed measurements of the white dwarf’s accretion geometry,” says Sean Gunderson, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research, who is the study’s lead author. “It opens the window into the possibility of making similar measurements of other types of accreting white dwarfs that also have never had predicted X-ray polarization signals.”
Gunderson’s MIT Kavli co-authors include graduate student Swati Ravi and research scientists Herman Marshall and David Huenemoerder, along with Dustin Swarm of the University of Iowa, Richard Ignace of East Tennessee State University, Yael Nazé of the University of Liège, and Pragati Pradhan of Embry-Riddle Aeronautical University.
A high-energy fountain
All forms of light, including X-rays, are influenced by electric and magnetic fields. Light travels in waves that wiggle, or oscillate, at right angles to the direction in which the light is traveling. External electric and magnetic fields can pull these oscillations in random directions. But when light interacts with and bounces off a surface, it can become polarized, meaning that its vibrations tighten up in one direction. Polarized light, then, can be a way for scientists to trace the source of the light and discern some details about the source’s geometry.
The IXPE space observatory is NASA’s first mission designed to study polarized X-rays that are emitted by extreme astrophysical objects. The spacecraft, which launched in 2021, orbits the Earth and records these polarized X-rays. Since launch, it has primarily focused on supernovae, black holes, and neutron stars.
The new MIT study is the first to use IXPE to measure polarized X-rays from an intermediate polar — a smaller system than black holes and supernovae that is nevertheless known to be a strong emitter of X-rays.
“We started talking about how much polarization would be useful to get an idea of what’s happening in these types of systems, which most telescopes see as just a dot in their field of view,” Marshall says.
An intermediate polar gets its name from the strength of the central white dwarf’s magnetic field. When this field is strong, the material from the companion star is directly pulled toward the white dwarf’s magnetic poles. When the field is very weak, the stellar material instead swirls around the dwarf in an accretion disk that eventually deposits matter directly onto the dwarf’s surface.
In the case of an intermediate polar, physicists predict that material should fall in a complex sort of in-between pattern, forming an accretion disk that also gets pulled toward the white dwarf’s poles. The magnetic field should lift the disk of incoming material far upward, like a high-energy fountain, before the stellar debris falls toward the white dwarf’s magnetic poles, at speeds of millions of miles per hour, in what astronomers refer to as an “accretion curtain.” Physicists suspect that this falling material should run up against previously lifted material that is still falling toward the poles, creating a sort of traffic jam of gas. This pile-up of matter forms a column of colliding gas that is tens of millions of degrees Fahrenheit and should emit high-energy X-rays.
An innermost picture
By measuring any polarized X-rays emitted by EX Hydrae, the team aimed to test the picture of intermediate polars that physicists had hypothesized. In January 2025, IXPE took a total of about 600,000 seconds, or about seven days’ worth, of X-ray measurements from the system.
“With every X-ray that comes in from the source, you can measure the polarization direction,” Marshall explains. “You collect a lot of these, and they’re all at different angles and directions which you can average to get a preferred degree and direction of the polarization.”
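The averaging Marshall describes can be sketched numerically. The snippet below is a hedged illustration, not the IXPE analysis pipeline: it draws synthetic per-photon angles from a weakly modulated distribution (the modulation amplitude, true angle, and sample size are all invented) and then recovers a preferred degree and direction by Stokes-style averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-photon angles (radians): rejection-sample a weakly
# modulated distribution, mimicking partially polarized X-rays with a
# 10 percent modulation amplitude and a 30-degree preferred direction.
true_angle = np.deg2rad(30.0)
theta = rng.uniform(0.0, np.pi, 100_000)
accept = rng.uniform(0.0, 1.0, theta.size) < (
    1.0 + 0.1 * np.cos(2.0 * (theta - true_angle))
) / 2.0
theta = theta[accept]

# Average the per-photon angles via Stokes-like sums.
Q = np.mean(np.cos(2.0 * theta))
U = np.mean(np.sin(2.0 * theta))

# Recovered modulation amplitude and preferred direction. (A real
# polarimeter would also divide by an instrument-specific modulation
# factor to convert amplitude into a true polarization degree.)
degree = 2.0 * np.sqrt(Q**2 + U**2)
angle = 0.5 * np.degrees(np.arctan2(U, Q))
```

With enough photons, `degree` converges to the injected 0.1 amplitude and `angle` to the injected 30 degrees, which is the sense in which many individually noisy angle measurements average out to a preferred degree and direction.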
Their measurements revealed a polarization degree of 8 percent, much higher than some theoretical models had predicted. From there, the researchers were able to confirm that the X-rays were indeed coming from the system’s column, and that this column is about 2,000 miles high.
“If you were able to stand somewhat close to the white dwarf’s pole, you would see a column of gas stretching 2,000 miles into the sky, and then fanning outward,” Gunderson says.
The team also measured the direction of EX Hydrae’s X-ray polarization, which they determined to be perpendicular to the white dwarf’s column of incoming gas. This was a sign that the X-rays emitted by the column were then bouncing off the white dwarf’s surface before traveling into space, and eventually into IXPE’s telescopes.
“The thing that’s helpful about X-ray polarization is that it’s giving you a picture of the innermost, most energetic portion of this entire system,” Ravi says. “When we look through other telescopes, we don’t see any of this detail.”
The team plans to apply X-ray polarization to study other accreting white dwarf systems, which could help scientists get a grasp on much larger cosmic phenomena.
“There comes a point where so much material is falling onto the white dwarf from a companion star that the white dwarf can’t hold it anymore, the whole thing collapses and produces a type of supernova that’s observable throughout the universe, which can be used to figure out the size of the universe,” Marshall offers. “So understanding these white dwarf systems helps scientists understand the sources of those supernovae, and tells you about the ecology of the galaxy.”
This research was supported, in part, by NASA.
The cost of thinking
MIT neuroscientists find a surprising parallel in the ways humans and new AI models solve complex problems.
Large language models (LLMs) like ChatGPT can write an essay or plan a menu almost instantly. But until recently, it was also easy to stump them. The models, which rely on language patterns to respond to users’ queries, often failed at math problems and were not good at complex reasoning. Suddenly, however, they’ve gotten a lot better at these things.
A new generation of LLMs known as reasoning models are being trained to solve complex problems. Like humans, they need some time to think through problems like these — and remarkably, scientists at MIT’s McGovern Institute for Brain Research have found that the kinds of problems that require the most processing from reasoning models are the very same problems that people need to take their time with. In other words, they report today in the journal PNAS, the “cost of thinking” for a reasoning model is similar to the cost of thinking for a human.
The researchers, who were led by Evelina Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute, conclude that in at least one important way, reasoning models have a human-like approach to thinking. That, they note, is not by design. “People who build these models don’t care if they do it like humans. They just want a system that will robustly perform under all sorts of conditions and produce correct responses,” Fedorenko says. “The fact that there’s some convergence is really quite striking.”
Reasoning models
Like many forms of artificial intelligence, the new reasoning models are artificial neural networks: computational tools that learn how to process information when they are given data and a problem to solve. Artificial neural networks have been very successful at many of the tasks that the brain’s own neural networks do well — and in some cases, neuroscientists have discovered that those that perform best share certain aspects of information processing with the brain. Still, some scientists argued that artificial intelligence was not ready to take on more sophisticated aspects of human intelligence.
“Up until recently, I was among the people saying, ‘These models are really good at things like perception and language, but it’s still going to be a long ways off until we have neural network models that can do reasoning,’” Fedorenko says. “Then these large reasoning models emerged and they seem to do much better at a lot of these thinking tasks, like solving math problems and writing pieces of computer code.”
Andrea Gregor de Varda, a K. Lisa Yang ICoN Center Fellow and a postdoc in Fedorenko’s lab, explains that reasoning models work out problems step by step. “At some point, people realized that models needed to have more space to perform the actual computations that are needed to solve complex problems,” he says. “The performance started becoming way, way stronger if you let the models break down the problems into parts.”
To encourage models to work through complex problems in steps that lead to correct solutions, engineers can use reinforcement learning. During their training, the models are rewarded for correct answers and penalized for wrong ones. “The models explore the problem space themselves,” de Varda says. “The actions that lead to positive rewards are reinforced, so that they produce correct solutions more often.”
Models trained in this way are much more likely than their predecessors to arrive at the same answers a human would when they are given a reasoning task. Their stepwise problem-solving does mean reasoning models can take a bit longer to find an answer than the LLMs that came before — but since they’re getting right answers where the previous models would have failed, their responses are worth the wait.
The models’ need to take some time to work through complex problems already hints at a parallel to human thinking: if you demand that a person solve a hard problem instantaneously, they’d probably fail, too. De Varda wanted to examine this relationship more systematically. So he gave reasoning models and human volunteers the same set of problems, and tracked not just whether they got the answers right, but also how much time or effort it took them to get there.
Time versus tokens
This meant measuring how long it took people to respond to each question, down to the millisecond. For the models, de Varda used a different metric. It didn’t make sense to measure processing time, since that depends more on computer hardware than on the effort the model puts into solving a problem. So instead, he counted tokens, the units of text that make up a model’s internal chain of thought. “They produce tokens that are not meant for the user to see and work on, but just to have some track of the internal computation that they’re doing,” de Varda explains. “It’s as if they were talking to themselves.”
Both humans and reasoning models were asked to solve seven different types of problems, like numeric arithmetic and intuitive reasoning. For each problem class, they were given many problems. The harder a given problem was, the longer it took people to solve it — and the longer it took people to solve a problem, the more tokens a reasoning model generated as it came to its own solution.
Likewise, the classes of problems that humans took longest to solve were the same classes of problems that required the most tokens for the models: arithmetic problems were the least demanding, whereas a group of problems called the “ARC challenge,” where pairs of colored grids represent a transformation that must be inferred and then applied to a new object, were the most costly for both people and models.
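The comparison described above — matching the two cost measures across problem classes — amounts to a simple correlation. The sketch below uses invented per-class averages (illustrative numbers, not the study’s data) to show what such a match looks like: classes ordered from easy arithmetic up to ARC-style puzzles.

```python
import numpy as np

# Hypothetical per-class averages (illustrative values, not the study's data):
# mean human solving time in seconds, and mean reasoning-model token count,
# for seven problem classes from easy arithmetic to ARC-style puzzles.
human_seconds = np.array([3.1, 4.8, 7.2, 9.5, 14.0, 22.3, 41.7])
model_tokens = np.array([120.0, 180.0, 260.0, 350.0, 520.0, 900.0, 2100.0])

# Pearson correlation between the two "costs of thinking": a value near 1
# means classes that slow humans down also cost the model the most tokens.
r = np.corrcoef(human_seconds, model_tokens)[0, 1]
```

For data like these, `r` comes out close to 1, which is the kind of alignment between human time and model tokens that the researchers report.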
De Varda and Fedorenko say the striking match in the costs of thinking demonstrates one way in which reasoning models are thinking like humans. That doesn’t mean the models are recreating human intelligence, though. The researchers still want to know whether the models use similar representations of information to the human brain, and how those representations are transformed into solutions to problems. They’re also curious whether the models will be able to handle problems that require world knowledge that is not spelled out in the texts that are used for model training.
The researchers point out that even though reasoning models generate internal monologues as they solve problems, they are not necessarily using language to think. “If you look at the output that these models produce while reasoning, it often contains errors or some nonsensical bits, even if the model ultimately arrives at a correct answer. So the actual internal computations likely take place in an abstract, non-linguistic representation space, similar to how humans don’t use language to think,” de Varda says.
Symposium examines the neural circuits that keep us alive and well
Seven speakers from around the country convened at MIT to describe some of the latest research on the neural mechanisms that we need to survive.
Taking an audience of hundreds on a tour around the body, seven speakers at The Picower Institute for Learning and Memory’s symposium “Circuits of Survival and Homeostasis” on Oct. 21 shared novel research on some of the nervous system’s most evolutionarily ancient functions.
Introducing the symposium that she arranged with a picture of a man at a campfire on a frigid day, Sara Prescott, assistant professor in the Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences, pointed out that the brain and the body cooperate constantly just to keep us going, and that when the systems they maintain fail, the consequence is disease.
“[This man] is tightly regulating his blood pressure, glucose levels, his energy expenditure, inflammation and breathing rate, and he’s doing this in the face of a fluctuating external environment,” Prescott said. “Behind each of these processes there are networks of neurons that are working quietly in the background to maintain internal stability. And this is, of course, the brain’s oldest job.”
Indeed, although the discoveries they shared about the underlying neuroscience were new, the speakers each described experiences that are as timeless as they are familiar: the beating of the heart, the transition from hunger to satiety, and the healing of cuts on our skin.
Feeling warm and full
Li Ye, a scientist at Scripps Research, picked right up on the example of coping with the cold. Mammals need to maintain a consistent internal body temperature, and so they will increase metabolism in the cold and then, as energy supplies dwindle, seek out more food. His lab’s 2023 study identified the circuit, centered in the xiphoid nucleus of the brain’s thalamus, that regulates this behavior by sensing prolonged cold exposure and energy consumption. Ye described other feeding mechanisms his lab is studying as well, including searching out the circuitry that regulates how long an animal will feed at a time. For instance, if you’re worried about predators finding you, it’s a bad idea to linger for a leisurely lunch.
Physiologist Zachary Knight of the University of California at San Francisco also studies feeding and drinking behaviors. In particular, his lab asks how the brain knows when it’s time to stop. The conventional wisdom is that all that’s needed is a feeling of fullness coming from the gut, but his research shows there is more to the story. A 2023 study from his lab found a population of neurons in the caudal nucleus of the solitary tract in the brain stem that receive signals about ingestion and taste from the mouth, and that send that “stop eating” signal. They also found a separate neural population in the brain stem that indeed receives fullness signals from the gut, and teaches the brain over time how much food leads to satisfaction. Both neuron types work together to regulate the pace of eating. His lab has continued to study how brain stem circuits regulate feeding using these multiple inputs.
Energy balance depends not only on how many calories come in, but also on how much energy is spent. When food is truly scarce, many animals will engage in a state of radically lowered metabolism called torpor (like hibernation), where body temperature plummets. The brain circuits that exert control over body temperature are another area of active research. In his talk, Harvard University neurologist Clifford Saper described years of research in which his lab found neurons in the median preoptic nucleus that dictate this metabolic state. Recently, his lab demonstrated that the same neurons that regulate torpor also regulate fever during sickness. When the neurons are active, body temperature drops. When they are inhibited, fever ensues. Thus, the same neurons act as a two-way switch for body temperature in response to different threatening conditions.
Sickness, injury, and stress
As the idea of fever suggests, the body also has evolved circuits (that scientists are only now dissecting) to deal with sickness and injury.
Washington University neuroscientist Qin Liu described her research into the circuits governing coughing and sneezing, which, on one hand, can clear the upper airways of pathogens and obstructions but, on the other hand, can spread those pathogens to others in the community. She described her lab’s 2024 study in which her team pinpointed a population of neurons in the nasal passages that mediate sneezing and a different population of sensory neurons in the trachea that produce coughing. Identifying the specific cells and their unique characteristics makes them potentially viable drug targets.
While Liu tackled sickness, Harvard stem cell biologist Ya-Chieh Hsu discussed how neurons can reshape the body’s tissues during stress and injury, specifically the hair and skin. While it is common lore that stress can make your hair gray and fall out, Hsu’s lab has shown the actual physiological mechanisms that make it so. In 2020 her team showed that bursts of noradrenaline from the hyperactivation of nerves in the sympathetic nervous system kill the melanocyte stem cells that give hair its color. She described newer research indicating a similar mechanism may also make hair fall out by killing off cells at the base of hair follicles, releasing cellular debris and triggering autoimmunity. Her lab has also looked at how the nervous system influences skin healing after injury. For instance, while our skin may appear to heal after a cut because it closes up, many skin cell types actually don’t rebound (unless you’re still an embryo). By looking at the difference between embryos and post-birth mice, Hsu’s lab has traced the neural mechanisms that prevent fuller healing, identifying a role for cells called fibroblasts and the nervous system.
Continuing on the theme of stress, Caltech biologist Yuki Oka discussed a broad-scale project in his lab to develop a molecular and cellular atlas of the sympathetic nervous system, which innervates much of the body and famously produces its “fight or flight” responses. In work partly published last year, their journey touched on cells and circuits involved in functions ranging from salivation to secreting bile. Oka and co-authors made the case for the need to study the system more in a review paper earlier this year.
A new model to study human biology
In their search for the best ways to understand the circuits that govern survival and homeostasis, researchers often use rodents because they are genetically tractable, easy to house, and reproduce quickly, but Stanford University biochemist Mark Krasnow has worked to develop a new model with many of those same traits but a closer genetic relationship to humans: the mouse lemur. In his talk, he described that work (which includes extensive field research in Madagascar) and focused on insights the mouse lemurs have helped him make into heart arrhythmias. After studying the genes and health of hundreds of mouse lemurs, his lab identified a family with “sick sinus syndrome,” an arrhythmia also seen in humans. In a preprint study, his lab describes the specific molecular pathways at fault in disrupting the heart’s natural pacemaking.
By sharing some of the latest research into how the brain and body work to stay healthy, the symposium’s speakers highlighted the most current thinking about the nervous system’s most primal purposes.
Quantum modeling for breakthroughs in materials science and sustainable energy
Quantum chemist and School of Science Dean’s Postdoctoral Fellow Ernest Opoku is working on computational methods to study how electrons behave.
Ernest Opoku knew he wanted to become a scientist when he was a little boy. But his school in Dadease, a small town in Ghana, offered no elective science courses — so Opoku created one for himself.
Even though the school had neither a dedicated science classroom nor a lab, Opoku persuaded his principal to bring in someone to teach him and five friends he had recruited to join him. With just a chalkboard and some imagination, they learned about chemical interactions through the formulas and diagrams they drew together.
“I grew up in a town where it was difficult to find a scientist,” he says.
Today, Opoku has become one himself, recently earning a PhD in quantum chemistry from Auburn University. This year, he joins MIT as part of the School of Science Dean’s Postdoctoral Fellowship program. Working with the Van Voorhis Group in the Department of Chemistry, Opoku aims to advance computational methods for studying how electrons behave — fundamental research that underlies applications ranging from materials science to drug discovery.
“As a boy who wanted to satisfy my own curiosities at a young age, in addition to the fact that my parents had minimal formal education,” Opoku says, “I knew that the only way I would be able to accomplish my goal was to work hard.”
In pursuit of knowledge
When Opoku was 8 years old, he began independently learning English at school. He would come back with homework, but his parents were unable to help him, as neither of them could read or write in English. Frustrated, his mother asked an older student to help tutor her son.
Every day, the boys would meet at 6 o’clock. With no electricity at either of their homes, they practiced new vocabulary and pronunciations together by a kerosene lamp.
As he entered junior high school, Opoku’s fascination with nature grew.
“I realized that chemistry was the central science that really offered the insight that I wanted to really understand Creation from the smallest level,” he says.
He studied diligently and was able to get into one of Ghana’s top high schools — but his parents couldn’t afford the tuition. He therefore enrolled in Dadease Agric Senior High School in his hometown. By growing tomatoes and maize, he saved up enough money to support his education.
In 2012, he got into Kwame Nkrumah University of Science and Technology (KNUST), one of the top-ranked universities in Ghana and the West Africa region. There, he was introduced to computational chemistry. Unlike many other branches of science, the field required only a laptop and an internet connection to study chemical reactions.
“Anything that comes to mind, anytime I can grab my computer and I’ll start exploring my curiosity. I don’t have to wait to go to the laboratory in order to interrogate nature,” he says.
Opoku worked from early morning to late night. None of it felt like work, though, thanks to his supervisor, the late quantum chemist Richard Tia, who was an associate professor of chemistry at KNUST.
“Every single day was a fun day,” he recalls of his time working with Tia. “I was being asked to do the things that I myself wanted to know, to satisfy my own curiosity, and by doing that I’ll be given a degree.”
In 2020, Opoku’s curiosity brought him even further, this time overseas to Auburn University in Alabama for his PhD. Under the guidance of his advisor, Professor J. V. Ortiz, Opoku contributed to the development of new computational methods to simulate how electrons bind to or detach from molecules, a process known as electron propagation.
What is new about Opoku’s approach is that it does not rely on any adjustable or empirical parameters. Unlike some earlier computational methods that require tuning to match experimental results, his technique uses advanced mathematical formulations to account for electron interactions directly from first principles. This makes the method more accurate — producing results that closely match lab experiments — while using less computational power.
By streamlining the calculations and eliminating guesswork, Opoku’s work marks a major step toward faster, more trustworthy quantum simulations across a wide range of molecules, including those never studied before — laying the groundwork for breakthroughs in many areas such as materials science and sustainable energy.
For his postdoctoral research at MIT, Opoku aims to advance electron propagator methods to address larger and more complex molecules and materials by integrating quantum computing, machine learning, and bootstrap embedding — a technique that simplifies quantum chemistry calculations by dividing large molecules into smaller, overlapping fragments. He is collaborating with Troy Van Voorhis, the Haslam and Dewey Professor of Chemistry, whose expertise in these areas can help make Opoku’s advanced simulations more computationally efficient and scalable.
“His approach is different from any of the ways that we’ve pursued in the group in the past,” Van Voorhis says.
Passing along the opportunity to learn
Opoku thanks previous mentors who helped him overcome the “intellectual overhead required to make contributions to the field,” and believes Van Voorhis will offer the same kind of support.
In 2021, Opoku joined the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers (NOBCChE) to gain mentorship, networking, and career development opportunities within a supportive community. He later led the Auburn University chapter as president, helping coordinate K-12 outreach to inspire the next generation of scientists, engineers, and innovators.
“Opoku’s mentorship goes above and beyond what would be typical at his career stage,” says Van Voorhis. “One reason is his ability to communicate science to people, and not just the concepts of science, but also the process of science.”
Back home, Opoku founded the Nesvard Institute of Molecular Sciences to support African students to develop not only skills for graduate school and professional careers, but also a sense of confidence and cultural identity. Through the nonprofit, he has mentored 29 students so far, passing along the opportunity for them to follow their curiosity and help others do the same.
“There are many areas of science and engineering to which Africans have made significant contributions, but these contributions are often not recognized, celebrated, or documented,” Opoku says.
He adds: “We have a duty to change the narrative.”
The science of consciousness
Through the MIT Consciousness Club, professors Matthias Michel and Earl Miller are exploring how neurological activity gives rise to human experience.
Humans know they exist, but how does “knowing” work? Despite all that’s been learned about brain function and the bodily processes it governs, we still don’t understand where the subjective experiences associated with brain functions originate.
A new interdisciplinary project seeks to find answers to these kinds of big questions around consciousness, a fundamental yet elusive phenomenon.
The MIT Consciousness Club is co-led by philosopher Matthias Michel, the Old Dominion Career Development Professor in the Department of Linguistics and Philosophy, and Earl Miller, the Picower Professor of Neuroscience in the Department of Brain and Cognitive Sciences.
Funded by a grant from the MIT Human Insight Collaborative’s (MITHIC) SHASS+ Connectivity Fund, the MIT Consciousness Club aims to build a bridge between philosophy and cognitive (neuro)science, while also engaging the Boston area’s academic community to advance consciousness research.
“It’s possible to study this scientifically,” says Michel. “MIT positioning itself as a leader in the field would change everything.”
“Matthias takes a science-based approach to the work,” Miller adds. “A coherent, fact-based, research-supported understanding of and approach to consciousness can have a massive impact on our approach to public health.”
Working together, they hope to increase access to a diverse network of researchers, improve their understanding of how consciousness works, and develop tools to measure consciousness objectively.
The MIT Consciousness Club plans to hold monthly events featuring expert talks and Q&A sessions on topics such as the neural correlates of consciousness, unconscious perception, and consciousness in animals and AI systems.
“What can science tell us about brain function and consciousness?” Michel asks. “Why does neurological activity give rise to conscious experience, as opposed to nothing?”
“Cognition is your brain self-organizing,” Miller adds. “How does the brain organize itself to attain goals?” Unlike amoebae, Miller notes, humans both react to and act on the environment.
Michel’s research focuses on the philosophy of cognitive science, mind, and perception, with interests in the philosophy of measurement and philosophy of psychiatry. Most of his recent work focuses on methodological and foundational issues in the scientific study of consciousness.
Miller studies the neural basis of memory and cognition. His areas of focus include the neural mechanisms of attention, learning, and memory needed for voluntary, goal-directed behavior, with a special focus on the brain’s prefrontal cortex.
“I was engaged with how the mind works”
Before arriving at MIT in 2024, Michel’s academic and research interests led him to his work at the intersection of neuroscience and philosophy. “I was engaged with how the mind works,” he says. He describes a course of study focused on issues related to logic and reasoning and the ways the brain toggles between conscious and unconscious brain function.
Following the completion of his doctoral and postdoctoral studies, he continued his investigation into the nature of consciousness. Work from Melvyn Goodale at Western University led to a light-bulb moment for him.
“According to Goodale, the brain operates with two visual systems — conscious and non-conscious — responsible for fine-grained motor commands,” he says. “Researchers discovered the way someone adjusts their grip, for example, is based on a non-conscious stream of vision.”
This discovery helped further Michel’s commitment to understanding consciousness’s function objectively. “How long does it take a person to become conscious of something?” he asks. “There is a lag between when a signal is presented and when we get to subjectively experience it.” Measuring that delay, and understanding the path from stimulus to signal processing and response, is a core facet of Michel’s investigation. Consciousness, he asserts, is for planning, not reacting.
Michel and Miller aren’t only interested in human brains. Improved understanding of consciousness in animals and other living things is also under discussion. “How do you organize states of consciousness in nonhuman species?” Michel asks. Understanding how other species interact with the world can help us better understand both those species and the world itself.
Making room for investigation and collaboration
One of the surprising discoveries both made while shaping the idea that would become the MIT Consciousness Club is the size of the group interested in participating. “It’s larger than I thought,” Miller says. “We’ve established connections with colleagues at Lincoln Laboratory and Northeastern University, all of whom are invested in studying consciousness.”
Both Michel and Miller believe researchers at MIT and elsewhere can benefit from the kind of collaboration MITHIC funding makes possible. “The goal is to create community,” Michel says, “while also improving the research area’s reputation.”
“It’s possible to study consciousness scientifically because of its connection to other questions,” Miller adds.
The investigative avenues available when you can explore ideas for their own sake — like how consciousness functions, for example — can lead to exciting breakthroughs. “Imagine if consciousness research became a focus area, rather than a sideline, for people interested in its study,” Michel says.
“You can’t study the complexities of executive [brain] function and not get to consciousness,” Miller continues. “Designing a system to effectively and accurately measure consciousness levels in the brain has a variety of potentially groundbreaking applications.”
Miller works with Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT and a practicing anesthesiologist at Massachusetts General Hospital for whom consciousness is a central concern.
“General anesthesia during surgical procedures is bad when you’re really young or really old,” Miller says. “Older people who need anesthesia can experience cognitive decline, which is why health-care providers are often reluctant to perform surgeries even when patients need them.” A better understanding of the mechanisms that create consciousness can help improve pre- and post-surgical care delivery and outcomes.
Researching consciousness can also yield substantial public health benefits, including more-efficient mental health treatment. “Mental health disorders affect high-level cognitive function,” Miller continues. “Anesthesia interacts with drugs used to treat mental health disorders, which can severely impact patient care.” Each of the researchers wants to understand how drug therapies actually mediate patient experiences.
Ultimately, the professors agree that improved access to consciousness studies will improve research rigor and help burnish the field’s reputation.
MIT Energy Initiative conference spotlights research priorities amidst a changing energy landscape

Industry leaders agree collaboration is key to advancing critical technologies.

“We’re here to talk about really substantive changes, and we want you to be a participant in that,” said Desirée Plata, the School of Engineering Distinguished Professor of Climate and Energy in MIT’s Department of Civil and Environmental Engineering, at Energizing@MIT, the MIT Energy Initiative’s (MITEI) Annual Research Conference, held Sept. 9-10.
Plata’s words resonated with the 150-plus participants from academia, industry, and government meeting in Cambridge for the conference, whose theme was “tackling emerging energy challenges.” Meeting such challenges and ultimately altering the trajectory of global climate outcomes requires partnerships, speakers agreed.
“We have to be humble and open,” said Giacomo Silvestri, chair of Eniverse Ventures at Eni, in a shared keynote address. “We cannot develop innovation just focusing on ourselves and our competencies … so we need to partner with startups, venture funds, universities like MIT and other public and private institutions.”
Added his Eni colleague, Annalisa Muccioli, head of research and technology, “The energy transition is a race we can win only by combining mature solutions ready to deploy, together with emerging technologies that still require acceleration and risk management.”
Research targets
In a conference that showcased a suite of research priorities MITEI has identified as central to ensuring a low-carbon energy future, participants shared both promising discoveries and strategies for advancing proven technologies in the face of shifting political winds and policy uncertainties.
One panel focused on grid resiliency — a topic that has moved from the periphery to the center of energy discourse as climate-driven disruptions, cyber threats, and the integration of renewables challenge legacy systems. A dramatic case in point: the April 2025 outage in Spain and Portugal that left millions without power for eight to 15 hours.
“I want to emphasize that this failure was about more than the power system,” said MITEI research scientist Pablo Duenas-Martinez. While he pinpointed technical problems with reactive power and voltage control behind the system collapse, Duenas-Martinez also called out a lack of transmission capacity with Central Europe and out-of-date operating procedures, and recommended better preparation and communication among transmission systems and utility operators.
“You can’t plan for every single eventuality, which means we need to broaden the portfolio of extreme events we prepare for,” noted Jennifer Pearce, vice president at energy company Avangrid. “We are making the system smarter, stronger, and more resilient to better protect from a wide range of threats such as storms, flooding, and extreme heat events.” Pearce noted that Avangrid’s commitment to deliver safe, reliable power to its customers necessitates “meticulous emergency planning procedures.”
The resiliency of the electric grid under greatly increased demand is an important motivation behind MITEI’s September 2025 launch of the Data Center Power Forum, which was also announced during the annual research conference. The forum will include research projects, webinars, and other content focused on energy supply and storage, grid design and management, infrastructure, and public and economic policy related to data centers. The forum’s members include MITEI companies that also participate in MIT’s Center for Environmental and Energy Policy Research (CEEPR).
Storage and transportation: Staggering challenges
Meeting climate goals to decarbonize the world by 2050 requires building around 300 terawatt-hours of storage, according to Asegun Henry, a professor in the MIT Department of Mechanical Engineering. “It’s an unbelievably enormous problem people have to wrap their minds around,” he said. Henry has been developing a high-temperature thermal energy storage system he has nicknamed “sun in a box.” His system uses liquid metal and graphite to hold electricity as heat and then convert it back to electricity, enabling storage anywhere from five to 500 hours.
“At the end of the day, storage provides a service, and the type of technology that you need is a function of the service that you value the most,” said Nestor Sepulveda, commercial lead for advanced energy investments and partnerships at Google. “I don’t think there is one winner-takes-all type of market here.”
Another panel explored sustainable fuels that could help decarbonize hard-to-electrify sectors like aviation, shipping, and long-haul trucking. Randall Field, MITEI’s director of research, noted that sustainably produced drop-in fuels — fuels that are largely compatible with existing engines — “could eliminate potentially trillions of dollars of cost for fleet replacement and for infrastructure build-out, while also helping us to accelerate the rate of decarbonization of the transportation sectors.”
Erik G. Birkerts is the chief growth officer of LanzaJet, which produces a drop-in, high-energy-density aviation fuel derived from agricultural residue and other waste carbon sources. “The key to driving broad sustainable aviation fuel adoption is solving both the supply-side challenge through more production and the demand-side hurdle by reducing costs,” he said.
“We think a good policy framework [for sustainable fuels] would be something that is technology-neutral, does not exclude any pathways to produce, is based on life cycle accounting practices, and on market mechanisms,” said Veronica L. Robertson, energy products technology portfolio manager at ExxonMobil.
MITEI plans a major expansion of its research on sustainable fuels, announcing a two-year study, “The future of fuels: Pathways to sustainable transportation,” starting in early 2026. According to Field, the study will analyze and assess biofuels and e-fuels.
Solutions from labs big and small
Global energy leaders offered glimpses of their research projects. A panel on carbon capture in power generation featured three takes on the topic: Devin Shaw, commercial director of decarbonization technologies at Shell, described post-combustion carbon capture in power plants using steam for heat recovery; Jan Marsh, a global program lead at Siemens Energy, discussed deploying novel materials to capture carbon dioxide directly from the air; and Jeffrey Goldmeer, senior director of technology strategy at GE Vernova, explained integrating carbon capture into gas-powered turbine systems.
During a panel on vehicle electrification, Brian Storey, vice president of energy and materials at the Toyota Research Institute, provided an overview of Toyota’s portfolio of projects for decarbonization, including solid-state batteries, flexible manufacturing lines, and grid-forming inverters to support EV charging infrastructure.
A session on MITEI seed fund projects revealed promising early-stage research inside MIT’s own labs. A new process for decarbonizing the production of ethylene was presented by Yogesh Surendranath, Donner Professor of Science in the MIT Department of Chemistry. Materials Science and Engineering assistant professor Aristide Gumyusenge also discussed the development of polymers essential for a new kind of sodium-ion battery.
Shepherding bold, new technologies like these from academic labs into the real world cannot succeed without ample support and deft management. A panel on paths to commercialization featured the work of Iwnetim Abate, Chipman Career Development Professor and assistant professor in the MIT Department of Materials Science and Engineering, who has spun out a company, Addis Energy, based on a novel geothermal process for harvesting clean hydrogen and ammonia from subsurface, iron-rich rocks. Among his funders: ARPA-E and MIT’s own The Engine Ventures.
The panel also highlighted the MIT Proto Ventures Program, an initiative to seize early-stage MIT ideas and unleash them as world-changing startups. “A mere 4.2 percent of all the patents that are actually prosecuted in the world are ever commercialized, which seems like a shocking number,” said Andrew Inglis, an entrepreneur working with Proto Ventures to translate geothermal discoveries into businesses. “Can’t we do this better? Let’s do this better!”
Geopolitical hazards
Throughout the conference, participants often voiced concern about the impacts of competition between the United States and China. Kelly Sims Gallagher, dean of the Fletcher School at Tufts University and an expert on China’s energy landscape, delivered the sobering news in her keynote address: “U.S. competitiveness in low-carbon technologies has eroded in nearly every category,” she said. “The Chinese are winning the clean tech race.”
China enjoys a 51 percent share in global wind turbine manufacture and 75 percent in solar modules. It also controls low-carbon supply chains that much of the world depends on. “China is getting so dominant that nobody can carve out a comparative advantage in anything,” said Gallagher. “China is just so big, and the scale is so huge that the Chinese can truly conquer markets and make it very hard for potential competitors to find a way in.”
And for the United States, the problem is “the seesaw of energy policy,” she said. “It’s incredibly difficult for the private sector to plan and to operate, given the lack of predictability in policy here.”
Nevertheless, Gallagher believes the United States still has a chance of at least regaining competitiveness by setting up a stable, bipartisan energy policy; rebuilding domestic manufacturing and supply chains; providing consistent fiscal incentives; attracting and retaining global talent; and fostering international collaboration.
The conference shone a light on one such collaboration: a China-U.S. joint venture to manufacture lithium iron phosphate batteries for commercial vehicles in the United States. The venture brings together Eve Energy, a Chinese battery technology and manufacturing company; Daimler, a global commercial vehicle manufacturer; PACCAR Inc., a U.S.-based truck manufacturer; and Accelera, the zero-emissions business of Cummins Inc. “Manufacturing batteries in the U.S. makes the supply chain more robust and reduces geopolitical risks,” said Mike Gerty, of PACCAR.
While she acknowledged the obstacles confronting her colleagues in the room, Plata nevertheless concluded her remarks as a panel moderator with some optimism: “I hope you all leave this conference and look back on it in the future, saying I was in the room when they actually solved some of the challenges standing between now and the future that we all wish to manifest.”
Introducing the MIT-GE Vernova Energy and Climate Alliance

Five-year collaboration between MIT and GE Vernova aims to accelerate the energy transition and scale new innovations.

MIT and GE Vernova launched the MIT-GE Vernova Energy and Climate Alliance on Sept. 15, a collaboration to advance research and education focused on accelerating the global energy transition.
Through the alliance — an industry-academia initiative conceived by MIT Provost Anantha Chandrakasan and GE Vernova CEO Scott Strazik — GE Vernova has committed $50 million over five years in the form of sponsored research projects and philanthropic funding for research, graduate student fellowships, internships, and experiential learning, as well as professional development programs for GE Vernova leaders.
“MIT has a long history of impactful collaborations with industry, and the collaboration between MIT and GE Vernova is a shining example of that legacy,” said Chandrakasan in opening remarks at a launch event. “Together, we are working on energy and climate solutions through interdisciplinary research and diverse perspectives, while providing MIT students the benefit of real-world insights from an industry leader positioned to bring those ideas into the world at scale.”
The energy of change
An independent company since its spinoff from GE in April 2024, GE Vernova is focused on accelerating the global energy transition. The company generates approximately 25 percent of the world’s electricity — with the world’s largest installed base of over 7,000 gas turbines, about 57,000 wind turbines, and leading-edge electrification technology.
GE Vernova’s slogan, “The Energy of Change,” is reflected in decisions such as locating its headquarters in Cambridge, Massachusetts — in close proximity to MIT. In pursuing transformative approaches to the energy transition, the company has identified MIT as a key collaborator.
A key component of the mission to electrify and decarbonize the world is collaboration, according to CEO Scott Strazik. “We want to inspire, and be inspired by, students as we work together on our generation’s greatest challenge, climate change. We have great ambition for what we want the world to become, but we need collaborators. And we need folks that want to iterate with us on what the world should be from here.”
Representing the Healey-Driscoll administration at the launch event were Massachusetts Secretary of Energy and Environmental Affairs Rebecca Tepper and Secretary of the Executive Office of Economic Development Eric Paley. Secretary Tepper highlighted the Mass Leads Act, a $1 billion climate tech and life sciences initiative enacted by Governor Maura Healey last November to strengthen Massachusetts’ leadership in climate tech and AI.
“We’re harnessing every part of the state, from hydropower manufacturing facilities to the blue economy on our South Coast, and right here at the center of our colleges and universities. We want to invent and scale the solutions to climate change in our own backyard,” said Tepper. “That’s been the Massachusetts way for decades.”
Real-world problems, insights, and solutions
The launch celebration featured interactive science displays and student presenters introducing the first round of 13 research projects led by MIT faculty. These projects focus on generating scalable solutions to our most pressing challenges in the areas of electrification, decarbonization, renewables acceleration, and digital solutions.
Collaborating with industry offers the opportunity for researchers and students to address real-world problems informed by practical insights. The diverse, interdisciplinary perspectives from both industry and academia will significantly strengthen the research supported through the GE Vernova Fellowships announced at the launch event.
“I’m excited to talk to the industry experts at GE Vernova about the problems that they work on,” said GE Vernova Fellow Aaron Langham. “I’m looking forward to learning more about how real people and industries use electrical power.”
Fellow Julia Estrin echoed a similar sentiment: “I see this as a chance to connect fundamental research with practical applications — using insights from industry to shape innovative solutions in the lab that can have a meaningful impact at scale.”
GE Vernova’s commitment to research is also providing support and inspiration for fellows. “This level of substantive enthusiasm for new ideas and technology is what comes from a company that not only looks toward the future, but also has the resources and determination to innovate impactfully,” says Owen Mylotte, a GE Vernova Fellow.
The inaugural cohort of eight fellows will continue their research at MIT with tuition support from GE Vernova.
Pipeline of future energy leaders
Highlighting the alliance’s emphasis on cultivating student talent and leadership, GE Vernova CEO Scott Strazik introduced four MIT alumni who are now leaders at GE Vernova: Dhanush Mariappan SM ’03, PhD ’19, senior engineering manager in the GE Vernova Advanced Research Center; Brent Brunell SM ’00, technology director in the Advanced Research Center; Paolo Marone MBA ’21, CFO of wind; and Grace Caza MAP ’22, chief of staff in supply chain and operations.
The four shared their experiences of working with MIT as students and their hopes for the future of this alliance in the realm of “people development,” as Mariappan highlighted. “Energy transition means leaders. And every one of the innovative research and professional education programs that will come out of this alliance is going to produce the leaders of the energy transition industry.”
The alliance is underscoring its commitment to developing future energy leaders by supporting the New Engineering Education Transformation program (NEET) and expanding opportunities for student internships. With 100 new internships for MIT students announced in the days following the launch, GE Vernova is opening broad opportunities for MIT students at all levels to contribute to a sustainable future.
“GE Vernova has been a tremendous collaborator every step of the way, with a clear vision of the technical breakthroughs we need to effect change at scale and a deep respect for MIT’s strengths and culture, as well as a hunger to listen and learn from us as well,” said Betar Gallant, alliance director who is also the Kendall Rohsenow Associate Professor of Mechanical Engineering at MIT. “Students, take this opportunity to learn, connect, and appreciate how much you’re valued, and how bright your futures are in this area of decarbonizing our energy systems. Your ideas and insight are going to help us determine and drive what’s next.”
Daring to create the future we want
The launch event transformed MIT’s Lobby 13 with green lighting and animated conversation around the posters and hardware demos on display, reflecting the sense of optimism for the future and the type of change the alliance — and the Commonwealth of Massachusetts — seeks to advance.
“Because of this collaboration and the commitment to the work that needs doing, many things will be created,” said Secretary Paley. “People in this room will work together on all kinds of projects that will do incredible things for our economy, for our innovation, for our country, and for our climate.”
The alliance builds on MIT’s growing portfolio of initiatives around sustainable energy systems, including the Climate Project at MIT, a presidential initiative focused on developing solutions to some of the toughest barriers to an effective global climate response. “This new alliance is a significant opportunity to move the needle of energy and climate research as we dare to create the future that we want, with the promise of impactful solutions for the world,” said Evelyn Wang, MIT vice president for energy and climate, who attended the launch.
To that end, the alliance is supporting critical cross-institution efforts in energy and climate policy, including funding three master’s students in the MIT Technology and Policy Program and hosting an annual symposium in February 2026 to advance interdisciplinary research. GE Vernova is also providing philanthropic support to the MIT Human Insight Collaborative. For 2025-26, this support will contribute to addressing global energy poverty by supporting the MIT Abdul Latif Jameel Poverty Action Lab (J-PAL) in its work to expand access to affordable electricity in South Africa.
“Our hope to our fellows, our hope to our students is this: While the stakes are high and the urgency has never been higher, the impact that you are going to have over the decades to come has never been greater,” said Roger Martella, chief corporate and sustainability officer at GE Vernova. “You have so much opportunity to move the world in a better direction. We need you to succeed. And our mission is to serve you and enable your success.”
With the alliance’s launch — and GE Vernova’s new membership in several other MIT consortium programs related to sustainability, automation and robotics, and AI, including the Initiative for New Manufacturing, MIT Energy Initiative, MIT Climate and Sustainability Consortium, and Center for Transportation and Logistics — it’s evident why Betar Gallant says the company is “all-in at MIT.”
The potential for tremendous impact on the energy industry is clear to those involved in the alliance. As GE Vernova Fellow Jack Morris said at the launch, “This is the beginning of something big.”
Q&A: On the ethics of catastrophe

Jack Carson, an MIT second-year undergraduate and EECS major, is the recent winner of the Elie Wiesel Prize in Ethics.

At first glance, student Jack Carson might appear too busy to think beyond his next problem set, much less tackle major works of philosophy. The second-year undergraduate, who plans to double major in electrical engineering with computing and mathematics, has been both an officer in Impact@MIT and a Social and Ethical Responsibility in Computing (SERC) Fellow in the MIT Schwarzman College of Computing — and is an active member of Concourse.
But this fall, Carson was awarded first place in the Elie Wiesel Prize in Ethics Essay Contest for his entry, “We Know Only Men: Reading Emmanuel Levinas On The Rez,” a comparative exploration of Jewish and Cherokee ethical thought. The deeply researched essay links Carson’s hometown in Adair County, Oklahoma, to the village of Le Chambon sur Lignon, France, and attempts to answer the question: “What is to be done after catastrophe?” Carson explains in this interview.
Q: The prompt for your entry in the Elie Wiesel Prize in Ethics Essay Contest was: “What challenges awaken your conscience? Is it the conflicts in American society? An international crisis? Maybe a difficult choice you currently face or a hard decision you had to make?” How did you land on the topic you’d write about?
A: It was really an insight that just came to me as I struggled with reading Levinas, who is notoriously challenging. The Talmud is a tradition very far from my own, but, as I read Levinas’ lectures on the Talmud, I realized that his project is one that I can relate to: preserving a culture that has been completely displaced, if not destroyed. The more I read of Levinas’ work the more I realized that his philosophy of radical alterity — that you must act when confronted with another person who you can never really comprehend — arose naturally from his efforts to show how to preserve Jewish cultural continuity. In a similar, if less articulated, way, the life I’ve witnessed in Eastern Oklahoma has led people to “act first, think later” — to use a Levinasian term. So it struck me that similar situations of displaced cultures had led to a similar ethical approach. Given that Levinas was writing about Jewish life in Eastern Europe and I was immersed in a heavily Native American culture, the congruence of the two ethical approaches seemed surprising. I thought, perhaps rightly, that it showed something essentially human that could be abstracted away from the very different cultural settings.
Q: Your entry for the contest is a meditation on the ethical similarities between ga-du-gi, the Cherokee concept of communal effort toward the betterment of all; the actions of the Huguenot inhabitants of the French village of Le Chambon sur Lignon (who protected thousands of Jewish refugees during Nazi occupation); and the Jewish philosopher Emmanuel Levinas’ interpretation of the Talmud, which essentially posits that action must come first in an ethical framework, not second. Did you find your own personal philosophy changing as a result of engaging with these ideas — or, perhaps more appropriately — have you noticed your everyday actions changing?
A: Yes, definitely my personal philosophy has been affected by thinking through Levinas’ demanding approach. Like a lot of people, I sit around thinking through what ethical approach I prefer. Should I be a utilitarian? A virtue theorist? A Kantian? Something else? Levinas had no time for this. He urged acting, not thinking, when confronted with human need. I wrote about the resistance movement of Le Chambon because those brave citizens also just acted without thinking — in a very Levinasian way. That seems a strange thing to valorize, as we are often taught to think before you act, and this is probably good advice! But sometimes you can think your way right out of helping people in need.
Levinas instructed that you should act in the face of the overwhelming need of what he would call the “Other.” That’s a rather intimidating term, but I read it as meaning just “other people.” The Le Chambon villagers, who protected Jews fleeing the Nazis, and the Cherokees lived this, acting in an almost pre-theoretical way in helping people in need that is really quite beautiful. And for Levinas, I’d note that the problematic word is “because.” And I wrote about how “because” is indeed a thin reed that the murderers will always break.
Put a little differently, “because” suggests that you have to have “reasons” that complete the phrase and make it coherent. This might seem almost a matter of logic. But Levinas says no. Because the genocide starts when the reasons are attacked. For example, you might believe we should help some persecuted group “because” they are really just like you and me. And that’s true, of course. But Levinas knows that the killers always start by dehumanizing their targets, so they convince you that the victims are not really like you at all, but are more like “vermin” or “insects.” So the “because” condition fails, and that’s when the murdering starts. So you should just act and then think, says Levinas, and this immunizes you from that rhetorical poison. It’s a counterintuitive idea, but powerful when you really think about it.
Q: You open with a particularly striking question: What is to be done after catastrophe? Do you feel more sure of your answer, now that you’ve deeply considered these disparate responses to a catastrophic event — or do you have more questions?
A: I am still not sure what to do after world-historical catastrophes like genocides. I guess I’d say there is nothing to do — other than maintain a kind of radical hope that has no basis in evidence. “Catastrophes” like those I write about — the Holocaust, the Trail of Tears — are more than just acts of physical destruction. They destroy whole ways of being and uproot whole systems of meaning-making. Cultural concepts become void overnight, as their preconditions are destroyed.
There is a great book by Jonathan Lear called “Radical Hope.” It begins with a discussion of a Plains Indian leader named Plenty Coups. After removal to the reservation in the 19th century, he is quoted as saying, “But when the buffalo went away the hearts of my people fell to the ground, and they could not lift them up again. After this nothing happened.” Lear ponders what that last sentence is all about. What did Plenty Coups mean when he said “after this nothing happened?” Obviously, life’s daily activities still happened: births, deaths, eating, drinking, and such. So what does it mean? It’s perplexing. In the end, Lear concludes that Plenty Coups was making an ontological statement, in which he meant that all of the things that gave life meaning — all of those things that make the word “happen” actually signify something — had been erased. Events occurred, but didn’t “happen” because they fell into a world that to Plenty Coups lacked any sense at all. And Plenty Coups was not wrong about this; for him and his people, the world lost intelligibility. Nonetheless, Plenty Coups continued to lead his people, even amidst great deprivation, even though he never found a new basis for belief. He only had “radical hope” — which gave Lear’s book its name — that some new way of life might arise over time. I guess my answer to “what happens after catastrophe?” is just, well, “nothing happens” in the sense Plenty Coups meant it. And “radical hope” is all you get, if anything.
Q: There’s a memorable scene in your essay in which, during a visit to your community cemetery near Stilwell, your grandfather points out the burial plots that hold both your ancestors, and that will eventually hold him and you. You describe this moment beautifully as a comforting and connective chain linking you to both past and future communities. How does being part of that chain shape your life?
A: I feel this sense of knowing where you will be buried — alongside all of your ancestors — is a great gift. That sounds a little odd, but it gives a rootedness that is very removed from most people’s experience today. And the cemetery is just a stand-in for a whole cultural structure that gives me a sense of role and responsibility. The lack of these, I think, creates a real sense of alienation, and this alienation is the condition of our age. So I feel lucky to have a strong sense of place and a place that will always be home. Lincoln talked about the “mystic chords of memory.” I feel this very mystical attachment to Oklahoma. The idea that this road or this community is one where every member of your family for generations has lived — or even if they moved away, always considered “home” — is very powerful. It always gives an answer to “Who are you?” That’s a hard question, but I can always say, “We are from Adair County,” and this is a sufficient answer. And back home, people would instantly nod their heads at the adequacy of this response. As I said, it’s a little mystical, but maybe that’s a strength, not a weakness.
Q: People might be surprised to learn that the winner of an essay contest focusing on ethics is actually not an English or philosophy major, but is instead in EECS. What areas and current issues in the field do you find interesting from an ethical perspective?
A: I think the pace of technological change — and society’s struggle to keep up — shows you how important philosophy, literature, history, and the liberal arts really are. Whether it’s algorithmic bias affecting real lives, or questions about what values we encode in AI systems, these aren’t just technical problems; they are fundamentally about who we are and what we owe each other. It is true that I’m majoring in 6-5 [electrical engineering with computing] and 18 [mathematics], and of course these disciplines are extraordinarily important. But the humanities are deeply important to me, as they answer fundamental questions about who we are, what we owe to others, why people act this way or that, and how we should think through social issues. I despair when I hear brilliant engineers say they read nothing longer than a blog post. If anything, the humanities should be more important overall at MIT.
When I was younger, I just happened across a discussion of C.P. Snow’s famous essay on the “Two Cultures.” In it, he talks about his scientist friends who had never read Shakespeare, and his literary friends who couldn’t explain thermodynamics. In a modest way, I’ve always thought that I’d like my education to be one that allowed me to participate in the two cultures. The essay on Levinas is my attempt to pursue this type of education.
Study suggests 40Hz sensory stimulation may benefit some Alzheimer’s patients for years
Five volunteers received 40Hz stimulation for around two years after an early-stage clinical study. Those with late-onset Alzheimer’s performed better on assessments than Alzheimer’s patients outside the trial.
A new research paper documents the outcomes of five volunteers who continued to receive 40Hz light and sound stimulation for around two years after participating in an MIT early-stage clinical study of the potential Alzheimer’s disease (AD) therapy. The results show that for the three participants with late-onset Alzheimer’s disease, several measures of cognition remained significantly higher than comparable Alzheimer’s patients in national databases. Moreover, in the two late-onset volunteers who donated plasma samples, levels of Alzheimer’s biomarker tau proteins were significantly decreased.
The three volunteers who experienced these benefits were all female. The two other participants, both of whom were men with early-onset forms of the disease, did not exhibit significant benefits after two years. The dataset, while small, represents the longest-term test so far of the safe, noninvasive treatment method (called GENUS, for gamma entrainment using sensory stimuli), which is also being evaluated in a nationwide clinical trial run by MIT-spinoff company Cognito Therapeutics.
“This pilot study assessed the long-term effects of daily 40Hz multimodal GENUS in patients with mild AD,” the authors wrote in an open-access paper in Alzheimer's & Dementia: The Journal of the Alzheimer’s Association. “We found that daily 40Hz audiovisual stimulation over 2 years is safe, feasible, and may slow cognitive decline and biomarker progression, especially in late-onset AD patients.”
Diane Chan, a former research scientist in The Picower Institute for Learning and Memory and a neurologist at Massachusetts General Hospital, is the study’s lead and co-corresponding author. Picower Professor Li-Huei Tsai, director of The Picower Institute and the Aging Brain Initiative at MIT, is the study’s senior and co-corresponding author.
An “open label” extension
In 2020, MIT enrolled 15 volunteers with mild Alzheimer’s disease in an early-stage trial to evaluate whether an hour a day of 40Hz light and sound stimulation, delivered via an LED panel and speaker in their homes, could produce clinically meaningful benefits. Several studies in mice had shown that the sensory stimulation increases the power and synchrony of 40Hz gamma frequency brain waves, preserves neurons and their network connections, reduces Alzheimer’s proteins such as amyloid and tau, and sustains learning and memory. Several independent groups have also made similar findings over the years.
MIT’s trial, though cut short by the Covid-19 pandemic, found significant benefits after three months. The new study examines outcomes among five volunteers who continued to use their stimulation devices on an “open label” basis for two years. These volunteers came back to MIT for a series of tests 30 months after their initial enrollment. Because four participants started the original trial as controls (meaning they initially did not receive 40Hz stimulation), their open label usage was six to nine months shorter than the 30-month period.
The testing at zero, three, and 30 months of enrollment included measurements of their brain wave response to the stimulation, MRI scans of brain volume, measures of sleep quality, and a series of five standard cognitive and behavioral tests. Two participants gave blood samples. For comparison to untreated controls, the researchers combed through three national databases of Alzheimer’s patients, matching thousands of them on criteria such as age, gender, initial cognitive scores, and retests at similar time points across a 30-month span.
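The matching of trial participants to untreated database controls can be pictured with a minimal sketch. This is purely illustrative: the records, thresholds, and criteria below are hypothetical stand-ins, not the study's actual matching procedure.

```python
# Illustrative sketch of matched-control selection from a patient database.
# All data, field names, and thresholds are hypothetical.

def match_controls(patient, candidates, max_age_diff=2, max_score_diff=1.0):
    """Return database candidates matching a trial patient on simple criteria:
    same gender, age within max_age_diff years, and a similar baseline
    cognitive score."""
    return [
        c for c in candidates
        if c["gender"] == patient["gender"]
        and abs(c["age"] - patient["age"]) <= max_age_diff
        and abs(c["baseline_score"] - patient["baseline_score"]) <= max_score_diff
    ]

# Hypothetical example records
trial_patient = {"age": 74, "gender": "F", "baseline_score": 24.0}
database = [
    {"age": 75, "gender": "F", "baseline_score": 24.5},  # matches all criteria
    {"age": 80, "gender": "F", "baseline_score": 24.0},  # age difference too large
    {"age": 74, "gender": "M", "baseline_score": 24.0},  # gender differs
]

controls = match_controls(trial_patient, database)
print(len(controls))  # 1
```

In practice such matching would also account for retest intervals across the 30-month span, as the article describes, and would draw on thousands of database records rather than three.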
Outcomes and outlook
The three female late-onset Alzheimer’s volunteers showed improvement or slower decline on most of the cognitive tests, including significantly positive differences compared to controls on three of them. These volunteers also showed increased brain-wave responsiveness to the stimulation at 30 months and showed improvement in measures of circadian rhythms. In the two late-onset volunteers who gave blood samples, there were significant declines in phosphorylated tau (47 percent for one and 19.4 percent for the other) on a test recently approved by the U.S. Food and Drug Administration as the first plasma biomarker for diagnosing Alzheimer’s.
“One of the most compelling findings from this study was the significant reduction of plasma pTau217, a biomarker strongly correlated with AD pathology, in the two late-onset patients in whom follow-up blood samples were available,” the authors wrote in the journal. “These results suggest that GENUS could have direct biological impacts on Alzheimer’s pathology, warranting further mechanistic exploration in larger randomized trials.”
Although the initial trial results showed preservation of brain volume at three months among those who received 40Hz stimulation, that was not significant at the 30-month time point. And the two male early-onset volunteers did not show significant improvements on cognitive test scores. Notably, the early-onset patients showed significantly reduced brain-wave responsiveness to the stimulation.
Although the sample is small, the authors hypothesize that the difference between the two sets of patients is likely attributable to the difference in disease onset, rather than the difference in gender.
“GENUS may be less effective in early onset Alzheimer’s disease patients, potentially owing to broad pathological differences from late-onset Alzheimer’s disease that could contribute to differential responses,” the authors wrote. “Future research should explore predictors of treatment response, such as genetic and pathological markers.”
Currently, the research team is studying whether GENUS may have a preventative effect when applied before disease onset. The new trial is recruiting participants aged 55-plus with normal memory who have or had a close family member with Alzheimer's disease, including early-onset.
In addition to Chan and Tsai, the paper’s other authors are Gabrielle de Weck, Brennan L. Jackson, Ho-Jun Suk, Noah P. Milman, Erin Kitchener, Vanesa S. Fernandez Avalos, MJ Quay, Kenji Aoki, Erika Ruiz, Andrew Becker, Monica Zheng, Remi Philips, Rosalind Firenze, Ute Geigenmüller, Bruno Hammerschlag, Steven Arnold, Pia Kivisäkk, Michael Brickhouse, Alexandra Touroutoglou, Emery N. Brown, Edward S. Boyden, Bradford C. Dickerson, and Elizabeth B. Klerman.
Funding for the research came from the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, the Eleanor Schwartz Charitable Foundation, the Dolby Family, Che King Leo, Amy Wong and Calvin Chin, Kathleen and Miguel Octavio, the Degroof-VM Foundation, the Halis Family Foundation, Chijen Lee, Eduardo Eurnekian, Larry and Debora Hilibrand, Gary Hua and Li Chen, Ko Hahn Family, Lester Gimpelson, David B Emmes, Joseph P. DiSabato and Nancy E. Sakamoto, Donald A. and Glenda G. Mattes, the Carol and Gene Ludwig Family Foundation, Alex Hu and Anne Gao, Elizabeth K. and Russell L. Siegelman, the Marc Haas Foundation, Dave and Mary Wargo, James D. Cook, and the Nobert H. Hardner Foundation.
John Marshall and Erin Kara receive postdoctoral mentoring award
Faculty recognized for the exceptional professional and personal guidance they provide postdocs.
Shining a light on the critical role of mentors in a postdoc’s career, the MIT Postdoctoral Association presented the fourth annual Excellence in Postdoctoral Mentoring Awards to professors John Marshall and Erin Kara.
The awards honor faculty and principal investigators who have distinguished themselves across four areas: the professional development opportunities they provide, the work environment they create, the career support they provide, and their commitment to continued professional relationships with their mentees.
The awards were presented at the annual Postdoctoral Appreciation event hosted by the Office of the Vice President for Research (VPR) on Sept. 17.
An MIT Postdoctoral Association (PDA) committee, chaired this year by Danielle Coogan, oversees the awards process in coordination with VPR and reviews nominations by current and former postdocs. “[We’re looking for] someone who champions a researcher, a trainee, but also challenges them,” says Bettina Schmerl, PDA president in 2024-25. “Overall, it’s about availability, reasonable expectations, and empathy. Someone who sees the postdoctoral scholar as a person of their own, not just someone who is working for them.” Marshall’s and Kara’s steadfast dedication to their postdocs set them apart, she says.
Speaking at the VPR resource fair during National Postdoc Appreciation Week, Vice President for Research Ian Waitz acknowledged “headwinds” in federal research funding and other policy issues, but urged postdocs to press ahead in conducting the very best research. “Every resource in this room is here to help you succeed in your path,” he said.
Waitz also commented on MIT’s efforts to strengthen postdoctoral mentoring over the last several years, and the influence of these awards in bringing lasting attention to the importance of mentoring. “The dossiers we’re getting now to nominate people [for the awards] may have five, 10, 20 letters of support,” he noted. “What we know about great mentoring is that it carries on between academic generations. If you had a great mentor, then you are more likely to be an amazing mentor once you’ve seen it demonstrated.”
Ann Skoczenski, director of MIT Postdoctoral Services, works closely with Waitz and the Postdoctoral Association to address the goals and concerns of MIT’s postdocs to ensure a successful experience at the Institute. “The PDA and the whole postdoctoral community do critical work at MIT, and it’s a joy to recognize them and the outstanding mentors who guide them,” said Skoczenski.
A foundation in good science
The awards recognize excellent mentors in two categories. Marshall, professor of oceanography in the Department of Earth, Atmospheric and Planetary Sciences, received the “Established Mentor Award.”
Nominators described Marshall’s enthusiasm for research as infectious, creating an exciting work environment that sets the tone. “John’s mentorship is unique in that he immerses his mentees in the heart of cutting-edge research. His infectious curiosity and passion for scientific excellence make every interaction with him a thrilling and enriching experience,” one postdoc wrote.
At the heart of Marshall’s postdoc relationships is a straightforward focus on doing good science and working alongside postdocs and students as equals. As one nominator wrote, “his approach is centered on empowering his mentees to assume full responsibility for their work, engage collaboratively with colleagues, and make substantial contributions to the field of science.”
His high expectations are matched by the generous assistance he provides his postdocs when needed. “He balances scientific rigor with empathy, offers his time generously, and treats his mentees as partners in discovery,” a nominator wrote.
Navigating career decisions and gaining the right experience along the way are important aspects of the postdoc experience. “When it was time for me to move to a different step in my career, John offered me the opportunities to expand my skills by teaching, co-supervising PhD students, working independently with other MIT faculty members, and contributing to grant writing,” one postdoc wrote.
Marshall’s research group has focused on ocean circulation and coupled climate dynamics involving interactions between motions on different scales, using theory, laboratory experiments, observations and innovative approaches to global ocean modeling.
“I’ve always told my postdocs, if you do good science, everything will sort itself out. Just do good work,” Marshall says. “And I think it’s important that you allow the glory to trickle down.”
Marshall sees postdoc appointments as a time they can learn to play to their strengths while focusing on important scientific questions. “Having a great postdoc [working] with you and then seeing them going on to great things, it’s such a pleasure to see them succeed,” he says.
“I’ve had a number of awards. This one means an awful lot to me, because the students and the postdocs matter as much as the science.”
Supporting the whole person
Kara, associate professor of physics, received the “Early Career Mentor Award.”
Many nominators praised Kara’s ability to give advice based on her postdocs’ individual goals. “Her mentoring style is carefully tailored to the particular needs of every individual, to accommodate and promote diverse backgrounds while acknowledging different perspectives, goals, and challenges,” wrote one nominator.
Creating a welcoming and supportive community in her research group, Kara empowers her postdocs by fostering their independence. “Erin’s unique approach to mentorship reminds us of the joy of pursuing our scientific curiosities, enables us to be successful researchers, and prepares us for the next steps in our chosen career path,” said one. Another wrote, “Rather than simply giving answers, she encourages independent thinking by asking the right questions, helping me to arrive at my own solutions and grow as a researcher.”
Kara’s ability to offer holistic, nonjudgmental advice was a throughline in her nominations. “Beyond her scientific mentorship, what truly sets Erin apart is her thoughtful and honest guidance around career development and life beyond work,” one wrote. Another nominator highlighted their positive relationship, writing, “I feel comfortable sharing my concerns and challenges with her, knowing that I will be met with understanding, insightful advice, and unwavering support.”
Kara’s research group is focused on understanding the physics behind how black holes grow and affect their environments. Kara has advanced a new technique called X-ray reverberation mapping, which allows astronomers to map the gas falling on to black holes and measure the effects of strongly curved spacetime close to the event horizon.
“I feel like postdocs hold a really special place in our research groups because they come with their own expertise,” says Kara. “I’ve hired them particularly because I want to learn and grow from them as well, and hopefully vice versa.” Kara focuses her mentorship on providing for autonomy, giving postdocs their own mentorship opportunities, and treating them like colleagues.
A postdoc appointment “is this really pivotal time in your career, when you’re figuring out what it is you want to do with the rest of your life,” she says. “So if I can help postdocs navigate that by giving them some support, but also giving them independence to be able to take their next steps, that feels incredibly valuable.”
“I just feel like they make my work/life so rich, and it’s not a hard thing to mentor them because they all are such awesome people and they make our research group really fun.”
From nanoscale to global scale: Advancing MIT’s special initiatives in manufacturing, health, and climate
MIT.nano cleanroom complex named after Robert Noyce PhD ’53 at the 2025 Nano Summit.
“MIT.nano is essential to making progress in high-priority areas where I believe that MIT has a responsibility to lead,” opened MIT president Sally Kornbluth at the 2025 Nano Summit. “If we harness our collective efforts, we can make a serious positive impact.”
It was these collective efforts that drove discussions at the daylong event hosted by MIT.nano and focused on the importance of nanoscience and nanotechnology across MIT's special initiatives — projects deemed critical to MIT’s mission to help solve the world’s greatest challenges. With each new talk, common themes were reemphasized: collaboration across fields, solutions that can scale up from lab to market, and the use of nanoscale science to enact grand-scale change.
“MIT.nano has truly set itself apart, in the Institute's signature way, with an emphasis on cross-disciplinary collaboration and open access,” said Kornbluth. “Today, you're going to hear about the transformative impact of nanoscience and nanotechnology, and how working with the very small can help us do big things for the world together.”
Collaborating on health
Angela Koehler, faculty director of the MIT Health and Life Sciences Collaborative (MIT HEALS) and the Charles W. and Jennifer C. Johnson Professor of Biological Engineering, opened the first session with a question: How can we build a community across campus to tackle some of the most transformative problems in human health? In response, three speakers shared their work enabling new frontiers in medicine.
Ana Jaklenec, principal research scientist at the Koch Institute for Integrative Cancer Research, spoke about single-injection vaccines, and how her team looked to the techniques used in fabrication of electrical engineering components to see how multiple pieces could be packaged into a tiny device. “MIT.nano was instrumental in helping us develop this technology,” she said. “We took something that you can do in microelectronics and the semiconductor industry and brought it to the pharmaceutical industry.”
While Jaklenec applied insight from electronics to her work in health care, Giovanni Traverso, the Karl Van Tassel Career Development Professor of Mechanical Engineering, who is also a gastroenterologist at Brigham and Women’s Hospital, found inspiration in nature, studying the cephalopod squid and remora fish to design ingestible drug delivery systems. Representing the industry side of life sciences, Mirai Bio senior vice president Jagesh Shah SM ’95, PhD ’99 presented his company’s precision-targeted lipid nanoparticles for therapeutic delivery. Shah, as well as the other speakers, emphasized the importance of collaboration between industry and academia to make meaningful impact, and the need to strengthen the pipeline for young scientists.
Manufacturing, from the classroom to the workforce
Paving the way for future generations was similarly emphasized in the second session, which highlighted MIT’s Initiative for New Manufacturing (MIT INM). “MIT’s dedication to manufacturing is not only about technology research and education, it’s also about understanding the landscape of manufacturing, domestically and globally,” said INM co-director A. John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering. “It’s about getting people — our graduates who are budding enthusiasts of manufacturing — out of campus and starting and scaling new companies,” he said.
On progressing from lab to market, Dan Oran PhD ’21 shared his career trajectory from technician to PhD student to founding his own company, Irradiant Technologies. “How are companies like Dan’s making the move from the lab to prototype to pilot production to demonstration to commercialization?” asked the next speaker, Elisabeth Reynolds, professor of the practice in urban studies and planning at MIT. “The U.S. capital market has not historically been well organized for that kind of support.” She emphasized the challenge of scaling innovations from prototype to production, and the need for workforce development.
“Attracting and retaining workforce is a major pain point for manufacturing businesses,” agreed John Liu, principal research scientist in mechanical engineering at MIT. To keep new ideas flowing from the classroom to the factory floor, Liu proposes a new worker type in advanced manufacturing — the technologist — someone who can be a bridge to connect the technicians and the engineers.
Bridging ecosystems with nanoscience
Bridging people, disciplines, and markets to effect meaningful change was also emphasized by Benedetto Marelli, mission director for the MIT Climate Project and associate professor of civil and environmental engineering at MIT.
“If we’re going to have a tangible impact on the trajectory of climate change in the next 10 years, we cannot do it alone,” he said. “We need to take care of ecology, health, mobility, the built environment, food, energy, policies, and trade and industry — and think about these as interconnected topics.”
Faculty speakers in this session offered a glimpse of nanoscale solutions for climate resiliency. Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering, presented his group’s work on using nanoparticles to turn waste methane and urea into renewable materials. Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor, spoke about scaling carbon dioxide removal systems. Mechanical engineering professor Kripa Varanasi highlighted, among other projects, his lab’s work on improving agricultural spraying so pesticides adhere to crops, reducing agricultural pollution and cost.
In all of these presentations, the MIT faculty highlighted the tie between climate and the economy. “The economic systems that we have today are depleting to our resources, inherently polluting,” emphasized Plata. “The goal here is to use sustainable design to transition the global economy.”
What do people do at MIT.nano?
This is where MIT.nano comes in, offering shared-access facilities where researchers can design creative solutions to these global challenges. “What do people do at MIT.nano?” asked associate director for Fab.nano Jorg Scholvin ’00, MNG ’01, PhD ’06 in the session on MIT.nano’s ecosystem. With 1,500 individuals and over 20 percent of MIT faculty labs using MIT.nano, it’s a difficult question to answer quickly. However, in a rapid-fire research showcase, students and postdocs gave responses ranging from 3D transistors and quantum devices to solar solutions and art restoration. Their work reflects the challenges and opportunities shared at the Nano Summit: developing technologies ready to scale, uniting disciplines to tackle complex problems, and gaining hands-on experience that prepares them to contribute to the future of hard tech.
The researchers’ enthusiasm carried the excitement and curiosity that President Kornbluth mentioned in her opening remarks, and that many faculty emphasized throughout the day. “The solutions to the problems we heard about today may come from inventions that don't exist yet,” said Strano. “These are some of the most creative people, here at MIT. I think we inspire each other.”
Robert N. Noyce (1953) Cleanroom at MIT.nano
Collaborative inspiration is not new to the MIT culture. The Nano Summit sessions focused on where we are today, and where we might be going in the future, but also reflected on how we arrived at this moment. Honoring visionaries of nanoscience and nanotechnology, President Emeritus L. Rafael Reif delivered the closing remarks and an exciting announcement — the dedication of the MIT.nano cleanroom complex. Made possible through a gift by Ray Stata SB ’57, SM ’58, this research space, 45,000 square feet of ISO 5, 6, and 7 cleanrooms, will be named the Robert N. Noyce (1953) Cleanroom.
“Ray Stata was — and is — the driving force behind nanoscale research at MIT,” said Reif. “I want to thank Ray, whose generosity has allowed MIT to honor Robert Noyce in such a fitting way.”
Ray Stata co-founded Analog Devices in 1965; Noyce co-founded Fairchild Semiconductor in 1957 and later Intel in 1968. Noyce, widely regarded as the “Mayor of Silicon Valley,” became chair of the Semiconductor Industry Association in 1977, and over the next 40 years, semiconductor technology advanced a thousandfold, from micrometers to nanometers.
“Noyce was a pioneer of the semiconductor industry,” said Stata. “It is due to his leadership and remarkable contributions that electronics technology is where it is today. It is an honor to be able to name the MIT.nano cleanroom after Bob Noyce, creating a permanent tribute to his vision and accomplishments in the heart of the MIT campus.”
To conclude his remarks and the 2025 Nano Summit, Reif brought the nano journey back to today, highlighting technology giants such as Lisa Su ’90, SM ’91, PhD ’94, for whom Building 12, the home of MIT.nano, is named. “MIT has educated a large number of remarkable leaders in the semiconductor space,” said Reif. “Now, with the Robert Noyce Cleanroom, this amazing MIT community is ready to continue to shape the future with the next generation of nano discoveries — and the next generation of nano leaders, who will become living legends in their own time.”
Leading quantum at an inflection point
The MIT Quantum Initiative is taking shape, leveraging quantum breakthroughs to drive the future of scientific and technological progress.
Danna Freedman is seeking the early adopters.
She is the faculty director of the nascent MIT Quantum Initiative, or QMIT. In this new role, Freedman is giving shape to an ambitious, Institute-wide effort to apply quantum breakthroughs to the most consequential challenges in science, technology, industry, and national security.
The interdisciplinary endeavor, the newest of MIT President Sally Kornbluth’s strategic initiatives, will bring together MIT researchers and domain experts from a range of industries to identify and tackle practical challenges wherever quantum solutions could achieve the greatest impact.
“We’ve already seen how the breadth of progress in quantum has created opportunities to rethink the future of security and encryption, imagine new modes of navigation, and even measure gravitational waves more precisely to observe the cosmos in an entirely new way,” says Freedman, the Frederick George Keyes Professor of Chemistry. “What can we do next? We’re investing in the promise of quantum, and where the legacy will be in 20 years.”
QMIT — the name is a nod to the “qubit,” the basic unit of quantum information — will formally launch on Dec. 8 with an all-day event on campus. Over time, the initiative plans to establish a physical home in the heart of campus for academic, public, and corporate engagement with state-of-the-art integrated quantum systems. Beyond MIT’s campus, QMIT will also work closely with the U.S. government and MIT Lincoln Laboratory, applying the lab’s capabilities in quantum hardware development, systems engineering, and rapid prototyping to national security priorities.
“The MIT Quantum Initiative seizes a timely opportunity in service to the nation’s scientific, economic, and technological competitiveness,” says Ian A. Waitz, MIT’s vice president for research. “With quantum capabilities approaching an inflection point, QMIT will engage students and researchers across all our schools and the college, as well as companies around the world, in thinking about what a step change in sensing and computational power will mean for a wide range of fields. Incredible opportunities exist in health and life sciences, fundamental physics research, cybersecurity, materials science, sensing the world around us, and more.”
Identifying the right questions
Quantum phenomena are as foundational to our world as light or gravity. At an extremely small scale, the interactions of atoms and subatomic particles are controlled by a different set of rules than the physical laws of the macro-sized world. These rules are called quantum mechanics.
“Quantum, in a sense, is what underlies everything,” says Freedman.
By leveraging quantum properties, quantum devices can process information at incredible speed to solve complex problems that aren’t feasible on classical supercomputers, and to enable ultraprecise sensing and measurement. Those improvements in speed and precision will become most powerful when optimized in relation to specific use cases, and as part of a complete quantum system. QMIT will focus on collaboration across domains to co-develop quantum tools, such as computers, sensors, networks, simulations, and algorithms, alongside the intended users of these systems.
As it develops, QMIT will be organized into programmatic pillars led by top researchers in quantum including Paola Cappellaro, Ford Professor of Engineering and professor of nuclear science and engineering and of physics; Isaac Chuang, Julius A. Stratton Professor in Electrical Engineering and Physics; Pablo Jarillo-Herrero, Cecil and Ida Green Professor of Physics; William Oliver, Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics; Vladan Vuletić, Lester Wolfe Professor of Physics; and Jonilyn Yoder, associate leader of the Quantum-Enabled Computation Group at MIT Lincoln Laboratory.
While supporting the core of quantum research in physics, engineering, mathematics, and computer science, QMIT promises to expand the community at its frontiers, into astronomy, biology, chemistry, materials science, and medicine.
“If you provide a foundation that somebody can integrate with, that accelerates progress a lot,” says Freedman. “Perhaps we want to figure out how a quantum simulator we’ve built can model photosynthesis, if that’s the right question — or maybe the right question is to study 10 failed catalysts to see why they failed.”
“We are going to figure out what real problems exist that we could approach with quantum tools, and work toward them in the next five years,” she adds. “We are going to change the forward momentum of quantum in a way that supports impact.”
The MIT Quantum Initiative will be administratively housed in the Research Laboratory of Electronics (RLE), with support from the Office of the Vice President for Research (VPR) and the Office of Innovation and Strategy.
QMIT is a natural expansion of MIT’s Center for Quantum Engineering (CQE), a research powerhouse that engages more than 80 principal investigators across the MIT campus and Lincoln Laboratory to accelerate the practical application of quantum technologies.
“CQE has cultivated a tremendously strong ecosystem of students and researchers, engaging with U.S. government sponsors and industry collaborators, including through the popular Quantum Annual Research Conference (QuARC) and professional development classes,” says Marc Baldo, the Dugald C. Jackson Professor in Electrical Engineering and director of RLE.
“With the backing of former vice president for research Maria Zuber, former Lincoln Lab director Eric Evans, and Marc Baldo, we launched CQE and its industry membership group in 2019 to help bridge MIT’s research efforts in quantum science and engineering,” says Oliver, CQE’s director, who also spent 20 years at Lincoln Laboratory, most recently as a Laboratory Fellow. “We have an important opportunity now to deepen our commitment to quantum research and education, and especially in engaging students from across the Institute in thinking about how to leverage quantum science and engineering to solve hard problems.”
Two years ago, Peter Fisher, the Thomas A. Frank (1977) Professor of Physics, in his role as associate vice president for research computing and data, assembled a faculty group led by Cappellaro and involving Baldo, Oliver, Freedman, and others, to begin to build an initiative that would span the entire Institute. Now, capitalizing on CQE’s success, Oliver will lead the new MIT Quantum Initiative’s quantum computing pillar, which will broaden the work of CQE into a larger effort that focuses on quantum computing, industry engagement, and connecting with end users.
The “MIT-hard” problem
QMIT will build upon the Institute’s historic leadership in quantum science and engineering. In the spring of 1981, MIT hosted the first Physics of Computation Conference at the Endicott House, bringing together nearly 50 physics and computing researchers to consider the practical promise of quantum — an intellectual moment that is now widely regarded as the kickoff of the second quantum revolution. (The first was the fundamental articulation of quantum mechanics 100 years ago.)
Today, research in quantum science and engineering produces a steady stream of “firsts” in the lab and a growing number of startup companies.
In collaboration with partners in industry and government, MIT researchers develop advances in areas like quantum sensing, which involves the use of atomic-scale systems to measure certain properties, like distance and acceleration, with extreme precision. Quantum sensing could be used in applications like brain imaging devices that capture more detail, or air traffic control systems with greater positional accuracy.
Another key area of research is quantum simulation, which uses the power of quantum computers to accurately emulate complex systems. This could fuel the discovery of new materials for energy-efficient electronics or streamline the identification of promising molecules for drug development.
“Historically, when we think about the most well-articulated challenges that quantum will solve,” Freedman says, “the best ones have come from inside of MIT. We’re open to technological solutions to problems, and nontraditional approaches to science. In many respects, we are the early adopters.”
But she also draws a sharp distinction between blue-sky thinking about what quantum might do, and the deeply technical, deeply collaborative work of actually drawing the roadmap. “That’s the ‘MIT-hard’ problem,” she says.
The QMIT launch event on Dec. 8 will feature talks and discussions with MIT faculty, including Nobel laureates, as well as industry leaders.
MIT physicists observe key evidence of unconventional superconductivity in magic-angle graphene
The findings could open a route to new forms of higher-temperature superconductors.
Superconductors are like the express trains in a metro system. Any electricity that “boards” a superconducting material can zip through it without stopping and losing energy along the way. As such, superconductors are extremely energy efficient, and are used today to power a variety of applications, from MRI machines to particle accelerators.
But these “conventional” superconductors are somewhat limited in terms of uses because they must be brought down to ultra-low temperatures using elaborate cooling systems to keep them in their superconducting state. If superconductors could work at higher, room-like temperatures, they would enable a new world of technologies, from zero-energy-loss power cables and electricity grids to practical quantum computing systems. And so scientists at MIT and elsewhere are studying “unconventional” superconductors — materials that exhibit superconductivity in ways that are different from, and potentially more promising than, today’s superconductors.
In a promising breakthrough, MIT physicists today report their observation of key new evidence of unconventional superconductivity in “magic-angle” twisted tri-layer graphene (MATTG) — a material made by stacking three atomically thin sheets of graphene at a specific angle, or twist, that allows exotic properties to emerge.
MATTG has shown indirect hints of unconventional superconductivity and other strange electronic behavior in the past. The new discovery, reported in the journal Science, offers the most direct confirmation yet that the material exhibits unconventional superconductivity.
In particular, the team was able to measure MATTG’s superconducting gap — a property that describes how resilient a material’s superconducting state is at given temperatures. They found that MATTG’s superconducting gap looks very different from that of the typical superconductor, meaning that the mechanism by which the material becomes superconductive must also be different, and unconventional.
“There are many different mechanisms that can lead to superconductivity in materials,” says study co-lead author Shuwen Sun, a graduate student in MIT’s Department of Physics. “The superconducting gap gives us a clue to what kind of mechanism can lead to things like room-temperature superconductors that will eventually benefit human society.”
The researchers made their discovery using a new experimental platform that allows them to essentially “watch” the superconducting gap in real time as superconductivity emerges in two-dimensional materials. They plan to apply the platform to further probe MATTG, and to map the superconducting gap in other 2D materials — an effort that could reveal promising candidates for future technologies.
“Understanding one unconventional superconductor very well may trigger our understanding of the rest,” says Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT and a member of the Research Laboratory of Electronics. “This understanding may guide the design of superconductors that work at room temperature, for example, which is sort of the Holy Grail of the entire field.”
The study’s other co-lead author is Jeong Min Park PhD ’24; Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan are also co-authors.
The ties that bind
Graphene is a material that comprises a single layer of carbon atoms linked in a hexagonal pattern resembling chicken wire. A sheet of graphene can be isolated by carefully exfoliating an atom-thin flake from a block of graphite (the same stuff as pencil lead). In the 2010s, theorists predicted that if two graphene layers were stacked at a very special angle, the resulting structure should be capable of exotic electronic behavior.
In 2018, Jarillo-Herrero and his colleagues became the first to produce magic-angle graphene in experiments and to observe some of its extraordinary properties. That discovery sprouted an entire new field, known as “twistronics”: the study of atomically thin, precisely twisted materials. Jarillo-Herrero’s group has since studied other configurations of magic-angle graphene with two, three, and more layers, as well as stacked and twisted structures of other two-dimensional materials. Their work, along with that of other groups, has revealed signatures of unconventional superconductivity in some of these structures.
Superconductivity is a state that a material can exhibit under certain conditions (usually at very low temperatures). When a material is a superconductor, any electrons that pass through can pair up, rather than repelling and scattering away. When they couple up in what is known as “Cooper pairs,” the electrons can glide through a material without friction, instead of knocking against each other and flying away as lost energy. This pairing up of electrons is what enables superconductivity, though the way in which they are bound can vary.
“In conventional superconductors, the electrons in these pairs are very far away from each other, and weakly bound,” says Park. “But in magic-angle graphene, we could already see signatures that these pairs are very tightly bound, almost like a molecule. There were hints that there is something very different about this material.”
Tunneling through
In their new study, Jarillo-Herrero and his colleagues aimed to directly observe and confirm unconventional superconductivity in a magic-angle graphene structure. To do so, they would have to measure the material’s superconducting gap.
“When a material becomes superconducting, electrons move together as pairs rather than individually, and there’s an energy ‘gap’ that reflects how they’re bound,” Park explains. “The shape and symmetry of that gap tells us the underlying nature of the superconductivity.”
Scientists have measured the superconducting gap in materials using specialized techniques, such as tunneling spectroscopy. The technique takes advantage of a quantum mechanical property known as “tunneling.” At the quantum scale, an electron behaves not just as a particle, but also as a wave; as such, its wave-like properties enable an electron to travel, or “tunnel,” through a material, as if it could move through walls.
Such tunneling spectroscopy measurements can give an idea of how easy it is for an electron to tunnel into a material, and in some sense, how tightly packed and bound the electrons in the material are. When performed in a superconducting state, it can reflect the properties of the superconducting gap. However, tunneling spectroscopy alone cannot always tell whether the material is, in fact, in a superconducting state. Directly linking a tunneling signal to a genuine superconducting gap is both essential and experimentally challenging.
In their new work, Park and her colleagues developed an experimental platform that combines electron tunneling with electrical transport — a technique that is used to gauge a material’s superconductivity, by sending current through and continuously measuring its electrical resistance (zero resistance signals that a material is in a superconducting state).
The team applied the new platform to measure the superconducting gap in MATTG. By combining tunneling and transport measurements in the same device, they could unambiguously identify the superconducting tunneling gap, one that appeared only when the material exhibited zero electrical resistance, which is the hallmark of superconductivity. They then tracked how this gap evolved under varying temperature and magnetic fields. Remarkably, the gap displayed a distinct V-shaped profile, which was clearly different from the flat and uniform shape of conventional superconductors.
This V shape reflects a certain unconventional mechanism by which electrons in MATTG pair up to superconduct. Exactly what that mechanism is remains unknown. But the fact that the shape of the superconducting gap in MATTG stands out from that of the typical superconductor provides key evidence that the material is an unconventional superconductor.
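To see why the shape of the tunneling signal is so diagnostic, it helps to compare the density of states of a fully gapped superconductor with that of one whose gap has nodes. The sketch below is an illustrative, textbook-style calculation in Python, not the team’s analysis: it contrasts the standard BCS form (flat and empty inside the gap) with an angle-averaged, d-wave-like gap, which fills in linearly at low energy and produces a V shape.

```python
import numpy as np

def dos_bcs(E, delta):
    """Tunneling density of states of a fully gapped (s-wave BCS)
    superconductor: zero inside the gap, with coherence peaks at |E| = delta."""
    E = np.atleast_1d(np.abs(np.asarray(E, dtype=float)))
    dos = np.zeros_like(E)
    outside = E > delta
    dos[outside] = E[outside] / np.sqrt(E[outside] ** 2 - delta ** 2)
    return dos

def dos_nodal(E, delta, n_phi=2000):
    """Angle-averaged DOS for a gap delta*|cos(2*phi)| with nodes
    (d-wave-like): finite weight at arbitrarily low |E|, rising roughly
    linearly from zero -- the 'V' shape."""
    gaps = delta * np.abs(np.cos(2 * np.linspace(0, np.pi, n_phi, endpoint=False)))
    E = np.atleast_1d(np.abs(np.asarray(E, dtype=float)))
    dos = np.empty_like(E)
    for i, e in enumerate(E):
        ok = e > gaps  # only angles where the local gap is below |E| contribute
        dos[i] = np.sum(e / np.sqrt(e ** 2 - gaps[ok] ** 2)) / n_phi if ok.any() else 0.0
    return dos

delta = 1.0
print(dos_bcs(0.5, delta)[0])    # 0.0: the conventional gap is empty at low energy
print(dos_nodal(0.5, delta)[0])  # positive: the nodal gap fills in, giving the V
```

The qualitative point is the low-energy behavior: a flat, fully gapped spectrum versus a V that reaches down to zero energy, which is the kind of contrast the tunneling measurements resolve.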
In conventional superconductors, electrons pair up through vibrations of the surrounding atomic lattice, which effectively jostle the particles together. But Park suspects that a different mechanism could be at work in MATTG.
“In this magic-angle graphene system, there are theories explaining that the pairing likely arises from strong electronic interactions rather than lattice vibrations,” she posits. “That means electrons themselves help each other pair up, forming a superconducting state with special symmetry.”
Going forward, the team will test other two-dimensional twisted structures and materials using the new experimental platform.
“This allows us to both identify and study the underlying electronic structures of superconductivity and other quantum phases as they happen, within the same sample,” Park says. “This direct view can reveal how electrons pair and compete with other states, paving the way to design and control new superconductors and quantum materials that could one day power more efficient technologies or quantum computers.”
This research was supported, in part, by the U.S. Army Research Office, the U.S. Air Force Office of Scientific Research, the MIT/MTL Samsung Semiconductor Research Fund, the Sagol WIS-MIT Bridge Program, the National Science Foundation, the Gordon and Betty Moore Foundation, and the Ramon Areces Foundation.
MIT researchers invent new human brain model to enable disease research, drug discovery
Cultured from induced pluripotent stem cells, “miBrains” integrate all major brain cell types and model brain structures, cellular interactions, activity, and pathological features.
A new 3D human brain tissue platform developed by MIT researchers is the first to integrate all major brain cell types, including neurons, glial cells, and the vasculature, into a single culture.
Grown from individual donors’ induced pluripotent stem cells, these models — dubbed Multicellular Integrated Brains (miBrains) — replicate key features and functions of human brain tissue, are readily customizable through gene editing, and can be produced in quantities that support large-scale research.
Although each unit is smaller than a dime, miBrains may be worth a great deal to researchers and drug developers who need more complex living lab models to better understand brain biology and treat diseases.
“The miBrain is the only in vitro system that contains all six major cell types that are present in the human brain,” says Li-Huei Tsai, Picower Professor, director of The Picower Institute for Learning and Memory, and a senior author of the open-access study describing miBrains, published Oct. 17 in the Proceedings of the National Academy of Sciences.
“In their first application, miBrains enabled us to discover how one of the most common genetic markers for Alzheimer’s disease alters cells’ interactions to produce pathology,” she adds.
Tsai’s co-senior authors are Robert Langer, David H. Koch (1962) Institute Professor, and Joel Blanchard, associate professor in the Icahn School of Medicine at Mt. Sinai in New York, and a former Tsai Laboratory postdoc. The study is led by Alice Stanton, former postdoc in the Langer and Tsai labs and now assistant professor at Harvard Medical School and Massachusetts General Hospital, and Adele Bubnys, a former Tsai lab postdoc and current senior scientist at Arbor Biotechnologies.
Benefits from two kinds of models
The more closely a model recapitulates the brain’s complexity, the better suited it is for extrapolating how human biology works and how potential therapies may affect patients. In the brain, neurons interact with each other and with various helper cells, all of which are arranged in a three-dimensional tissue environment that includes blood vessels and other components. All of these interactions are necessary for health, and any of them can contribute to disease.
Simple cultures of just one or a few cell types can be created in quantity relatively easily and quickly, but they cannot tell researchers about the myriad interactions that are essential to understanding health or disease. Animal models embody the brain’s complexity, but can be difficult and expensive to maintain, slow to yield results, and different enough from humans to yield occasionally divergent results.
MiBrains combine advantages from each type of model, retaining much of the accessibility and speed of lab-cultured cell lines while allowing researchers to obtain results that more closely reflect the complex biology of human brain tissue. Moreover, they are derived from individual patients, making them personalized to an individual’s genome. In the model, the six cell types self-assemble into functioning units, including blood vessels, immune defenses, and nerve signal conduction, among other features. The researchers ensured that miBrains also possess a blood-brain barrier capable of gatekeeping which substances may enter the brain, including most traditional drugs.
“The miBrain is very exciting as a scientific achievement,” says Langer. “Recent trends toward minimizing the use of animal models in drug development could make systems like this one increasingly important tools for discovering and developing new human drug targets.”
Two ideal blends for functional brain models
Designing a model integrating so many cell types presented challenges that required many years to overcome. Among the most crucial was identifying a substrate able to provide physical structure for cells and support their viability. The research team drew inspiration from the environment that surrounds cells in natural tissue, the extracellular matrix (ECM). The miBrain’s hydrogel-based “neuromatrix” mimics the brain’s ECM with a custom blend of polysaccharides, proteoglycans, and basement membrane that provide a scaffold for all the brain’s major cell types while promoting the development of functional neurons.
A second blend would also prove critical: the proportion of cells that would result in functional neurovascular units. The actual ratios of cell types have been a matter of debate for the last several decades, with even the more advanced methodologies providing only rough brushstrokes for guidance: for example, 45-75 percent of all cells for oligodendroglia, or 19-40 percent for astrocytes.
The researchers developed the six cell types from patient-donated induced pluripotent stem cells, verifying that each cultured cell type closely recreated naturally-occurring brain cells. Then, the team experimentally iterated until they hit on a balance of cell types that resulted in functional, properly structured neurovascular units. This laborious process would turn out to be an advantageous feature of miBrains: because cell types are cultured separately, they can each be genetically edited so that the resulting model is tailored to replicate specific health and disease states.
“Its highly modular design sets the miBrain apart, offering precise control over cellular inputs, genetic backgrounds, and sensors — useful features for applications such as disease modeling and drug testing,” says Stanton.
Alzheimer’s discovery using miBrain
To test miBrain’s capabilities, the researchers embarked on a study of the gene variant APOE4, the strongest genetic predictor of the development of Alzheimer’s disease. Although astrocytes, one type of brain cell, are known to be a primary producer of the APOE protein, the role that astrocytes carrying the APOE4 variant play in disease pathology is poorly understood.
MiBrains were well-suited to the task for two reasons. First of all, they integrate astrocytes with the brain’s other cell types, so that their natural interactions with other cells can be mimicked. Second, because the platform allowed the team to integrate cell types individually, APOE4 astrocytes could be studied in cultures where all other cell types carried APOE3, a gene variant that does not increase Alzheimer’s risk. This enabled the researchers to isolate the contribution APOE4 astrocytes make to pathology.
In one experiment, the researchers examined APOE4 astrocytes cultured alone, versus ones in APOE4 miBrains. They found that only in the miBrains did the astrocytes express many measures of immune reactivity associated with Alzheimer’s disease, suggesting the multicellular environment contributes to that state.
The researchers also tracked the Alzheimer’s-associated proteins amyloid and phosphorylated tau, and found that all-APOE4 miBrains accumulated them whereas all-APOE3 miBrains did not, as expected. However, APOE3 miBrains containing APOE4 astrocytes still exhibited amyloid and tau accumulation.
Then the team dug deeper into how APOE4 astrocytes’ interactions with other cell types might lead to their contribution to disease pathology. Prior studies have implicated molecular cross-talk with the brain’s microglia immune cells. Notably, when the researchers cultured APOE4 miBrains without microglia, their production of phosphorylated tau was significantly reduced. When the researchers dosed APOE4 miBrains with culture media from astrocytes and microglia combined, phosphorylated tau increased, whereas when they dosed them with media from cultures of astrocytes or microglia alone, the tau production did not increase. The results therefore provided new evidence that molecular cross-talk between microglia and astrocytes is indeed required for phosphorylated tau pathology.
In the future, the research team plans to add new features to miBrains to more closely model characteristics of working brains, such as leveraging microfluidics to add flow through blood vessels, or single-cell RNA sequencing methods to improve profiling of neurons.
Researchers expect that miBrains could advance research discoveries and treatment modalities for Alzheimer’s disease and beyond.
“Given its sophistication and modularity, there are limitless future directions,” says Stanton. “Among them, we would like to harness it to gain new insights into disease targets, advanced readouts of therapeutic efficacy, and optimization of drug delivery vehicles.”
“I’m most excited by the possibility to create individualized miBrains for different individuals,” adds Tsai. “This promises to pave the way for developing personalized medicine.”
Funding for the study came from the BT Charitable Foundation, Freedom Together Foundation, the Robert A. and Renee E. Belfer Family, Lester A. Gimpelson, Eduardo Eurnekian, Kathleen and Miguel Octavio, David B. Emmes, the Halis Family, the Picower Institute, and an anonymous donor.
A new way to understand and predict gene splicing
The KATMAP model, developed by researchers in the Department of Biology, can predict alternative cell splicing, which allows cells to create endless diversity from the same sets of genetic blueprints.
Although heart cells and skin cells contain identical instructions for creating proteins encoded in their DNA, they’re able to fill such disparate niches because molecular machinery can cut out and stitch together different segments of those instructions to create endlessly unique combinations.
The ingenuity of using the same genes in different ways is made possible by a process called splicing and is controlled by splicing factors; which splicing factors a cell employs determines what sets of instructions that cell produces, which, in turn, gives rise to proteins that allow cells to fulfill different functions.
In an open-access paper published today in Nature Biotechnology, researchers in the MIT Department of Biology outlined a framework for parsing the complex relationship between sequences and splicing regulation to investigate the regulatory activities of splicing factors, creating models that can be applied to interpret and predict splicing regulation across different cell types, and even different species. Called Knockdown Activity and Target Models from Additive regression Predictions, KATMAP draws on experimental data from disrupting the expression of a splicing factor and information on which sequences the splicing factor interacts with to predict its likely targets.
Aside from the benefits of a better understanding of gene regulation, splicing mutations — either in the gene that is spliced or in the splicing factor itself — can give rise to diseases such as cancer by altering how genes are expressed, leading to the creation or accumulation of faulty or mutated proteins. This information is critical for developing therapeutic treatments for those diseases. The researchers also demonstrated that KATMAP can potentially be used to predict whether synthetic nucleic acids, a promising treatment option for disorders including a subset of muscular atrophy and epilepsy disorders, affect splicing.
Perturbing splicing
In eukaryotic cells, including our own, splicing occurs after DNA is transcribed to produce an RNA copy of a gene, which contains both coding and non-coding regions of RNA. The noncoding intron regions are removed, and the coding exon segments are spliced back together to make a near-final blueprint, which can then be translated into a protein.
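The cut-and-join operation described above can be sketched as a toy function. The sequence, exon coordinates, and the exon-skipping example below are invented for illustration and are far simpler than real transcripts:

```python
def splice(pre_mrna: str, exons: list) -> str:
    """Toy illustration of splicing: keep only the exon intervals
    (0-based, end-exclusive) in order, discarding the introns between them."""
    return "".join(pre_mrna[start:end] for start, end in exons)

# Hypothetical pre-mRNA: three exons (upper case) separated by two introns (lower case).
pre = "ATGGCC" + "gtcag" + "GTTACA" + "gtcag" + "AAATGA"
e1, e2, e3 = (0, 6), (11, 17), (22, 28)

print(splice(pre, [e1, e2, e3]))  # include exon 2: ATGGCCGTTACAAAATGA
print(splice(pre, [e1, e3]))      # skip exon 2:    ATGGCCAAATGA
```

The two outputs illustrate alternative splicing in miniature: the same pre-mRNA yields different mature blueprints depending on which exons the splicing machinery keeps.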
According to first author Michael P. McGurk, a postdoc in the lab of MIT Professor Christopher Burge, previous approaches could provide an average picture of regulation, but could not necessarily predict the regulation of splicing factors at particular exons in particular genes.
KATMAP draws on RNA sequencing data generated from perturbation experiments, which alter the expression level of a regulatory factor by either overexpressing it or knocking down its levels. The consequences of overexpression or knockdown are that the genes regulated by the splicing factor should exhibit different levels of splicing after perturbation, which helps the model identify the splicing factor’s targets.
Cells, however, are complex, interconnected systems, where one small change can cause a cascade of effects. KATMAP is able to distinguish direct targets from indirect, downstream effects by incorporating known information about the sequence the splicing factor is likely to interact with, referred to as a binding site or binding motif.
“In our analyses, we identify predicted targets as exons that have binding sites for this particular factor in the regions where this model thinks they need to be to impact regulation,” McGurk says, while non-targets may be affected by perturbation but don’t have the likely appropriate binding sites nearby.
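As a rough illustration of this idea (not the published KATMAP implementation), one can imagine an additive score in which each motif match contributes a weight that depends on its distance from the splice site. The motif, weights, and sequence below are all hypothetical; in the real model, the position-dependent activities are learned from the perturbation data rather than fixed by hand:

```python
def motif_hits(seq: str, motif: str) -> list:
    """Positions where the (hypothetical) binding motif occurs in the sequence."""
    return [i for i in range(len(seq) - len(motif) + 1) if seq[i:i + len(motif)] == motif]

def target_score(seq: str, splice_site: int, motif: str, region_weights: dict) -> float:
    """Additive sketch: each motif hit contributes a weight determined by
    which distance window (relative to the splice site) it falls in.
    Exons with high scores would be called predicted direct targets."""
    score = 0.0
    for pos in motif_hits(seq, motif):
        dist = abs(pos - splice_site)
        for (lo, hi), w in region_weights.items():
            if lo <= dist < hi:
                score += w
    return score

# Assumed weights: proximal binding sites matter most, distal ones less.
weights = {(0, 50): 1.0, (50, 200): 0.3}
seq = "TGCATG" + "A" * 40 + "TGCATG" + "A" * 200  # two motif hits, then no more

print(target_score(seq, splice_site=0, motif="TGCATG", region_weights=weights))    # 2.0
print(target_score(seq, splice_site=100, motif="TGCATG", region_weights=weights))  # 0.6
```

An exon affected by the knockdown but scoring near zero here would be flagged as a likely indirect effect, mirroring the distinction the quote above describes.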
This is especially helpful for splicing factors that aren’t as well-studied.
“One of our goals with KATMAP was to try to make the model general enough that it can learn what it needs to assume for particular factors, like how similar the binding site has to be to the known motif or how regulatory activity changes with the distance of the binding sites from the splice sites,” McGurk says.
Starting simple
Although predictive models can be very powerful at presenting possible hypotheses, many are considered “black boxes,” meaning the rationale that gives rise to their conclusions is unclear. KATMAP, on the other hand, is an interpretable model that enables researchers to quickly generate hypotheses and interpret splicing patterns in terms of regulatory factors while also understanding how the predictions were made.
“I don’t just want to predict things, I want to explain and understand,” McGurk says. “We set up the model to learn from existing information about splicing and binding, which gives us biologically interpretable parameters.”
The researchers did have to make some simplifying assumptions in order to develop the model. KATMAP considers only one splicing factor at a time, although it is possible for splicing factors to work in concert with one another. The RNA target sequence could also be folded in such a way that the factor wouldn’t be able to access a predicted binding site, so the site is present but not utilized.
“When you try to build up complete pictures of complex phenomena, it’s usually best to start simple,” McGurk says. “A model that only considers one splicing factor at a time is a good starting point.”
David McWaters, another postdoc in the Burge Lab and a co-author on the paper, conducted key experiments to test and validate that aspect of the KATMAP model.
Future directions
The Burge lab is collaborating with researchers at Dana-Farber Cancer Institute to apply KATMAP to the question of how splicing factors are altered in disease contexts, as well as with other researchers at MIT as part of an MIT HEALS grant to model splicing factor changes in stress responses. McGurk also hopes to extend the model to incorporate cooperative regulation for splicing factors that work together.
“We’re still in a very exploratory phase, but I would like to be able to apply these models to try to understand splicing regulation in disease or development. In terms of variation of splicing factors, they are related, and we need to understand both,” McGurk says.
Burge, the Uncas (1923) and Helen Whitaker Professor and senior author of the paper, will continue to work on generalizing this approach to build interpretable models for other aspects of gene regulation.
“We now have a tool that can learn the pattern of activity of a splicing factor from types of data that can be readily generated for any factor of interest,” says Burge, who is also an extra-mural member of the Koch Institute for Integrative Cancer Research and an associate member of the Broad Institute of MIT and Harvard. “As we build up more of these models, we’ll be better able to infer which splicing factors have altered activity in a disease state from transcriptomic data, to help understand which splicing factors are driving pathology.”
Startup provides a nontechnical gateway to coding on quantum computers
Co-founded by Kanav Setia and Jason Necaise ’20, qBraid lets users access the most popular quantum devices and software programs on an intuitive, cloud-based platform.
Quantum computers have the potential to model new molecules and weather patterns better than any computer today. They may also one day accelerate artificial intelligence algorithms at a much lower energy footprint. But anyone interested in using quantum computers faces a steep learning curve that starts with getting access to quantum devices and then figuring out one of the many quantum software programs on the market.
Now qBraid, founded by a team including Kanav Setia and Jason Necaise ’20, is providing a gateway to quantum computing with a platform that gives users access to the leading quantum devices and software. Users can log on to qBraid’s cloud-based interface and connect with quantum devices and other computing resources from leading companies like Nvidia, Microsoft, and IBM. In a few clicks, they can start coding or deploy cutting-edge software that works across devices.
“The mission is to take you from not knowing anything about quantum computing to running your first program on these amazing machines in less than 10 minutes,” Setia says. “We’re a one-stop platform that gives access to everything the quantum ecosystem has to offer. Our goal is to enable anyone — whether they’re enterprise customers, academics, or individual users — to build and ultimately deploy applications.”
Since its founding in June of 2020, qBraid has helped more than 20,000 people in more than 120 countries deploy code on quantum devices. That traction is ultimately helping to drive innovation in a nascent industry that’s expected to play a key role in our future.
“This lowers the barrier to entry for a lot of newcomers,” Setia says. “They can be up and running in a few minutes instead of a few weeks. That’s why we’ve gotten so much adoption around the world. We’re one of the most popular platforms for accessing quantum software and hardware.”
A quantum “software sandbox”
Setia met Necaise while the two interned at IBM. At the time, Necaise was an undergraduate at MIT majoring in physics, while Setia was at Dartmouth College. The two enjoyed working together, and Necaise said if Setia ever started a company, he’d be interested in joining.
A few months later, Setia decided to take him up on the offer. The other co-founders of qBraid are Setia’s former Dartmouth classmates Jared Heath and Elliot Potter, Dartmouth Associate Professor James Whitfield, and Andrea Coladangelo, a postdoc at Berkeley at the time who is currently an assistant professor at the University of Washington.
At Dartmouth, Setia had taken one of the first applied quantum computing classes, but students spent weeks struggling to install all the necessary software programs before they could even start coding.
“We hadn’t even gotten close to developing any useful algorithms,” Setia says. “The idea for qBraid was, ‘Why don’t we build a software sandbox in the cloud and give people an easy programming setup out of the box?’ Connection with the hardware would already be done.”
The founders received early support from the MIT Sandbox Innovation Fund and took part in the delta v summer startup accelerator run by the Martin Trust Center for MIT Entrepreneurship.
“Both programs provided us with very strong mentorship,” Setia says. “They give you frameworks on what a startup should look like, and they bring in some of the smartest people in the world to mentor you — people you’d never have access to otherwise.”
Necaise and the other co-founders left the company in 2021 and 2022. Setia, meanwhile, continued to find problems with quantum software outside of the classroom.
“This is a massive bottleneck,” Setia says. “I’d worked on several quantum software programs that pushed out updates or changes, and suddenly all hell broke loose on my codebase. I’d spend two to four weeks jostling with these updates that had almost nothing to do with the quantum algorithms I was working on.”
QBraid started as a platform with pre-installed software that let developers start writing code immediately. The company also added support for version-controlled quantum software so developers could build applications on top without worrying about changes. Over time, qBraid added connections to quantum computers and tools that let quantum programs run across different devices.
“The pitch was you don’t need to manage a bunch of software or a whole bunch of cloud accounts,” Setia says. “We’re a single platform: the quantum cloud.”
QBraid also launched qBook, a learning platform that offers interactive courses in quantum computing.
“If you see a piece of code you like, you just click play and the code runs,” Setia says. “You can run a whole bunch of code, modify it on the fly, and you can understand how it works. It runs on laptops, iPads, and phones. A significant portion of our users are from developing countries, and they’re developing applications from their phones.”
Democratizing quantum computing
Today, qBraid’s 20,000 users come from over 400 universities and 100 companies around the world. As its user base has grown, the company has moved from integrating third-party quantum computers into its platform to building a quantum operating system, qBraid-OS, which is currently being used by four leading quantum companies.
“We are productizing these quantum computers,” Setia explains. “Many quantum companies are realizing they want to focus their energy completely on the hardware, with us productizing their infrastructure. We’re like the operating system for quantum computers.”
People are using qBraid to build quantum applications in AI and machine learning, to discover new molecules or develop new drugs, and to develop applications in finance and cybersecurity. With every new use case, Setia says qBraid is democratizing quantum computing to create the quantum workforce that will continue to advance the field.
“[In 2018], an article in The New York Times said there were possibly less than 1,000 people in the world that could be called experts in quantum programming,” Setia says. “A lot of people want to access these cutting-edge machines, but they don’t have the right software backgrounds. They are just getting started and want to play with algorithms. QBraid gives those people an easy programming setup out of the box.”
Q&A: How MITHIC is fostering a culture of collaboration at MIT

A presidential initiative, the MIT Human Insight Collaborative is supporting new interdisciplinary initiatives and projects across the Institute.

The MIT Human Insight Collaborative (MITHIC) is a presidential initiative with a mission of elevating human-centered research and teaching and connecting scholars in the humanities, arts, and social sciences with colleagues across the Institute.
Since its launch in 2024, MITHIC has funded 31 projects led by teaching and research staff representing 22 different units across MIT. The collaborative is holding its annual event on Nov. 17.
In this Q&A, Keeril Makan, associate dean in the MIT School of Humanities, Arts, and Social Sciences, and Maria Yang, interim dean of the MIT School of Engineering, discuss the value of MITHIC and the ways it’s accelerating new research and collaborations across the Institute. Makan is the Michael (1949) and Sonja Koerner Music Composition Professor and faculty lead for MITHIC. Yang is the William E. Leonhard (1940) Professor in the Department of Mechanical Engineering and co-chair of MITHIC’s SHASS+ Connectivity Fund.
Q: You each come from different areas of MIT. Looking at MITHIC from your respective roles, why is this initiative so important for the Institute?
Makan: The world is counting on MIT to develop solutions to some of the world’s greatest challenges, such as artificial intelligence, poverty, and health care. These are all issues that arise from human activity, a thread that runs through much of the research we’re focused on in SHASS. Through MITHIC, we’re embedding human-centered thinking and connecting the Institute’s top scholars in the work needed to find innovative ways of addressing these problems.
Yang: MITHIC is very important to MIT, and I think of this from the point of view of an engineer, which is my background. Engineers often think about the technology first, which is absolutely important. But for that technology to have real impact, you have to think about the human insights that make that technology relevant and able to be deployed in the world. So really having a deep understanding of that is core to MITHIC and MIT’s engineering enterprise.
Q: How does MITHIC fit into MIT’s broader mission?
Makan: MITHIC highlights how the work we do in the School of Humanities, Arts, and Social Sciences is aligned with MIT’s mission, which is to address the world’s great problems. But MITHIC has also connected all of MIT in this endeavor. We have faculty from all five schools and the MIT Schwarzman College of Computing involved in evaluating MITHIC project proposals. Each of them represents a different point of view and is engaging with these projects that originate in SHASS, but actually cut across many different fields. Seeing their perspectives on these projects has been inspiring.
Yang: I think of MIT’s main mission as using technology and many other things to make impact in the world, especially social impact. The kind of interdisciplinary work that MITHIC catalyzes really enables all of that work to happen in a new and profound way. The SHASS+ Connectivity Fund, which connects SHASS faculty and researchers with colleagues outside of SHASS, has resulted in collaborations that were not possible before. One example is a project being led by professors Mark Rau, who has a shared appointment between Music and Electrical Engineering and Computer Science, and Antoine Allanore in Materials Science and Engineering. The two of them are looking at how they can take ancient unplayable instruments and recreate them using new technologies for scanning and fabrication. They’re also working with the Museum of Fine Arts, so it’s a whole new type of collaboration that exemplifies MITHIC.
Q: What has been the community response to MITHIC in its first year?
Makan: It’s been very strong. We found a lot of pent-up demand, both from faculty in SHASS and faculty in the sciences and engineering. Either there were preexisting collaborations that they could take to the next level through MITHIC, or there was the opportunity to meet someone new and talk to someone about a problem and how they could collaborate. MITHIC also hosted a series of Meeting of the Minds events, which are a chance to have faculty and members of the community get to know one another on a certain topic. This community building has been exciting, and led to an overwhelming number of applications last year. There has also been significant student involvement, with several projects bringing on UROPs [Undergraduate Research Opportunities Program projects] and PhD students to help with their research. MITHIC gives a real morale boost and a lot of hope that there is a focus upon building collaborations at MIT and on not forgetting that the world needs humanists, artists, and social scientists.
Yang: One faculty member told me the SHASS+ Connectivity Fund has given them hope for the kind of research that we do because of the cross collaboration. There’s a lot of excitement and enthusiasm for this type of work.
Q: The SHASS+ Connectivity Fund is designed to support interdisciplinary collaborations at MIT. What’s an example of a SHASS+ project that’s worked particularly well?
Makan: One exciting collaboration is between professors Jörn Dunkel in Mathematics and In Song Kim in Political Science. In Song is someone who has done a lot of work on studying lobbying and its effect upon the legislative process. He met Jörn, I believe, at one of MIT’s daycare centers, so it’s a relationship that started in a very informal fashion. But they found they actually had ways of looking at math and quantitative analysis that could complement one another. Their work is creating a new subfield and taking the research in a direction that would not be possible without this funding.
Yang: One of the SHASS+ projects that I think is really interesting is between professors Marzyeh Ghassemi in Electrical Engineering and Computer Science and Esther Duflo in Economics. The two of them are looking at how they can use AI to help health diagnostics in low-resource global settings, where there isn’t a lot of equipment or technology to do basic health diagnostics. They can use handheld, low-cost equipment to do things like predict if someone is going to have a heart attack. And they are not only developing the diagnostic tool, but evaluating the fairness of the algorithm. The project is an excellent example of using a MITHIC grant to make impact in the world.
Q: What has been MITHIC’s impact in terms of elevating research and teaching within SHASS?
Makan: In addition to the SHASS+ Connectivity Fund, there are two other possibilities to help support both SHASS research as well as educational initiatives: the Humanities Cultivation Fund and the SHASS Education Innovation Fund. And both of these are providing funding in excess of what we normally see within SHASS. It both recognizes the importance of the work of our faculty and it also gives them the means to actually take ideas to a much further place.
One of the projects that MITHIC is helping to support is the Compass Initiative. Compass was started by Lily Tsai, one of our professors in Political Science, along with other faculty in SHASS to create essentially an introductory class to the different methodologies within SHASS. So we have philosophers, music historians, etc., all teaching together, all addressing how we interact with one another, what it means to be a good citizen, what it means to be socially aware and civically engaged. This is a class that is very timely for MIT and for the world. And we were able to give it robust funding so they can take this and develop it even further.
MITHIC has also been able to take local initiatives in SHASS and elevate them. There has been a group of anthropologists, historians, and urban planners that have been working together on a project called the Living Climate Futures Lab. This is a group interested in working with frontline communities around climate change and sustainability. They work to build trust with local communities and start to work with them on thinking about how climate change affects them and what solutions might look like. This is a powerful and uniquely SHASS approach to climate change, and through MITHIC, we’re able to take this seed effort, robustly fund it, and help connect it to the larger climate project at MIT.
Q: What excites you most about the future of MITHIC at MIT?
Yang: We have a lot of MIT efforts that are trying to break people out of their disciplinary silos, and MITHIC really is a big push on that front. It’s a presidential initiative, so it’s high on the priority list of what people are thinking about. We’ve already done our first round, and the second round is going to be even more exciting, so it’s only going to gain in force. In SHASS+, we’re actually having two calls for proposals this academic year instead of just one. I feel like there’s still so much possibility to bring together interdisciplinary research across the Institute.
Makan: I’m excited about how MITHIC is changing the culture of MIT. MIT thinks of itself in terms of engineering, science, and technology, and this is an opportunity to think about those STEM fields within the context of human activity and humanistic thinking. Having this shift at MIT in how we approach solving problems bodes well for the world, and it places SHASS as this connective tissue at the Institute. It connects the schools and it can also connect the other initiatives, such as manufacturing and health and life sciences. There’s an opportunity for MITHIC to seed all these other initiatives with the work that goes on in SHASS.
Study: Identifying kids who need help learning to read isn’t as easy as A, B, C

While most states mandate screenings to guide early interventions for children struggling with reading, many teachers feel underprepared to administer and interpret them.

In most states, schools are required to screen students as they enter kindergarten — a process that is meant to identify students who may need extra help learning to read. However, a new study by MIT researchers suggests that these screenings may not be working as intended in all schools.
The researchers’ survey of about 250 teachers found that many felt they did not receive adequate training to perform the tests, and about half reported that they were not confident that children who need extra instruction in reading end up receiving it.
When performed successfully, these screens can be essential tools to make sure children get the extra help they need to learn to read. However, the new findings suggest that many school districts may need to tweak how they implement the screenings and analyze the results, the researchers say.
“This result demonstrates the need to have a systematic approach for how the basic science on how children learn to read is translated into educational opportunity,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.
Gabrieli is the senior author of the new open-access study, which appears today in Annals of Dyslexia. Ola Ozernov-Palchik, an MIT research scientist who is also a research assistant professor at Boston University Wheelock College of Education and Human Development, is the lead author of the study.
Boosting literacy
Over the past 20 years, national reading proficiency scores in the United States have trended up, but only slightly. In 2022, 33 percent of fourth-graders achieved reading proficiency, compared to 29 percent in 1992, according to the National Assessment of Educational Progress reading report card. (The highest level achieved in the past 20 years was 37 percent, in 2017.)
In hopes of boosting those rates, most states have passed laws requiring students to be screened for potential reading struggles early in elementary school. In most cases, the screenings are required two or three times per year, in kindergarten, first grade, and second grade.
These tests are designed to identify students who have difficulty with skills such as identifying letters and the sounds they make, blending sounds to make words, and recognizing words that rhyme. Students with low scores in these measures can then be offered extra interventions designed to help them catch up.
“The indicators of future reading disability or dyslexia are present as early as within the first few months of kindergarten,” Ozernov-Palchik says. “And there’s also an overwhelming body of evidence showing that interventions are most effective in the earliest grades.”
In the new study, the researchers wanted to evaluate how effectively these screenings are being implemented in schools. With help from the National Center for Improving Literacy, they posted on social media sites seeking classroom teachers and reading specialists who are responsible for administering literacy screening tests.
The survey respondents came from 39 states and represented public and private schools, located in urban, suburban, and rural areas. The researchers asked those teachers dozens of questions about their experience with the literacy screenings, including questions about their training, the testing process itself, and the results of the screenings.
One of the significant challenges reported by the respondents was a lack of training. About 75 percent reported that they received fewer than three hours of training on how to perform the screens, and 44 percent received no training at all or less than an hour of training.
“Under ideal conditions, there is an expert who trains the educators, they provide practice opportunities, they provide feedback, and they observe the educators administer the assessment,” Ozernov-Palchik says. “None of this was done in many of the cases.”
Instead, many educators reported that they spent their own time figuring out how to give the evaluations, sometimes working with colleagues. And, new hires who arrived at a school after the initial training was given were often left on their own to figure it out.
Another major challenge was suboptimal conditions for administering the tests. About 80 percent of teachers reported interruptions during the screenings, and 40 percent had to do the screens in noisy locations such as a school hallway. More than half of the teachers also reported technical difficulties in administering the tests, and that rate was higher among teachers who worked at schools with a higher percentage of students from low socioeconomic status (SES) backgrounds.
Teachers also reported difficulties when it came to evaluating students categorized as English language learners (ELL). Many teachers relayed that they hadn’t been trained on how to distinguish students who were having trouble reading from those who struggled on the tests because they didn’t speak English well.
“The study reveals that there’s a lot of difficulty understanding how to handle English language learners in the context of screening,” Ozernov-Palchik says. “Overall, those kids tend to be either over-identified or under-identified as needing help, but they’re not getting the support that they need.”
Unrealized potential
Most concerning, the researchers say, is that in many schools, the results of the screening tests are not being used to get students the extra help that they need. Only 44 percent of the teachers surveyed said that their schools had a formal process for creating intervention plans for students after the screening was performed.
“Even though most educators said they believe that screening is important to do, they’re not feeling that it has the potential to drive change the way that it’s currently implemented,” Ozernov-Palchik says.
In the study, the researchers recommended several steps that state legislatures or individual school districts can take to make the screening process run more smoothly and successfully.
“Implementation is the key here,” Ozernov-Palchik says. “Teachers need more support and professional development. There needs to be systematic support as they administer the screening. They need to have designated spaces for screening, and explicit instruction in how to handle children who are English language learners.”
The researchers also recommend that school districts train an individual to take charge of interpreting the screening results and analyzing the data, to make sure that the screenings are leading to improved success in reading.
In addition to advocating for those changes, the researchers are also working on a technology platform that uses artificial intelligence to provide more individualized instruction in reading, which could help students receive help in the areas where they struggle the most.
The research was funded by Schmidt Sciences, the Chan Zuckerberg Initiative for the Reach Every Reader project, and the Halis Family Foundation.
Astronomical data collection of Taurus Molecular Cloud-1 reveals over 100 different molecules

The discovery will help researchers understand how chemicals form and change before stars and planets are born.

MIT researchers recently studied a region of space called the Taurus Molecular Cloud-1 (TMC-1) and discovered more than 100 different molecules floating in the gas there — more than in any other known interstellar cloud. They used powerful radio telescopes capable of detecting very faint signals across a wide range of wavelengths in the electromagnetic spectrum.
With over 1,400 observing hours on the Green Bank Telescope (GBT) — the world’s largest fully steerable radio telescope, located in West Virginia — researchers in the group of Brett McGuire collected the astronomical data needed to search for molecules in deep space and have made the full dataset publicly available. From these observations, published in The Astrophysical Journal Supplement Series (ApJS), the team cataloged 102 molecules in TMC-1, a cold interstellar cloud where sunlike stars are born. Most of these molecules are hydrocarbons (made only of carbon and hydrogen) and nitrogen-rich compounds, in contrast to the oxygen-rich molecules found around forming stars. Notably, they also detected 10 aromatic molecules (ring-shaped carbon structures), which make up a small but significant fraction of the carbon in the cloud.
“This project represents the single largest amount of telescope time for a molecular line survey that has been reduced and publicly released to date, enabling the community to pursue discoveries such as biologically relevant organic matter,” said Ci Xue, a postdoc in the McGuire Group and the project’s principal researcher. “This molecular census offers a new benchmark for the initial chemical conditions for the formation of stars and planets.”
To handle the immense dataset, the researchers built an automated system to organize and analyze the results. Using advanced statistical methods, they determined the amounts of each molecule present, including variations containing slightly different atoms (such as carbon-13 or deuterium).
“The data we’re releasing here are the culmination of more than 1,400 hours of observational time on the GBT, one of the NSF’s premier radio telescopes,” says McGuire, the Class of 1943 Career Development Associate Professor of Chemistry. “In 2021, these data led to the discovery of individual PAH molecules in space for the first time, answering a three-decade-old mystery dating back to the 1980s. In the following years, many more and larger PAHs have been discovered in these data, showing that there is indeed a vast and varied reservoir of this reactive organic carbon present at the earliest stages of star and planet formation. There is still so much more science, and so many new molecular discoveries, to be made with these data, but our team feels strongly that datasets like this should be opened to the scientific community, which is why we’re releasing the fully calibrated, reduced, science-ready product freely for anyone to use.”
Overall, this study provides the single largest publicly released molecular line survey to date, enabling the scientific community to pursue discoveries such as biologically relevant molecules. This molecular census offers a new benchmark for understanding the chemical conditions that exist before stars and planets form.
With a new molecule-based method, physicists peer inside an atom’s nucleus

An alternative to massive particle colliders, the approach could reveal insights into the universe’s starting ingredients.

Physicists at MIT have developed a new way to probe inside an atom’s nucleus, using the atom’s own electrons as “messengers” within a molecule.
In a study appearing today in the journal Science, the physicists precisely measured the energy of electrons whizzing around a radium atom that had been paired with a fluoride atom to make a molecule of radium monofluoride. They used the environments within molecules as a sort of microscopic particle collider, which contained the radium atom’s electrons and encouraged them to briefly penetrate the atom’s nucleus.
Typically, experiments to probe the inside of atomic nuclei involve massive, kilometers-long facilities that accelerate beams of electrons to speeds fast enough to collide with and break apart nuclei. The team’s new molecule-based method offers a table-top alternative to directly probe the inside of an atom’s nucleus.
Within molecules of radium monofluoride, the team measured the energies of a radium atom’s electrons as they pinged around inside the molecule. They discerned a slight energy shift and determined that electrons must have briefly penetrated the radium atom’s nucleus and interacted with its contents. As the electrons winged back out, they retained this energy shift, providing a nuclear “message” that could be analyzed to sense the internal structure of the atom’s nucleus.
The team’s method offers a new way to measure the nuclear “magnetic distribution.” In a nucleus, each proton and neutron acts like a small magnet, and they align differently depending on how the nucleus’ protons and neutrons are spread out. The team plans to apply their method to precisely map this property of the radium nucleus for the first time. What they find could help to answer one of the biggest mysteries in cosmology: Why do we see much more matter than antimatter in the universe?
“Our results lay the groundwork for subsequent studies aiming to measure violations of fundamental symmetries at the nuclear level,” says study co-author Ronald Fernando Garcia Ruiz, who is the Thomas A. Franck Associate Professor of Physics at MIT. “This could provide answers to some of the most pressing questions in modern physics.”
The study’s MIT co-authors include Shane Wilkins, Silviu-Marian Udrescu, and Alex Brinson, along with collaborators from multiple institutions including the Collinear Resonance Ionization Spectroscopy Experiment (CRIS) at CERN in Switzerland, where the experiments were performed.
Molecular trap
According to scientists’ best understanding, there must have been almost equal amounts of matter and antimatter when the universe first came into existence. However, the overwhelming majority of what scientists can measure and observe in the universe is made from matter, whose building blocks are the protons and neutrons within atomic nuclei.
This observation is in stark contrast to what our best theory of nature, the Standard Model, predicts, and it is thought that additional sources of fundamental symmetry violation are required to explain the almost complete absence of antimatter in our universe. Such violations could be seen within the nuclei of certain atoms such as radium.
Unlike most atomic nuclei, which are spherical in shape, the radium atom’s nucleus has a more asymmetrical configuration, similar to a pear. Scientists predict that this pear shape could significantly enhance their ability to sense the violation of fundamental symmetries, to the extent that such violations may be observable.
“The radium nucleus is predicted to be an amplifier of this symmetry breaking, because its nucleus is asymmetric in charge and mass, which is quite unusual,” says Garcia Ruiz, whose group has focused on developing methods to probe radium nuclei for signs of fundamental symmetry violation.
Peering inside the nucleus of a radium atom to investigate fundamental symmetries is an incredibly tricky exercise.
“Radium is naturally radioactive, with a short lifetime, and we can currently only produce radium monofluoride molecules in tiny quantities,” says study lead author Shane Wilkins, a former postdoc at MIT. “We therefore need incredibly sensitive techniques to be able to measure them.”
The team realized that by placing a radium atom in a molecule, they could contain and amplify the behavior of its electrons.
“When you put this radioactive atom inside of a molecule, the internal electric field that its electrons experience is orders of magnitude larger compared to the fields we can produce and apply in a lab,” explains Silviu-Marian Udrescu PhD ’24, a study co-author. “In a way, the molecule acts like a giant particle collider and gives us a better chance to probe the radium’s nucleus.”
Energy shift
In their new study, the team first paired radium atoms with fluoride atoms to create molecules of radium monofluoride. They found that in this molecule, the radium atom’s electrons were effectively squeezed, increasing the chance for electrons to interact with and briefly penetrate the radium nucleus.
The team then trapped and cooled the molecules and sent them through a system of vacuum chambers, into which they also sent lasers, which interacted with the molecules. In this way the researchers were able to precisely measure the energies of electrons inside each molecule.
When they tallied the energies, they found that the electrons appeared to have a slightly different energy compared to what physicists expect if they did not penetrate the nucleus. Although this energy shift was small — just a millionth of the energy of the laser photon used to excite the molecules — it gave unambiguous evidence of the molecules’ electrons interacting with the protons and neutrons inside the radium nucleus.
“There are many experiments measuring interactions between nuclei and electrons outside the nucleus, and we know what those interactions look like,” Wilkins explains. “When we went to measure these electron energies very precisely, it didn’t quite add up to what we expected assuming they interacted only outside of the nucleus. That told us the difference must be due to electron interactions inside the nucleus.”
“We now have proof that we can sample inside the nucleus,” Garcia Ruiz says. “It’s like being able to measure a battery’s electric field. People can measure its field outside, but to measure inside the battery is far more challenging. And that’s what we can do now.”
Going forward, the team plans to apply the new technique to map the distribution of forces inside the nucleus. Their experiments have so far involved radium nuclei that sit in random orientations inside each molecule at high temperature. Garcia Ruiz and his collaborators would like to be able to cool these molecules and control the orientations of their pear-shaped nuclei such that they can precisely map their contents and hunt for the violation of fundamental symmetries.
“Radium-containing molecules are predicted to be exceptionally sensitive systems in which to search for violations of the fundamental symmetries of nature,” Garcia Ruiz says. “We now have a way to carry out that search.”
This research was supported, in part, by the U.S. Department of Energy.
Five with MIT ties elected to National Academy of Medicine for 2025

Professors Facundo Batista and Dina Katabi, along with three additional MIT alumni, are honored for their outstanding professional achievement and commitment to service.

On Oct. 20 during its annual meeting, the National Academy of Medicine announced the election of 100 new members, including MIT faculty members Dina Katabi and Facundo Batista, along with three additional MIT alumni.
Election to the National Academy of Medicine (NAM) is considered one of the highest honors in the fields of health and medicine, recognizing individuals who have demonstrated outstanding professional achievement and commitment to service.
Facundo Batista is the associate director and scientific director of the Ragon Institute of MGH, MIT and Harvard, as well as the first Phillip T. and Susan M. Ragon Professor in the MIT Department of Biology. The National Academy of Medicine recognized Batista for “his work unraveling the biology of antibody-producing B cells to better understand how our body’s immune system responds to infectious disease.” More recently, Batista’s research has advanced preclinical vaccine and therapeutic development for globally important diseases including HIV, malaria, and influenza.
Batista earned a PhD from the International School of Advanced Studies and established his lab in 2002 as a member of the Francis Crick Institute (formerly the London Research Institute), simultaneously holding a professorship at Imperial College London. In 2016, he joined the Ragon Institute to pursue a new research program applying his expertise in B cells and antibody responses to vaccine development and preclinical vaccinology for diseases including SARS-CoV-2 and HIV. Batista is an elected fellow or member of the U.K. Academy of Medical Sciences, the American Academy of Microbiology, the Academia de Ciencias de América Latina, and the European Molecular Biology Organization, and he is chief editor of The EMBO Journal.
Dina Katabi SM ’99, PhD ’03 is the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science at MIT. Her research spans digital health, wireless sensing, mobile computing, machine learning, and computer vision. Katabi’s contributions include efficient communication protocols for the internet, advanced contactless biosensors, and novel AI models that interpret physiological signals. The NAM recognized Katabi for “pioneering digital health technology that enables non-invasive, off-body remote health monitoring via AI and wireless signals, and for developing digital biomarkers for Parkinson’s progression and detection. She has translated this technology to advance objective, sensitive measures of disease trajectory and treatment response in clinical trials.”
Katabi is director of the MIT Center for Wireless Networks and Mobile Computing. She is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), where she leads the Networks at MIT Research Group. Katabi received a bachelor’s degree from the University of Damascus and MS and PhD degrees in computer science from MIT. She is a MacArthur Fellow; a member of the American Academy of Arts and Sciences, National Academy of Sciences, and National Academy of Engineering; and a recipient of the ACM Prize in Computing.
Additional MIT alumni who were elected to the NAM for 2025 are:
Established originally as the Institute of Medicine in 1970 by the National Academy of Sciences, the National Academy of Medicine addresses critical issues in health, science, medicine, and related policy, and inspires positive actions across sectors.
“I am deeply honored to welcome these extraordinary health and medicine leaders and researchers into the National Academy of Medicine,” says NAM President Victor J. Dzau. “Their demonstrated excellence in tackling public health challenges, leading major discoveries, improving health care, advancing health policy, and addressing health equity will critically strengthen our collective ability to tackle the most pressing health challenges of our time.”
Neural activity helps circuit connections mature into optimal signal transmitters

Scientists identified how circuit connections in fruit flies tune to the right size and degree of signal transmission capability. Understanding this could lead to a way to tweak abnormal signal transmission in certain disorders.

Nervous system functions, from motion to perception to cognition, depend on the active zones of neural circuit connections, or “synapses,” sending out the right amount of their chemical signals at the right times. By tracking how synaptic active zones form and mature in fruit flies, researchers at The Picower Institute for Learning and Memory at MIT have revealed a fundamental model for how neural activity during development builds properly working connections.
Understanding how that happens is important, not only for advancing fundamental knowledge about how nervous systems develop, but also because many disorders, such as epilepsy, autism, and intellectual disability, can arise from aberrations of synaptic transmission, says senior author Troy Littleton, the Menicon Professor in The Picower Institute and MIT’s Department of Biology. The new findings, funded in part by a 2021 grant from the National Institutes of Health, provide insights into how active zones develop the ability to send neurotransmitters across synapses to their circuit targets. It’s not instant or predestined, the study shows. It can take days to fully mature, and that maturation is regulated by neural activity.
If scientists can fully understand the process, Littleton says, then they can develop molecular strategies to intervene to tweak synaptic transmission when it’s happening too much or too little in disease.
“We’d like to have the levers to push to make synapses stronger or weaker, that’s for sure,” Littleton says. “And so knowing the full range of levers we can tug on to potentially change output would be exciting.”
Littleton Lab research scientist Yuliya Akbergenova led the study published Oct. 14 in the Journal of Neuroscience.
How newborn synapses grow up
In the study, the researchers examined neurons that send the neurotransmitter glutamate across synapses to control muscles in the fly larvae. To study how the active zones in the animals matured, the scientists needed to keep track of their age. That hasn’t been possible before, but Akbergenova overcame the barrier by cleverly engineering the fluorescent protein mMaple, which changes its glow from green to red when zapped with 15 seconds of ultraviolet light, into a component of the glutamate receptors on the receiving side of the synapse. Then, whenever she wanted, she could shine light and all the synapses already formed before that time would glow red, and any new ones that formed subsequently would glow green.
With the ability to track each active zone’s birthday, the authors could then document how active zones developed their ability to increase output over the course of days after birth. The researchers actually watched as synapses were built over many hours by tagging each of eight kinds of proteins that make up an active zone. At first, the active zones couldn’t transmit anything. Then, as some essential early proteins accumulated, they could send out glutamate spontaneously, but not if evoked by electrical stimulation of their host neuron (simulating how that neuron might be signaled naturally in a circuit). Only after several more proteins arrived did active zones possess the mature structure for calcium ions to trigger the fusion of glutamate vesicles to the cell membrane for evoked release across the synapse.
Activity matters
Of course, construction does not go on forever. At some point, the fly larva stops building one synapse and then builds new ones further down the line as the neuronal axon expands to keep up with growing muscles. The researchers wondered whether neural activity had a role in driving that process of finishing up one active zone and moving on to build the next.
To find out, they employed two different interventions to block active zones from being able to release glutamate, thereby preventing synaptic activity. Notably, one of the methods they chose was blocking the action of a protein called Synaptotagmin 1. That’s important because mutations that disrupt the protein in humans are associated with severe intellectual disability and autism. Moreover, the researchers tailored the activity-blocking interventions to just one neuron in each larva because blocking activity in all their neurons would have proved lethal.
In neurons where the researchers blocked activity, they observed two consequences: the neurons stopped building new active zones and instead kept making existing active zones larger and larger. It was as if the neuron could tell the active zone wasn’t releasing glutamate and tried to make it work by giving it more protein material to work with. That effort came at the expense of starting construction on new active zones.
“I think that what it’s trying to do is compensate for the loss of activity,” Littleton says.
Testing indicated that the enlarged active zones the neurons built in hopes of restarting activity were functional (or would have been if the researchers weren’t artificially blocking them). This suggested that the neuron’s cue that glutamate wasn’t being released was likely a feedback signal from the muscle side of the synapse. To test that, the scientists knocked out a glutamate receptor component in the muscle, and when they did, they found that the neurons no longer made their active zones larger.
Littleton says the lab is already looking into the new questions the discoveries raise. In particular: What are the molecular pathways that initiate synapse formation in the first place, and what are the signals that tell an active zone it has finished growing? Finding those answers will bring researchers closer to understanding how to intervene when synaptic active zones aren’t developing properly.
In addition to Littleton and Akbergenova, the paper’s other authors are Jessica Matthias and Sofya Makeyeva.
In addition to the National Institutes of Health, The Freedom Together Foundation provided funding for the study.