Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.
These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, until now there has been no way to determine how these models make their predictions or which protein features play the most important role in those decisions.
In a new study, MIT researchers have used a novel technique to open up that “black box” and allow them to determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.
“Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”
Onkar Gujral, an MIT graduate student, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.
Opening the black box
In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models that accelerated the development of AlphaFold, such as ESM2 and OmegaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.
Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.
In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.
However, in all of these studies, it has been impossible to know how the models were making their predictions.
“We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.
In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.
The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.
Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.
When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.
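To make this concrete, here is a minimal sparse-autoencoder sketch in PyTorch. The 480- and 20,000-node sizes come from the article; the single-layer encoder and decoder, the L1 sparsity penalty, and all hyperparameter values are illustrative assumptions, not the study’s implementation.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Expand a dense 480-dim protein representation into a much larger,
    mostly-zero 20,000-dim representation, then reconstruct the input."""
    def __init__(self, d_model: int = 480, d_hidden: int = 20_000):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        z = torch.relu(self.encoder(x))  # sparse feature activations
        return self.decoder(z), z

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
l1_weight = 1e-3  # strength of the sparsity pressure (illustrative value)

def training_step(batch: torch.Tensor) -> float:
    """batch: (n_proteins, 480) embeddings from a protein language model."""
    recon, z = model(batch)
    # The reconstruction loss preserves the information; the L1 term
    # incentivizes each feature to occupy only a few of the 20,000 nodes.
    loss = nn.functional.mse_loss(recon, batch) + l1_weight * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

After training, so few nodes light up for any given protein that each node can be matched against annotations such as protein family or function.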
“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it's hard to interpret the neurons.”
Interpretable models
Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name) to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.
By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”
This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.
“When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.
Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.
“At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.
The research was funded by the National Institutes of Health.
Planets without water could still produce certain liquids, a new study finds

Lab experiments show “ionic liquids” can form through common planetary processes and might be capable of supporting life even on waterless planets.

Water is essential for life on Earth. So, the thinking goes, liquid water must be a requirement for life on other worlds. For decades, scientists’ definition of habitability on other planets has rested on this assumption.
But what makes some planets habitable might have very little to do with water. In fact, an entirely different type of liquid could conceivably support life in worlds where water can barely exist. That’s a possibility that MIT scientists raise in a study appearing this week in the Proceedings of the National Academy of Sciences.
From lab experiments, the researchers found that a type of fluid known as an ionic liquid can readily form from chemical ingredients that are also expected to be found on the surface of some rocky planets and moons. Ionic liquids are salts that exist in liquid form below about 100 degrees Celsius. The team’s experiments showed that a mixture of sulfuric acid and certain nitrogen-containing organic compounds produced such a liquid. On rocky planets, sulfuric acid may be a byproduct of volcanic activity, while nitrogen-containing compounds have been detected on several asteroids and planets in our solar system, suggesting the compounds may be present in other planetary systems.
The scientists propose that, even on planets that are too warm or whose atmospheres are too low-pressure to support liquid water, there could still be pockets of ionic liquid. And where there is liquid, there may be potential for life, though likely not anything that resembles Earth’s water-based beings.
Ionic liquids have extremely low vapor pressure and do not evaporate; they can form and persist at higher temperatures and lower pressures than what liquid water can tolerate. The researchers note that ionic liquid can be a hospitable environment for some biomolecules, such as certain proteins that can remain stable in the fluid.
“We consider water to be required for life because that is what’s needed for Earth life. But if we look at a more general definition, we see that what we need is a liquid in which metabolism for life can take place,” says Rachana Agrawal, who led the study as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Now if we include ionic liquid as a possibility, this can dramatically increase the habitability zone for all rocky worlds.”
The study’s MIT co-authors are Sara Seager, the Class of 1941 Professor of Planetary Sciences in the Department of Earth, Atmospheric and Planetary Sciences and a professor in the departments of Physics and of Aeronautics and Astronautics, along with Iaroslav Iakubivskyi, Weston Buchanan, Ana Glidden, and Jingcheng Huang. Co-authors also include Maxwell Seager of Worcester Polytechnic Institute, William Bains of Cardiff University, and Janusz Petkowski of Wroclaw University of Science and Technology, in Poland.
A liquid leap
The team’s work with ionic liquid grew out of an effort to search for signs of life on Venus, where clouds of sulfuric acid envelop the planet in a noxious haze. Despite its toxicity, Venus’ clouds may contain signs of life — a notion that scientists plan to test with upcoming missions to the planet’s atmosphere.
Agrawal and Seager, who is leading the Morning Star Missions to Venus, were investigating ways to collect and evaporate sulfuric acid. If a mission collects samples from Venus’ clouds, sulfuric acid would have to be evaporated away in order to reveal any residual organic compounds that could then be analyzed for signs of life.
The researchers were using their custom low-pressure system, designed to evaporate away excess sulfuric acid, to test evaporation of a solution of the acid and an organic compound, glycine. They found that in every case, while most of the liquid sulfuric acid evaporated, a stubborn layer of liquid always remained. They soon realized that sulfuric acid was chemically reacting with glycine, resulting in an exchange of hydrogen atoms from the acid to the organic compound. The result was a fluid mixture of salts, or ions, known as an ionic liquid, which persists as a liquid across a wide range of temperatures and pressures.
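As a concrete illustration of the reaction with glycine, the hydrogen (proton) transfer can be written as a textbook acid-base step; the exact speciation in the experiments may be more complex:

```latex
\mathrm{H_2SO_4} + \mathrm{H_2N{-}CH_2{-}COOH}
  \;\longrightarrow\;
\mathrm{HSO_4^{-}} + \mathrm{{}^{+}H_3N{-}CH_2{-}COOH}
```

The resulting pair of ions, a protonated glycinium cation and a hydrogensulfate anion, is the salt mixture that stays liquid.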
This accidental finding kickstarted an idea: Could ionic liquid form on planets that are too warm and host atmospheres too thin for water to exist?
“From there, we took the leap of imagination of what this could mean,” Agrawal says. “Sulfuric acid is found on Earth from volcanoes, and organic compounds have been found on asteroids and other planetary bodies. So, this led us to wonder if ionic liquids could potentially form and exist naturally on exoplanets.”
Rocky oases
On Earth, ionic liquids are mainly synthesized for industrial purposes. They do not occur naturally, except in one specific case, in which the liquid is generated from the mixing of venoms produced by two rival species of ants.
The team set out to investigate the conditions under which ionic liquid could form naturally, and over what range of temperatures and pressures. In the lab, they mixed sulfuric acid with various nitrogen-containing organic compounds. In previous work, Seager’s team had found that the compounds, some of which can be considered ingredients associated with life, are surprisingly stable in sulfuric acid.
“In high school, you learn that an acid wants to donate a proton,” Seager says. “And oddly enough, we knew from our past work with sulfuric acid (the main component of Venus’ clouds) and nitrogen-containing compounds, that a nitrogen wants to receive a hydrogen. It’s like one person’s trash is another person’s treasure.”
The reaction could produce a bit of ionic liquid if the sulfuric acid and nitrogen-containing organics were in a one-to-one ratio — a ratio that was not a focus of the prior work. For their new study, Seager and Agrawal mixed sulfuric acid with over 30 different nitrogen-containing organic compounds, across a range of temperatures and pressures, then observed whether ionic liquid formed when they evaporated away the sulfuric acid in various vials. They also mixed the ingredients onto basalt rocks, which are known to exist on the surface of many rocky planets.
The team found that the reactions produced ionic liquid at temperatures up to 180 degrees Celsius and at extremely low pressures — much lower than that of the Earth’s atmosphere. Their results suggest that ionic liquid could naturally form on other planets where liquid water cannot exist, under the right conditions.
“We were just astonished that the ionic liquid forms under so many different conditions,” Seager says. “If you put the sulfuric acid and the organic on a rock, the excess sulfuric acid seeps into the rock pores, but you’re still left with a drop of ionic liquid on the rock. Whatever we tried, ionic liquid still formed.”
“We’re envisioning a planet warmer than Earth, that doesn’t have water, and at some point in its past or currently, it has to have had sulfuric acid, formed from volcanic outgassing,” Seager says. “This sulfuric acid has to flow over a little pocket of organics. And organic deposits are extremely common in the solar system.”
Then, she says, the resulting pockets of liquid could stay on the planet’s surface, potentially for years or millennia, where they could theoretically serve as small oases for simple forms of ionic-liquid-based life. Going forward, Seager’s team plans to investigate what biomolecules and ingredients for life might survive, and even thrive, in ionic liquid.
“We just opened up a Pandora’s box of new research,” Seager says. “It’s been a real journey.”
This research was supported, in part, by the Sloan Foundation and the Volkswagen Foundation.
AI helps chemists develop tougher plastics

Researchers created polymers that are more resistant to tearing by incorporating stress-responsive molecules identified by a machine-learning model.

A new strategy for strengthening polymer materials could lead to more durable plastics and cut down on plastic waste, according to researchers at MIT and Duke University.
Using machine learning, the researchers identified crosslinker molecules that can be added to polymer materials, allowing them to withstand more force before tearing. These crosslinkers belong to a class of molecules known as mechanophores, which change their shape or other properties in response to mechanical force.
“These molecules can be useful for making polymers that would be stronger in response to force. You apply some stress to them, and rather than cracking or breaking, you instead see something that has higher resilience,” says Heather Kulik, the Lammot du Pont Professor of Chemical Engineering at MIT, who is also a professor of chemistry and the senior author of the study.
The crosslinkers that the researchers identified in this study are iron-containing compounds known as ferrocenes, which until now had not been broadly explored for their potential as mechanophores. Experimentally evaluating a single mechanophore can take weeks, but the researchers showed that they could use a machine-learning model to dramatically speed up this process.
MIT postdoc Ilia Kevlishvili is the lead author of the open-access paper, which appeared Friday in ACS Central Science. Other authors include Jafer Vakil, a Duke graduate student; David Kastner and Xiao Huang, both MIT graduate students; and Stephen Craig, a professor of chemistry at Duke.
The weakest link
Mechanophores are molecules that respond to force in unique ways, typically by changing their color, structure, or other properties. In the new study, the MIT and Duke team wanted to investigate whether they could be used to help make polymers more resilient to damage.
The new work builds on a 2023 study from Craig and Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry at MIT, and their colleagues. In that work, the researchers found that, surprisingly, incorporating weak crosslinkers into a polymer network can make the overall material stronger. When materials with these weak crosslinkers are stretched to the breaking point, any cracks propagating through the material try to avoid the stronger bonds and go through the weaker bonds instead. This means the crack has to break more bonds than it would if all of the bonds were the same strength.
To find new ways to exploit that phenomenon, Craig and Kulik joined forces to try to identify mechanophores that could be used as weak crosslinkers.
“We had this new mechanistic insight and opportunity, but it came with a big challenge: Of all possible compositions of matter, how do we zero in on the ones with the greatest potential?” Craig says. “Full credit to Heather and Ilia for both identifying this challenge and devising an approach to meet it.”
Discovering and characterizing mechanophores is a difficult task that requires either time-consuming experiments or computationally intense simulations of molecular interactions. Most of the known mechanophores are organic compounds, such as cyclobutane, which was used as a crosslinker in the 2023 study.
In the new study, the researchers wanted to focus on molecules known as ferrocenes, which are believed to hold potential as mechanophores. Ferrocenes are organometallic compounds that have an iron atom sandwiched between two carbon-containing rings. Those rings can have different chemical groups added to them, which alter their chemical and mechanical properties.
Many ferrocenes are used as pharmaceuticals or catalysts, and a handful are known to be good mechanophores, but most have not been evaluated for that use. Experimental tests on a single potential mechanophore can take several weeks, and computational simulations, while faster, still take a couple of days. Evaluating thousands of candidates using these strategies is a daunting task.
Realizing that a machine-learning approach could dramatically speed up the characterization of these molecules, the MIT and Duke team decided to use a neural network to identify ferrocenes that could be promising mechanophores.
They began with information from a database known as the Cambridge Structural Database, which contains the structures of 5,000 different ferrocenes that have already been synthesized.
“We knew that we didn’t have to worry about the question of synthesizability, at least from the perspective of the mechanophore itself. This allowed us to pick a really large space to explore with a lot of chemical diversity, that also would be synthetically realizable,” Kevlishvili says.
First, the researchers performed computational simulations for about 400 of these compounds, allowing them to calculate how much force is necessary to pull atoms apart within each molecule. For this application, they were looking for molecules that would break apart quickly, as these weak links could make polymer materials more resistant to tearing.
Then they used this data, along with information on the structure of each compound, to train a machine-learning model. This model was able to predict the force needed to activate the mechanophore, which in turn influences resistance to tearing, for the remaining 4,500 compounds in the database, plus an additional 7,000 compounds that are similar to those in the database but have some atoms rearranged.
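The screening loop described here (simulate a few hundred compounds, train a model on them, predict the rest) can be sketched as follows. The random stand-in features, the network size, and the force values are placeholders for illustration; only the overall workflow and the rough compound counts come from the article.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in data: in the real workflow, each row would be a numeric
# featurization of one ferrocene from the Cambridge Structural Database,
# and y would be the simulated force needed to activate it. The values
# below are random placeholders, not real chemistry.
X_simulated = rng.normal(size=(400, 64))      # ~400 expensively simulated compounds
y_simulated = rng.normal(loc=2.0, size=400)   # placeholder activation forces

X_pool = rng.normal(size=(11_500, 64))        # remaining candidates to screen

# Train a small neural-network surrogate on the simulated subset...
surrogate = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
surrogate.fit(X_simulated, y_simulated)

# ...then score every remaining candidate in seconds instead of days each.
predicted_force = surrogate.predict(X_pool)

# Weak links toughen the network, so rank by LOW predicted activation force.
ranking = np.argsort(predicted_force)
print(ranking[:10])  # indices of the ten most promising candidates
```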
The researchers discovered two main features that seemed likely to increase tear resistance. One was interactions between the chemical groups that are attached to the ferrocene rings. Additionally, the presence of large, bulky molecules attached to both rings of the ferrocene made the molecule more likely to break apart in response to applied forces.
While the first of these features was not surprising, the second trait was not something a chemist would have predicted beforehand, and could not have been detected without AI, the researchers say. “This was something truly surprising,” Kulik says.
Tougher plastics
Once the researchers identified about 100 promising candidates, Craig’s lab at Duke synthesized a polymer material incorporating one of them, known as m-TMS-Fc. Within the material, m-TMS-Fc acts as a crosslinker, connecting the polymer strands that make up polyacrylate, a type of plastic.
By applying force to each polymer until it tore, the researchers found that the weak m-TMS-Fc linker produced a strong, tear-resistant polymer. This polymer turned out to be about four times tougher than polymers made with standard ferrocene as the crosslinker.
“That really has big implications because if we think of all the plastics that we use and all the plastic waste accumulation, if you make materials tougher, that means their lifetime will be longer. They will be usable for a longer period of time, which could reduce plastic production in the long term,” Kevlishvili says.
The researchers now hope to use their machine-learning approach to identify mechanophores with other desirable properties, such as the ability to change color or become catalytically active in response to force. Such materials could be used as stress sensors or switchable catalysts, and they could also be useful for biomedical applications such as drug delivery.
In those studies, the researchers plan to focus on ferrocenes and other metal-containing mechanophores that have already been synthesized but whose properties are not fully understood.
“Transition metal mechanophores are relatively underexplored, and they’re probably a little bit more challenging to make,” Kulik says. “This computational workflow can be broadly used to enlarge the space of mechanophores that people have studied.”
The research was funded by the National Science Foundation Center for the Chemistry of Molecularly Optimized Networks (MONET).
MIT tool visualizes and edits “physically impossible” objects

By visualizing Escher-like optical illusions in 2.5 dimensions, the “Meschers” tool could help scientists understand physics-defying shapes and spark new designs.

M.C. Escher’s artwork is a gateway into a world of depth-defying optical illusions, featuring “impossible objects” that break the laws of physics with convoluted geometries. What you perceive his illustrations to be depends on your point of view — for example, a person seemingly walking upstairs may be heading down the steps if you tilt your head sideways.
Computer graphics scientists and designers can recreate these illusions in 3D, but only by bending or cutting a real shape and positioning it at a particular angle. This workaround has downsides, though: Changing the smoothness or lighting of the structure will expose that it isn’t actually an optical illusion, which also means you can’t accurately solve geometry problems on it.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a unique approach to represent “impossible” objects in a more versatile way. Their “Meschers” tool converts images and 3D models into 2.5-dimensional structures, creating Escher-like depictions of things like windows, buildings, and even donuts. The approach helps users relight, smooth out, and study unique geometries while preserving their optical illusion.
This tool could assist geometry researchers with calculating the distance between two points on a curved impossible surface (“geodesics”) and simulating how heat dissipates over it (“heat diffusion”). It could also help artists and computer graphics scientists create physics-breaking designs in multiple dimensions.
Lead author and MIT PhD student Ana Dodik aims to design computer graphics tools that aren’t limited to replicating reality, enabling artists to express their intent independently of whether a shape can be realized in the physical world. “Using Meschers, we’ve unlocked a new class of shapes for artists to work with on the computer,” she says. “They could also help perception scientists understand the point at which an object truly becomes impossible.”
Dodik and her colleagues will present their paper at the SIGGRAPH conference in August.
Making impossible objects possible
Impossible objects can’t be fully replicated in 3D. Their constituent parts often look plausible, but these parts don’t glue together properly when assembled in 3D. But what can be computationally imitated, as the CSAIL researchers found out, is the process of how we perceive these shapes.
Take the Penrose Triangle, for instance. The object as a whole is physically impossible because the depths don’t “add up,” but we can recognize real-world 3D shapes (like its three L-shaped corners) within it. These smaller regions can be realized in 3D — a property called “local consistency” — but when we try to assemble them together, they don’t form a globally consistent shape.
The Meschers approach models locally consistent regions without forcing them to be globally consistent, piecing together an Escher-esque structure. Behind the scenes, Meschers represents impossible objects as if we know their x and y coordinates in the image, as well as differences in z coordinates (depth) between neighboring pixels; the tool uses these differences in depth to reason about impossible objects indirectly.
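One way to see how stored depth differences capture impossibility, in a toy sketch that is not Meschers’ actual data structure: treat the image as a graph of pixels with a signed depth difference per edge. Every edge can be locally plausible while the differences fail to sum to zero around a closed loop, which is what makes the shape globally impossible.

```python
# Each node is a 2D image location; dz[(a, b)] is the depth difference
# z_b - z_a between neighboring pixels a and b. A shape realizable in 3D
# satisfies "loop closure": dz summed around any closed loop is zero.
dz = {
    ("A", "B"): 1.0,   # each leg looks locally plausible...
    ("B", "C"): 1.0,
    ("C", "A"): 1.0,   # ...but the loop never closes
}

def loop_residual(loop):
    """Total depth change around a closed loop of nodes.
    Zero => consistent with a single 3D depth map; nonzero => globally
    impossible, even though every edge is locally fine."""
    total = 0.0
    for a, b in zip(loop, loop[1:] + loop[:1]):
        total += dz[(a, b)] if (a, b) in dz else -dz[(b, a)]
    return total

print(loop_residual(["A", "B", "C"]))  # 3.0, not 0: a Penrose-style cycle
```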
The many uses of Meschers
In addition to rendering impossible objects, Meschers can subdivide their structures into smaller shapes for more precise geometry calculations and smoothing operations. This process enabled the researchers to reduce visual imperfections of impossible shapes, such as a red heart outline they thinned out.
The researchers also tested their tool on an “impossibagel,” where a bagel is shaded in a physically impossible way. Meschers helped Dodik and her colleagues simulate heat diffusion and calculate geodesic distances between different points of the model.
“Imagine you’re an ant traversing this bagel, and you want to know how long it’ll take you to get across, for example,” says Dodik. “In the same way, our tool could help mathematicians analyze the underlying geometry of impossible shapes up close, much like how we study real-world ones.”
Much like a magician, the tool can create optical illusions out of otherwise practical objects, making it easier for computer graphics artists to create impossible objects. It can also use “inverse rendering” tools to convert drawings and images of impossible objects into high-dimensional designs.
“Meschers demonstrates how computer graphics tools don’t have to be constrained by the rules of physical reality,” says senior author Justin Solomon, associate professor of electrical engineering and computer science and leader of the CSAIL Geometric Data Processing Group. “Incredibly, artists using Meschers can reason about shapes that we will never find in the real world.”
Meschers can also aid computer graphics artists with tweaking the shading of their creations, while still preserving an optical illusion. This versatility would allow creatives to change the lighting of their art to depict a wider variety of scenes (like a sunrise or sunset) — as Meschers demonstrated by relighting a model of a dog on a skateboard.
Despite its versatility, Meschers is just the start for Dodik and her colleagues. The team is considering designing an interface to make the tool easier to use while building more elaborate scenes. They’re also working with perception scientists to see how the computer graphics tool can be used more broadly.
Dodik and Solomon wrote the paper with CSAIL affiliates Isabella Yu ’24, SM ’25; PhD student Kartik Chandra SM ’23; MIT professors Jonathan Ragan-Kelley and Joshua Tenenbaum; and MIT Assistant Professor Vincent Sitzmann.
Their work was supported, in part, by the MIT Presidential Fellowship, the Mathworks Fellowship, the Hertz Foundation, the U.S. National Science Foundation, the Schmidt Sciences AI2050 fellowship, MIT Quest for Intelligence, the U.S. Army Research Office, the U.S. Air Force Office of Scientific Research, the SystemsThatLearn@CSAIL initiative, Google, the MIT–IBM Watson AI Laboratory, the Toyota–CSAIL Joint Research Center, Adobe Systems, the Singapore Defence Science and Technology Agency, and the U.S. Intelligence Advanced Research Projects Activity.
In the push to shrink and enhance technologies that control light, MIT researchers have unveiled a new platform that pushes the limits of modern optics through nanophotonics, the manipulation of light on the nanoscale, or billionths of a meter.
The result is a class of ultracompact optical devices that are not only smaller and more efficient than existing technologies, but also dynamically tunable, or switchable, from one optical mode to another. Until now, this has been an elusive combination in nanophotonics.
The work is reported in the July 8 issue of Nature Photonics.
“This work marks a significant step toward a future in which nanophotonic devices are not only compact and efficient, but also reprogrammable and adaptive, capable of dynamically responding to external inputs. The marriage of emerging quantum materials and established nanophotonics architectures will surely bring advances to both fields,” says Riccardo Comin, MIT’s Class of 1947 Career Development Associate Professor of Physics and leader of the work. Comin is also affiliated with MIT’s Materials Research Laboratory and Research Laboratory of Electronics (RLE).
Comin’s colleagues on the work are Ahmet Kemal Demir, an MIT graduate student in physics; Luca Nessi, a former MIT postdoc who is now a postdoc at Politecnico di Milano; Sachin Vaidya, a postdoc in RLE; Connor A. Occhialini PhD ’24, who is now a postdoc at Columbia University; and Marin Soljačić, the Cecil and Ida Green Professor of Physics at MIT.
Demir and Nessi are co-first authors of the Nature Photonics paper.
Toward new nanophotonic materials
Nanophotonics has traditionally relied on materials like silicon, silicon nitride, or titanium dioxide. These are the building blocks of devices that guide and confine light using structures such as waveguides, resonators, and photonic crystals. The latter are periodic arrangements of materials that control how light propagates, much like how a semiconductor crystal affects electron motion.
While highly effective, these materials are constrained by two major limitations. The first involves their refractive indices. These are a measure of how strongly a material interacts with light; the higher the refractive index, the more the material “grabs” or interacts with the light, bending it more sharply and slowing it down more. The refractive indices of silicon and other traditional nanophotonic materials are often modest, which limits how tightly light can be confined and how small optical devices can be made.
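For reference, two standard optics relations make this “grabbing” precise (general textbook facts, not results of this study): light travels more slowly inside a material of refractive index n, and bending at an interface follows Snell’s law.

```latex
v = \frac{c}{n}, \qquad n_1 \sin\theta_1 = n_2 \sin\theta_2
```

The larger n is, the slower the light moves and the more sharply it bends, which is what allows high-index materials to confine light in smaller structures.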
A second major limitation of traditional nanophotonic materials: once a structure is fabricated, its optical behavior is essentially fixed. There is usually no way to significantly reconfigure how it responds to light without physically altering it. “Tunability is essential for many next-gen photonics applications, enabling adaptive imaging, precision sensing, reconfigurable light sources, and trainable optical neural networks,” says Vaidya.
Introducing chromium sulfide bromide
These are the longstanding challenges that chromium sulfide bromide (CrSBr) is poised to solve. CrSBr is a layered quantum material with a rare combination of magnetic order and strong optical response. Central to its unique optical properties are excitons: quasiparticles formed when a material absorbs light and an electron is excited, leaving behind a positively charged “hole.” The electron and hole remain bound together by electrostatic attraction, forming a sort of neutral particle that can strongly interact with light.
In CrSBr, excitons dominate the optical response and are highly sensitive to magnetic fields, which means they can be manipulated using external controls.
Because of these excitons, CrSBr exhibits an exceptionally large refractive index that allows researchers to sculpt the material to fabricate optical structures like photonic crystals that are up to an order of magnitude thinner than those made from traditional materials. “We can make optical structures as thin as 6 nanometers, or just seven layers of atoms stacked on top of each other,” says Demir.
And crucially, by applying a modest magnetic field, the MIT researchers were able to continuously and reversibly switch the optical mode. In other words, they demonstrated the ability to dynamically change how light flows through the nanostructure, all without any moving parts or changes in temperature. “This degree of control is enabled by a giant, magnetically induced shift in the refractive index, far beyond what is typically achievable in established photonic materials,” says Demir.
In fact, the interaction between light and excitons in CrSBr is so strong that it leads to the formation of polaritons, hybrid light-matter particles that inherit properties from both components. These polaritons enable new forms of photonic behavior, such as enhanced nonlinearities and new regimes of quantum light transport. And unlike conventional systems that require external optical cavities to reach this regime, CrSBr supports polaritons intrinsically.
While this demonstration uses standalone CrSBr flakes, the material can also be integrated into existing photonic platforms, such as integrated photonic circuits. This makes CrSBr immediately relevant to real-world applications, where it can serve as a tunable layer or component in otherwise passive devices.
The MIT results were achieved at very cold temperatures of up to 132 kelvins (-222 degrees Fahrenheit). Although this is below room temperature, there are compelling use cases, such as quantum simulation, nonlinear optics, and reconfigurable polaritonic platforms, where the unparalleled tunability of CrSBr could justify operation in cryogenic environments.
In other words, says Demir, “CrSBr is so unique with respect to other common materials that even going down to cryogenic temperatures will be worth the trouble, hopefully.”
That said, the team is also exploring related materials with higher magnetic ordering temperatures to enable similar functionality at more accessible conditions.
This work was supported by the U.S. Department of Energy, the U.S. Army Research Office, and a MathWorks Science Fellowship. The work was performed in part at MIT.nano.
How the brain distinguishes oozing fluids from solid objects

A new study finds parts of the brain’s visual cortex are specialized to analyze either solid objects or flowing materials like water or sand.

Imagine a ball bouncing down a flight of stairs. Now think about a cascade of water flowing down those same stairs. The ball and the water behave very differently, and it turns out that your brain has different regions for processing visual information about each type of physical matter.
In a new study, MIT neuroscientists have identified parts of the brain’s visual cortex that respond preferentially when you look at “things” — that is, rigid or deformable objects like a bouncing ball. Other brain regions are more activated when looking at “stuff” — liquids or granular substances such as sand.
This distinction, which has never been seen in the brain before, may help the brain plan how to interact with different kinds of physical materials, the researchers say.
“When you’re looking at some fluid or gooey stuff, you engage with it in a different way than you do with a rigid object. With a rigid object, you might pick it up or grasp it, whereas with fluid or gooey stuff, you probably are going to have to use a tool to deal with it,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience; a member of the McGovern Institute for Brain Research and MIT’s Center for Brains, Minds, and Machines; and the senior author of the study.
MIT postdoc Vivian Paulun, who is joining the faculty of the University of Wisconsin at Madison this fall, is the lead author of the paper, which appears today in the journal Current Biology. RT Pramod, an MIT postdoc, and Josh Tenenbaum, an MIT professor of brain and cognitive sciences, are also authors of the study.
Stuff vs. things
Decades of brain imaging studies, including early work by Kanwisher, have revealed regions in the brain’s ventral visual pathway that are involved in recognizing the shapes of 3D objects, including an area called the lateral occipital complex (LOC). A region in the brain’s dorsal visual pathway, known as the frontoparietal physics network (FPN), analyzes the physical properties of materials, such as mass or stability.
Although scientists have learned a great deal about how these pathways respond to different features of objects, the vast majority of these studies have been done with solid objects, or “things.”
“Nobody has asked how we perceive what we call ‘stuff’ — that is, liquids or sand, honey, water, all sorts of gooey things. And so we decided to study that,” Paulun says.
These gooey materials behave very differently from solids. They flow rather than bounce, and interacting with them usually requires containers and tools such as spoons. The researchers wondered if these physical features might require the brain to devote specialized regions to interpreting them.
To explore how the brain processes these materials, Paulun used a software program designed for visual effects artists to create more than 100 video clips showing different types of things or stuff interacting with the physical environment. In these videos, the materials could be seen sloshing or tumbling inside a transparent box, being dropped onto another object, or bouncing or flowing down a set of stairs.
The researchers used functional magnetic resonance imaging (fMRI) to scan the visual cortex of people as they watched the videos. They found that both the LOC and the FPN respond to “things” and “stuff,” but that each pathway has distinctive subregions that respond more strongly to one or the other.
“Both the ventral and the dorsal visual pathway seem to have this subdivision, with one part responding more strongly to ‘things,’ and the other responding more strongly to ‘stuff,’” Paulun says. “We haven’t seen this before because nobody has asked that before.”
Roland Fleming, a professor of experimental psychology at Justus Liebig University Giessen, described the findings as a “major breakthrough in the scientific understanding of how our brains represent the physical properties of our surrounding world.”
“We’ve known the distinction exists for a long time psychologically, but this is the first time that it’s been really mapped onto separate cortical structures in the brain. Now we can investigate the different computations that the distinct brain regions use to process and represent objects and materials,” says Fleming, who was not involved in the study.
Physical interactions
The findings suggest that the brain may have different ways of representing these two categories of material, similar to the artificial physics engines that are used to create video game graphics. These engines usually represent a 3D object as a mesh, while fluids are represented as sets of particles that can be rearranged.
“The interesting hypothesis that we can draw from this is that maybe the brain, similar to artificial game engines, has separate computations for representing and simulating ‘stuff’ and ‘things.’ And that would be something to test in the future,” Paulun says.
The researchers also hypothesize that these regions may have developed to help the brain understand important distinctions that allow it to plan how to interact with the physical world. To further explore this possibility, the researchers plan to study whether the areas involved in processing rigid objects are also active when a brain circuit involved in planning to grasp objects is active.
They also hope to look at whether any of the areas within the FPN correlate with the processing of more specific features of materials, such as the viscosity of liquids or the bounciness of objects. And in the LOC, they plan to study how the brain represents changes in the shape of fluids and deformable substances.
The research was funded by the German Research Foundation, the U.S. National Institutes of Health, and a U.S. National Science Foundation grant to the Center for Brains, Minds, and Machines.
Mapping cells in time and space: New tool reveals a detailed history of tumor growth

Researchers developed a tool to recreate cells’ family trees. Comparing cells’ lineages and locations within a tumor provided insights into factors shaping tumor growth.

All life is connected in a vast family tree. Every organism exists in relationship to its ancestors, descendants, and cousins, and the path between any two individuals can be traced. The same is true of cells within organisms — each of the trillions of cells in the human body is produced through successive divisions from a fertilized egg, and can all be related to one another through a cellular family tree. In simpler organisms, such as the worm C. elegans, this cellular family tree has been fully mapped, but the cellular family tree of a human is many times larger and more complex.
In the past, MIT professor and Whitehead Institute for Biomedical Research member Jonathan Weissman and other researchers developed lineage tracing methods to track and reconstruct the family trees of cell divisions in model organisms in order to understand more about the relationships between cells and how they assemble into tissues, organs, and — in some cases — tumors. These methods could help to answer many questions about how organisms develop and diseases like cancer are initiated and progress.
Now, Weissman and colleagues have developed an advanced lineage tracing tool that not only captures an accurate family tree of cell divisions, but also combines that with spatial information: identifying where each cell ends up within a tissue. The researchers used their tool, PEtracer, to observe the growth of metastatic tumors in mice. Combining lineage tracing and spatial data provided the researchers with a detailed view of how elements intrinsic to the cancer cells and from their environments influenced tumor growth, as Weissman and postdocs in his lab Luke Koblan, Kathryn Yost, and Pu Zheng, and graduate student William Colgan share in a paper published in the journal Science on July 24.
“Developing this tool required combining diverse skill sets through the sort of ambitious interdisciplinary collaboration that’s only possible at a place like Whitehead Institute,” says Weissman, who is also a Howard Hughes Medical Institute investigator. “Luke came in with an expertise in genetic engineering, Pu in imaging, Katie in cancer biology, and William in computation, but the real key to their success was their ability to work together to build PEtracer.”
“Understanding how cells move in time and space is an important way to look at biology, and here we were able to see both of those things in high resolution. The idea is that by understanding both a cell’s past and where it ends up, you can see how different factors throughout its life influenced its behaviors. In this study, we use these approaches to look at tumor growth, though in principle we can now begin to apply these tools to study other biology of interest, like embryonic development,” Koblan says.
Designing a tool to track cells in space and time
PEtracer tracks cells’ lineages by repeatedly adding short, predetermined codes to the DNA of cells over time. Each piece of code, called a lineage tracing mark, is made up of five bases, the building blocks of DNA. These marks are inserted using a gene editing technology called prime editing, which directly rewrites stretches of DNA with minimal undesired byproducts. Over time, each cell acquires more lineage tracing marks, while also maintaining the marks of its ancestors. The researchers can then compare cells’ combinations of marks to figure out relationships and reconstruct the family tree.
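The reconstruction logic can be shown with a toy model: because marks are added over time and inherited, two cells’ relatedness can be estimated from how many early marks they share. The five-base marks match the article’s description; the prefix-comparison heuristic and the example sequences below are deliberate simplifications for illustration, not the PEtracer algorithm.

```python
def shared_prefix_len(marks_a, marks_b):
    """Number of leading lineage marks two cells share; more shared
    ancestral marks implies a more recent common ancestor."""
    n = 0
    for m1, m2 in zip(marks_a, marks_b):
        if m1 != m2:
            break
        n += 1
    return n

# Each mark is a predetermined 5-base insertion written by prime editing.
# A cell keeps its ancestors' marks and appends new ones over time.
cell1 = ["ACGTA", "TTGCA", "GGATC"]  # three rounds of editing
cell2 = ["ACGTA", "TTGCA", "CAGTT"]  # diverged from cell1 at round 3
cell3 = ["ACGTA", "GCCAT", "AAGTC"]  # diverged earlier, at round 2

print(shared_prefix_len(cell1, cell2))  # 2 -> close relatives
print(shared_prefix_len(cell1, cell3))  # 1 -> more distant relatives
```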
“We used computational modeling to design the tool from first principles, to make sure that it was highly accurate, and compatible with imaging technology. We ran many simulations to land on the optimal parameters for a new lineage tracing tool, and then engineered our system to fit those parameters,” Colgan says.
When the tissue — in this case, a tumor growing in the lung of a mouse — had sufficiently grown, the researchers collected these tissues and used advanced imaging approaches to look at each cell’s lineage relationship to other cells via the lineage tracing marks, along with its spatial position within the imaged tissue and its identity (as determined by the levels of different RNAs expressed in each cell). PEtracer is compatible with both imaging approaches and sequencing methods that capture genetic information from single cells.
“Making it possible to collect and analyze all of this data from the imaging was a large challenge,” Zheng says. “What’s particularly exciting to me is not just that we were able to collect terabytes of data, but that we designed the project to collect data that we knew we could use to answer important questions and drive biological discovery.”
Reconstructing the history of a tumor
Combining the lineage tracing, gene expression, and spatial data let the researchers understand how the tumor grew. They could tell how closely related neighboring cells are and compare their traits. Using this approach, the researchers found that the tumors they were analyzing were made up of four distinct modules, or neighborhoods, of cells.
The tumor cells closest to the lung, the most nutrient-dense region, were the most fit, meaning their lineage history indicated the highest rate of cell division over time. Fitness in cancer cells tends to correlate to how aggressively tumors will grow.
The cells at the “leading edge” of the tumor, the far side from the lung, were more diverse and not as fit. Below the leading edge was a low-oxygen neighborhood of cells that might once have been leading edge cells, now trapped in a less-desirable spot. Between these cells and the lung-adjacent cells was the tumor core, a region with both living and dead cells, as well as cellular debris.
The researchers found that cancer cells across the family tree were equally likely to end up in most of the regions, with the exception of the lung-adjacent region, where a few branches of the family tree dominated. This suggests that the cancer cells’ differing traits were heavily influenced by their environments, or the conditions in their local neighborhoods, rather than their family history. Further evidence of this point was that expression of certain fitness-related genes, such as Fgf1/Fgfbp1, correlated to a cell’s location, rather than its ancestry. However, lung-adjacent cells also had inherited traits that gave them an edge, including expression of the fitness-related gene Cldn4 — showing that family history influenced outcomes as well.
These findings demonstrate how cancer growth is influenced both by factors intrinsic to certain lineages of cancer cells and by environmental factors that shape the behavior of cancer cells exposed to them.
“By looking at so many dimensions of the tumor in concert, we could gain insights that would not have been possible with a more limited view,” Yost says. “Being able to characterize different populations of cells within a tumor will enable researchers to develop therapies that target the most aggressive populations more effectively.”
“Now that we’ve done the hard work of designing the tool, we’re excited to apply it to look at all sorts of questions in health and disease, in embryonic development, and across other model species, with an eye toward understanding important problems in human health,” Koblan says. “The data we collect will also be useful for training AI models of cellular behavior. We’re excited to share this technology with other researchers and see what we all can discover.”
Staff members honored with 2025 Excellence Awards, Collier Medal, and Staff Award for Distinction in Service

The MIT community celebrates its fellow staff members’ talent and dedication to the Institute.

On Thursday, June 5, 11 individuals and four teams were awarded MIT Excellence Awards — the highest awards for staff at the Institute. Cheers from colleagues holding brightly colored signs and pompoms rang out in Kresge Auditorium in celebration of the honorees. In addition to the Excellence Awards, staff members received the Collier Medal, the Staff Award for Distinction in Service, and the Gordon Y. Billard Award.
The Collier Medal honors the memory of Officer Sean Collier, who gave his life protecting and serving MIT. The medal recognizes an individual or group whose actions demonstrate the importance of community, and whose contributions exceed the boundaries of their profession. The Staff Award for Distinction in Service is presented to an individual whose service results in a positive, lasting impact on the MIT community. The Gordon Y. Billard Award is given to staff or faculty members, or MIT-affiliated individuals, who provide "special service of outstanding merit performed for the Institute."
The 2025 MIT Excellence Awards were presented in the following categories; the full list of recipients is available on the MIT Human Resources website:
Bringing Out the Best
Embracing Inclusion
Innovative Solutions
Outstanding Contributor
Serving Our Community
The 2025 Collier Medal recipient was Kathleen Monagle, associate dean and director of disability and access services, student support, and wellbeing in the Division of Student Life. Monagle oversees a team that supports almost 600 undergraduate, graduate, and MITx students with more than 4,000 accommodations. She works with faculty to ensure those students have the best possible learning experience — both in MIT’s classrooms and online.
The recipient of the 2025 Staff Award for Distinction in Service was Stu Schmill, dean of admissions and student financial services in the Office of the Vice Chancellor. Schmill graduated from MIT in 1986 and has since served the Institute in a variety of roles. His colleagues admire his passion for sharing knowledge; his insight and integrity; and his deep love for MIT’s culture, values, and people.
Three community members were honored with a 2025 Gordon Y. Billard Award.
William "Bill" Cormier, project technician, Department of Mechanical Engineering, School of Engineering
John E. Fernández, professor, Department of Architecture, School of Architecture and Planning; and director of MIT Environmental Solutions Initiative, Office of the Vice President for Research
Tony Lee, coach, MIT Women's Volleyball Club, Student Organizations, Leadership, and Engagement, Division of Student Life
Presenters included President Sally Kornbluth; MIT Chief of Police John DiFava and Deputy Chief Steven DeMarco; Dean of the School of Science Nergis Mavalvala; Vice President for Human Resources Ramona Allen; Executive Vice President and Treasurer Glen Shor; Lincoln Laboratory Assistant Director Justin Brooke; Chancellor Melissa Nobles; and Provost Anantha Chandrakasan.
Visit the MIT Human Resources website for more information about the award recipients and categories, and to view photos and video of the event.
MIT physicists have performed an idealized version of one of the most famous experiments in quantum physics. Their findings demonstrate, with atomic-level precision, the dual yet evasive nature of light. They also happen to confirm that Albert Einstein was wrong about this particular quantum scenario.
The experiment in question is the double-slit experiment, which was first performed in 1801 by the British scholar Thomas Young to show how light behaves as a wave. Today, with the formulation of quantum mechanics, the double-slit experiment is known for its surprisingly simple demonstration of a head-scratching reality: that light exists as both a particle and a wave. Stranger still, this duality cannot be simultaneously observed. Seeing light in the form of particles instantly obscures its wave-like nature, and vice versa.
The original experiment involved shining a beam of light through two parallel slits in a screen and observing the pattern that formed on a second, faraway screen. One might expect to see two overlapping spots of light, which would imply that light exists as particles, a.k.a. photons, like paintballs that follow a direct path. But instead, the light produces alternating bright and dark stripes on the screen, in an interference pattern similar to what happens when two ripples in a pond meet. This suggests light behaves as a wave. Even weirder, when one tries to measure which slit the light is traveling through, the light suddenly behaves as particles and the interference pattern disappears.
The double-slit experiment is taught today in most high school physics classes as a simple way to illustrate the fundamental principle of quantum mechanics: that all physical objects, including light, are simultaneously particles and waves.
Nearly a century ago, the experiment was at the center of a friendly debate between physicists Albert Einstein and Niels Bohr. In 1927, Einstein argued that a photon particle should pass through just one of the two slits and in the process generate a slight force on that slit, like a bird rustling a leaf as it flies by. He proposed that one could detect such a force while also observing an interference pattern, thereby catching light’s particle and wave nature at the same time. In response, Bohr applied the quantum mechanical uncertainty principle and showed that the detection of the photon’s path would wash out the interference pattern.
Scientists have since carried out multiple versions of the double-slit experiment, and they have all, to various degrees, confirmed the validity of the quantum theory formulated by Bohr. Now, MIT physicists have performed the most “idealized” version of the double-slit experiment to date. Their version strips down the experiment to its quantum essentials. They used individual atoms as slits, and used weak beams of light so that each atom scattered at most one photon. By preparing the atoms in different quantum states, they were able to modify what information the atoms obtained about the path of the photons. The researchers thus confirmed the predictions of quantum theory: The more information was obtained about the path (i.e. the particle nature) of light, the lower the visibility of the interference pattern was.
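Quantitatively, this tradeoff is captured by the standard wave-particle duality inequality (a textbook relation, not a result unique to this paper), where V is the visibility of the interference fringes and D is the which-path distinguishability:

```latex
V^2 + D^2 \le 1
```

Complete path information (D = 1) forces V = 0, wiping out the interference pattern, which is exactly the behavior the team observed.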
They demonstrated what Einstein got wrong. Whenever an atom is “rustled” by a passing photon, the wave interference is diminished.
“Einstein and Bohr would have never thought that this is possible, to perform such an experiment with single atoms and single photons,” says Wolfgang Ketterle, the John D. MacArthur Professor of Physics and leader of the MIT team. “What we have done is an idealized Gedanken experiment.”
Their results appear in the journal Physical Review Letters. Ketterle’s MIT co-authors include first author Vitaly Fedoseev, Hanzhen Lin, Yu-Kun Lu, Yoo Kyung Lee, and Jiahao Lyu, who all are affiliated with MIT’s Department of Physics, the Research Laboratory of Electronics, and the MIT-Harvard Center for Ultracold Atoms.
Cold confinement
Ketterle’s group at MIT experiments with atoms and molecules that they super-cool to temperatures just above absolute zero and arrange in configurations that they confine with laser light. Within these ultracold, carefully tuned clouds, exotic phenomena that only occur at the quantum, single-atom scale can emerge.
In a recent experiment, the team was investigating a seemingly unrelated question, studying how light scattering can reveal the properties of materials built from ultracold atoms.
“We realized we can quantify the degree to which this scattering process is like a particle or a wave, and we quickly realized we can apply this new method to realize this famous experiment in a very idealized way,” Fedoseev says.
In their new study, the team worked with more than 10,000 atoms, which they cooled to microkelvin temperatures. They used an array of laser beams to arrange the frozen atoms into an evenly spaced, crystal-like lattice configuration. In this arrangement, each atom is far enough away from any other atom that each can effectively be considered a single, isolated and identical atom. And 10,000 such atoms can produce a signal that is more easily detected, compared to a single atom or two.
The group reasoned that with this arrangement, they might shine a weak beam of light through the atoms and observe how a single photon scatters off two adjacent atoms, as a wave or a particle. This would be similar to how, in the original double-slit experiment, light passes through two slits.
“What we have done can be regarded as a new variant to the double-slit experiment,” Ketterle says. “These single atoms are like the smallest slits you could possibly build.”
Tuning fuzz
Working at the level of single photons required repeating the experiment many times and using an ultrasensitive detector to record the pattern of light scattered off the atoms. From the intensity of the detected light, the researchers could directly infer whether the light behaved as a particle or a wave.
They were particularly interested in the situation where half the photons they sent in behaved as waves, and half behaved as particles. They achieved this by using a method to tune the probability that a photon will appear as a wave versus a particle, by adjusting an atom’s “fuzziness,” or the certainty of its location. In their experiment, each of the 10,000 atoms is held in place by laser light that can be adjusted to tighten or loosen the light’s hold. The more loosely an atom is held, the fuzzier, or more “spatially extensive,” it appears. The fuzzier atom rustles more easily and records the path of the photon. Therefore, in tuning up an atom’s fuzziness, researchers can increase the probability that a photon will exhibit particle-like behavior. Their observations were in full agreement with the theoretical description.
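The tradeoff the team measured has a standard quantitative form in quantum optics, which the article itself doesn't spell out. For a single photon, the which-path distinguishability $D$ and the interference-fringe visibility $V$ obey the duality relation

$$ D^2 + V^2 \le 1, $$

with equality for pure quantum states. On this (assumed) reading, the 50/50 setting described above corresponds to $D^2 = V^2 = 1/2$: a tightly held, less fuzzy atom records almost no path information ($D \to 0$) and the fringes approach full visibility, while a fuzzy, easily rustled atom records the photon's path ($D \to 1$) and the fringes wash out.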
Springs away
In their experiment, the group tested Einstein’s idea about how to detect the path of the photon. Conceptually, if each slit were cut into an extremely thin sheet of paper that was suspended in the air by a spring, a photon passing through one slit should shake the corresponding spring by a degree that would signal the photon’s particle nature. In previous realizations of the double-slit experiment, physicists have incorporated such a spring-like ingredient, and the spring played a major role in describing the photon’s dual nature.
But Ketterle and his colleagues were able to perform the experiment without the proverbial springs. The team’s cloud of atoms is initially held in place by laser light, similar to Einstein’s conception of a slit suspended by a spring. The researchers reasoned that if they were to do away with their “spring,” and observe exactly the same phenomenon, then it would show that the spring has no effect on a photon’s wave/particle duality.
This, too, was what they found. Over multiple runs, they turned off the spring-like laser holding the atoms in place and then quickly took a measurement in a millionth of a second, before the atoms became more fuzzy and eventually fell down due to gravity. In this tiny amount of time, the atoms were effectively floating in free space. In this spring-free scenario, the team observed the same phenomenon: A photon’s wave and particle nature could not be observed simultaneously.
“In many descriptions, the springs play a major role. But we show, no, the springs do not matter here; what matters is only the fuzziness of the atoms,” Fedoseev says. “Therefore, one has to use a more profound description, which uses quantum correlations between photons and atoms.”
The researchers note that the year 2025 has been declared by the United Nations as the International Year of Quantum Science and Technology, celebrating the formulation of quantum mechanics 100 years ago. The discussion between Bohr and Einstein about the double-slit experiment took place only two years later.
“It’s a wonderful coincidence that we could help clarify this historic controversy in the same year we celebrate quantum physics,” says co-author Lee.
This work was supported, in part, by the National Science Foundation, the U.S. Department of Defense, and the Gordon and Betty Moore Foundation.
New machine-learning application to help researchers predict chemical properties
ChemXploreML makes advanced chemical predictions easier and faster — without requiring deep programming skills.
A fundamental goal shared by most chemistry researchers is predicting a molecule’s properties, such as its boiling or melting point. With that prediction in hand, they can move forward with work that yields discoveries leading to medicines, materials, and more. Historically, however, the traditional methods of making these predictions have come at a significant cost — expending time and wear and tear on equipment, in addition to funds.
Enter a branch of artificial intelligence known as machine learning (ML). ML has lessened the burden of molecular property prediction to a degree, but the advanced tools that most effectively expedite the process — by learning from existing data to make rapid predictions for new molecules — require significant programming expertise. This creates an accessibility barrier for many chemists, who may not have the computational proficiency required to navigate the prediction pipeline.
To alleviate this challenge, researchers in the McGuire Research Group at MIT have created ChemXploreML, a user-friendly desktop app that helps chemists make these critical predictions without requiring advanced programming skills. Freely available, easy to download, and functional on mainstream platforms, this app is also built to operate entirely offline, which helps keep research data proprietary. The exciting new technology is outlined in an article published recently in the Journal of Chemical Information and Modeling.
One specific hurdle in chemical machine learning is translating molecular structures into a numerical language that computers can understand. ChemXploreML automates this complex process with powerful, built-in "molecular embedders" that transform chemical structures into informative numerical vectors. Next, the software implements state-of-the-art algorithms to identify patterns and accurately predict molecular properties like boiling and melting points, all through an intuitive, interactive graphical interface.
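The pipeline described here — embed each molecule as a numerical vector, then fit a regressor on those vectors — can be sketched in a few lines of Python. The sketch below is illustrative only, not ChemXploreML’s actual code: it substitutes open-source RDKit Morgan fingerprints for the app’s Mol2Vec and VICGAE embedders, uses scikit-learn’s gradient boosting as a stand-in for its algorithms, and invents a tiny training set.

```python
# Illustrative embed-then-predict pipeline in the spirit of ChemXploreML
# (NOT the app's actual code). Requires rdkit and scikit-learn.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import GradientBoostingRegressor

def embed(smiles: str) -> np.ndarray:
    """Translate a molecular structure into a numerical vector.
    Morgan fingerprints stand in here for embedders such as Mol2Vec or VICGAE."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=1024)
    return np.array(fp, dtype=float)

# Hypothetical training data: SMILES strings with measured boiling points (K).
train_smiles = ["CCO", "CCCCO", "c1ccccc1", "CC(C)O"]
boiling_points = [351.4, 390.9, 353.2, 355.4]

X = np.stack([embed(s) for s in train_smiles])
y = np.array(boiling_points)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Predict the boiling point of a molecule outside the training set.
print(model.predict(embed("CCCO").reshape(1, -1)))  # 1-propanol
```

In ChemXploreML itself, this entire loop — embedding, training, prediction — happens behind the graphical interface, with no code required of the user.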
"The goal of ChemXploreML is to democratize the use of machine learning in the chemical sciences,” says Aravindh Nivas Marimuthu, a postdoc in the McGuire Group and lead author of the article. “By creating an intuitive, powerful, and offline-capable desktop application, we are putting state-of-the-art predictive modeling directly into the hands of chemists, regardless of their programming background. This work not only accelerates the search for new drugs and materials by making the screening process faster and cheaper, but its flexible design also opens doors for future innovations.”
ChemXploreML is designed to evolve over time, so as future techniques and algorithms are developed, they can be seamlessly integrated into the app, ensuring that researchers always have access to the most up-to-date methods. The application was tested on five key molecular properties of organic compounds — melting point, boiling point, vapor pressure, critical temperature, and critical pressure — and achieved accuracy scores of up to 93 percent for the critical temperature. The researchers also demonstrated that a new, more compact method of representing molecules (VICGAE) was nearly as accurate as standard methods such as Mol2Vec, but up to 10 times faster.
“We envision a future where any researcher can easily customize and apply machine learning to solve unique challenges, from developing sustainable materials to exploring the complex chemistry of interstellar space,” says Marimuthu. Joining him on the paper is senior author and Class of 1943 Career Development Assistant Professor of Chemistry Brett McGuire.
Astronomers discover star-shredding black holes hiding in dusty galaxies
Unlike active galaxies that constantly pull in surrounding material, these black holes lie dormant, waking briefly to feast on a passing star.
Astronomers at MIT, Columbia University, and elsewhere have used NASA’s James Webb Space Telescope (JWST) to peer through the dust of nearby galaxies and into the aftermath of a black hole’s stellar feast.
In a study appearing today in Astrophysical Journal Letters, the researchers report that for the first time, JWST has observed several tidal disruption events — instances when a galaxy’s central black hole draws in a nearby star and whips up tidal forces that tear the star to shreds, giving off an enormous burst of energy in the process.
Scientists have observed about 100 tidal disruption events (TDEs) since the 1990s, mostly as X-ray or optical light that flashes across relatively dust-free galaxies. But as MIT researchers recently reported, there may be many more star-shredding events in the universe that are “hiding” in dustier, gas-veiled galaxies.
In their previous work, the team found that most of the X-ray and optical light that a TDE gives off can be obscured by a galaxy’s dust, and therefore can go unseen by traditional X-ray and optical telescopes. But that same burst of light can heat up the surrounding dust and generate a new signal, in the form of infrared light.
Now, the same researchers have used JWST — the world’s most powerful infrared detector — to study signals from four dusty galaxies where they suspect tidal disruption events have occurred. Within the dust, JWST detected clear fingerprints of black hole accretion, a process by which material, such as stellar debris, circles and eventually falls into a black hole. The telescope also detected patterns that are strikingly different from the dust that surrounds active galaxies, where the central black hole is constantly pulling in surrounding material.
Together, the observations confirm that a tidal disruption event did indeed occur in each of the four galaxies. What’s more, the researchers conclude that the four events were products of not active black holes but rather dormant ones, which experienced little to no activity until a star happened to pass by.
The new results highlight JWST’s potential to study in detail otherwise hidden tidal disruption events. They are also helping scientists to reveal key differences in the environments around active versus dormant black holes.
“These are the first JWST observations of tidal disruption events, and they look nothing like what we’ve ever seen before,” says lead author Megan Masterson, a graduate student in MIT’s Kavli Institute for Astrophysics and Space Research. “We’ve learned these are indeed powered by black hole accretion, and they don’t look like environments around normal active black holes. The fact that we’re now able to study what that dormant black hole environment actually looks like is an exciting aspect.”
The study’s co-authors include MIT’s Christos Panagiotou, Erin Kara, and Anna-Christina Eilers, along with Kishalay De of Columbia University and collaborators from multiple other institutions.
Seeing the light
The new study expands on the team’s previous work using another infrared detector — NASA’s Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) mission. Using an algorithm developed by co-author Kishalay De of Columbia University, the team searched through a decade’s worth of data from the telescope, looking for infrared “transients,” or short peaks of infrared activity from otherwise quiet galaxies that could be signals of a black hole briefly waking up and feasting on a passing star. That search unearthed about a dozen signals that the group determined were likely produced by a tidal disruption event.
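The core idea of such a transient search — flag epochs where a galaxy’s infrared flux climbs far above its quiescent baseline — is simple to illustrate. The sketch below is a hypothetical toy, not De’s algorithm: it invents a synthetic light curve and applies a median-plus-MAD threshold, with all parameters chosen arbitrarily.

```python
# Toy illustration of flagging an infrared transient in a light curve
# (NOT the search algorithm used in the study).
import numpy as np

rng = np.random.default_rng(1)
flux = rng.normal(1.0, 0.05, size=200)  # quiescent galaxy, arbitrary units
flux[120:140] += 0.8                    # injected flare, e.g. a candidate TDE

# Robust baseline and scatter estimated from the median and the MAD.
baseline = np.median(flux)
sigma = 1.4826 * np.median(np.abs(flux - baseline))

flare = flux > baseline + 10 * sigma    # ~10-sigma excursions above quiescence
if flare.any():
    start, end = np.flatnonzero(flare)[[0, -1]]
    print(f"transient candidate between epochs {start} and {end}")
```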
“With that study, we found these 12 sources that look just like TDEs,” Masterson says. “We made a lot of arguments about how the signals were very energetic, and the galaxies didn’t look like they were active before, so the signals must have been from a sudden TDE. But except for these little pieces, there was no direct evidence.”
With the much more sensitive capabilities of JWST, the researchers hoped to discern key “spectral lines,” or infrared light at specific wavelengths, that would be clear fingerprints of conditions associated with a tidal disruption event.
“With NEOWISE, it’s as if our eyes could only see red light or blue light, whereas with JWST, we’re seeing the full rainbow,” Masterson says.
A bona fide signal
In their new work, the group looked specifically for a peak in the infrared that could only be produced by black hole accretion — a process by which material is drawn toward a black hole in a circulating disk of gas. This disk produces an enormous amount of radiation, so intense that it can kick electrons out of individual atoms. In particular, such accretion processes can blast several electrons out of atoms of neon, and the resulting ion can transition between energy states, releasing infrared radiation at a very specific wavelength that JWST can detect.
“There’s nothing else in the universe that can excite this gas to these energies, except for black hole accretion,” Masterson says.
The researchers searched for this smoking-gun signal in four of the 12 TDE candidates they previously identified. The four signals include: the closest tidal disruption event detected to date, located in a galaxy some 130 million light years away; a TDE that also exhibits a burst of X-ray light; a signal that may have been produced by gas circulating at incredibly high speeds around a central black hole; and a signal that also included an optical flash, which scientists had previously suspected to be a supernova, or the collapse of a dying star, rather than a tidal disruption event.
“These four signals were as close as we could get to a sure thing,” Masterson says. “But the JWST data helped us say definitively these are bona fide TDEs.”
When the team pointed JWST toward the galaxies of each of the four signals, in a program designed by De, they observed that the telltale spectral lines showed up in all four sources. These measurements confirmed that black hole accretion occurred in all four galaxies. But the question remained: Was this accretion a temporary feature, triggered by a tidal disruption and a black hole that briefly woke up to feast on a passing star? Or was this accretion a more permanent trait of “active” black holes that are always on? In the case of the latter, it would be less likely that a tidal disruption event had occurred.
To differentiate between the two possibilities, the team used the JWST data to detect another wavelength of infrared light, which indicates the presence of silicates, or dust in the galaxy. They then mapped this dust in each of the four galaxies and compared the patterns to those of active galaxies, which are known to harbor clumpy, donut-shaped dust clouds around the central black hole. Masterson observed that all four sources showed very different patterns compared to typical active galaxies, suggesting that the black hole at the center of each of the galaxies is not normally active, but dormant. If an accretion disk formed around such a black hole, the researchers conclude that it must have been a result of a tidal disruption event.
“Together, these observations say the only thing these flares could be are TDEs,” Masterson says.
She and her collaborators plan to uncover many more previously hidden tidal disruption events, with NEOWISE, JWST, and other infrared telescopes. With enough detections, they say TDEs can serve as effective probes of black hole properties. For instance, how much of a star is shredded, and how fast its debris is accreted and consumed, can reveal fundamental properties of a black hole, such as how massive it is and how fast it spins.
“The actual process of a black hole gobbling down all that stellar material takes a long time,” Masterson says. “It’s not an instantaneous process. And hopefully we can start to probe how long that process takes and what that environment looks like. No one knows because we just started discovering and studying these events.”
This research was supported, in part, by NASA.
What do we owe each other?
A new class teaches MIT students how to navigate a fast-changing world with a moral compass.
MIT equips students with the tools to advance science and engineering — but a new class aims to ensure they also develop their own values and learn how to navigate conflicting viewpoints.
Offered as a pilot this past spring, the multidisciplinary class 21.01 (Compass Course: Love, Death, and Taxes: How to Think — and Talk to Others — About Being Human) invites students to wrestle with difficult questions about what it means to be human — and what we owe one another.
The class is part of the Compass Initiative, which is led by faculty from across the MIT School of Humanities, Arts, and Social Sciences (SHASS).
Lily L. Tsai, Ford Professor of Political Science and lead faculty for Compass, says the new course is meant to help students use the humanities and social sciences as their guide to thinking about the kind of humans they want to be and what kind of society they want to help create.
"At MIT, we're some of the people who are creating the technologies that are accelerating change and leading to more unpredictability in the world. We have a special responsibility to envision and reimagine a moral and civic education that enables people to navigate it," says Tsai.
The course is the result of a multi-year collaboration involving over 30 faculty from 19 departments, ranging from Philosophy and Literature to Brain and Cognitive Sciences and Electrical Engineering and Computer Science, all led by a core team of 14 faculty from SHASS and a student advisory board.
During its initial run in the spring, Compass followed an arc that began with students investigating questions of value. Early in the semester, students explored what makes a genius, using Beethoven's "Symphony No. 9" as a case study, accompanied by lectures from Emily Richmond Pollock, associate professor of music, and a podcast conversation with Larry Guth, professor of mathematics, and David Kaiser, professor of physics and science, technology, and society.
Students then grappled with the concept of a merit-based society by digging into the example of the imperial Chinese civil service exam, guided by professor of history Tristan Brown. Next, they questioned what humans really know to be true by examining the universality of language through lectures by professor of linguistics Adam Albright, and the philosophy of truth and knowledge through lectures by professor of philosophy Alex Byrne.
The semester ended with challenging debates about what humans owe one another, including a class designed by Nobel laureate and professor of economics Esther Duflo on taxation and climate burdens.
More than anything, Tsai says, she hopes that Compass prepares students to navigate dorm hallways, the family Thanksgiving table, or future labs or boardroom tables, and learn how to express opinions and actively listen to others with whom they may disagree — all without canceling one another.
The class takes a "flipped classroom" approach: Students watch recorded lectures at home and come to class prepared for discussion and debate. Each section is co-taught by two faculty members, combining disciplines and perspectives.
Second-year mechanical engineering major Kayode Dada signed up because it fulfilled a communications-intensive requirement and offered cross-departmental exposure. But Compass ultimately became more than that to him. "College isn't just about learning science stuff — it's also about how we grow as people," he says. Dada was assigned to a section co-taught by Tsai and professor of literature Arthur Bahr.
Forming a social contract
In the first week, students draft a Rousseau-inspired social compact and learn firsthand how to build a classroom community. "We knew these were deep topics," Dada says. "To get the most out of the class, we had to open up, respect each other, and keep conversations confidential."
One early exercise was especially impactful. After watching lectures by Ford Professor of Philosophy and Women’s and Gender Studies Sally Haslanger on value, students were asked to draw a map representing their values, with arrows pointing from ones that were more instrumental to ones that were fundamental.
At first, Dada felt stuck. Growing up in Kentucky, the son of a Nigerian immigrant who had dreamed of attending MIT himself, Dada had focused for years on gaining admission to the Institute. "I thought getting into MIT would make me feel fulfilled," he admits. "But once I got here, I realized the work alone wasn't enough."
The values exercise helped him reorient. He identified practicing Christianity, hard work, helping others, and contributing to society as central to his belief system. The exercise also led Dada to volunteer at a robotics camp for kids in Louisville, sharing his MIT education with others.
Who governs science?
Later in the semester, Dada was animatedly representing a figure whose views contradicted his own: James D. Watson, the Nobel Prize winner who co-discovered DNA's structure — and is also a controversial figure.
That week, each student had been assigned a persona from a 1976 Cambridge City Council hearing debating recombinant DNA research. The class, designed by Associate Professor Robin Scheffler, was investigating the question: Who governs science — scientists, the government, those who fund research, or the public?
They revisited the real-life debate over recombinant DNA research carried out in MIT and Harvard University labs, which citizens at the time feared could enable the development of biological weapons and pose other threats to the public. Pioneered in the 1970s, the technique involved splicing genes into the E. coli bacterium. In the Compass classroom, students argued different sides from their personas: banning the research, moving labs outside city limits, or proceeding without government interference.
Dada notes how faculty intentionally seeded conflicting viewpoints. "It taught me how to negotiate with someone who has different values and come to a resolution that respects everyone involved," he says. "That's something I want to keep exploring."
When Dada closed his presentation with frantically Googled sentimental music piped unexpectedly from his phone, his classmates laughed in appreciation. The atmosphere was more intimate than academic — an ethos Tsai hoped to cultivate. "They really built intellectual relationships based on trust," she says. "There was a lot of laughter. They took joy in disagreeing and debating."
Changing opinions
First-year student-athlete Shannon Cordle, who is majoring in mechanical engineering, didn't know what to expect from Compass. Since it was new, there were no student reviews. What stood out to her was the grading system: 15 percent of the final grade is based on a rubric each student created for themselves.
Cordle's goal was to become more comfortable expressing an opinion — even before she's fully formed it. "It's easy to stay quiet when you're unsure," she says. "Compass helped me practice speaking up and being willing to be wrong, because that's how you learn."
One week, the class debated whether a meritocracy creates a just society — an especially relevant topic at MIT, given its famously selective admissions process.
Students picked their stance beforehand, and were then invited to change it as they gained more perspectives during the debate.
"This helps students grasp not only the flaws in another viewpoint, but also how to strengthen their arguments," Tsai says.
Cordle, who hopes to go into prosthetics, views her future field as representing the perfect balance between creativity and ethics. "The humanities challenge how we view our fields as scientists and engineers," she says.
A compass helps travelers find their way — but it's most useful when they need to reorient and change direction. In that spirit, Compass prepares students not just to ask big questions, but to keep asking — and keep adapting — as their lives and careers evolve.
“Bringing these unexpected class elements together with students and faculty generated magical alchemy — a kind of transformation that we didn't even know we could create,” Tsai says.
In addition to the class, the MIT Compass Podcast takes up these fundamental questions with guests from across the MIT schools of Science and Engineering. There are also plans to adapt the residential version of the class for online learners on MITx.
In addition to philanthropic support from MIT Corporation life member emeritus Ray Stata '57, the initiative is supported by the Office of the Vice Chancellor and the MIT Human Insight Collaborative's SHASS Education Innovation Fund, which promotes new, transformative educational approaches in SHASS fields.
Connect or reject: Extensive rewiring builds binocular vision in the brain
A first-of-its-kind study in mice shows neurons add and shed synapses at a frenzied pace during development to integrate visual signals from the two eyes.
Scientists have long known that the brain’s visual system isn’t fully hardwired from the start — it becomes refined by what babies see — but the authors of a new MIT study still weren’t prepared for the degree of rewiring they observed when they took a first-ever look at the process in mice as it happened in real time.
As the researchers in The Picower Institute for Learning and Memory tracked hundreds of “spine” structures housing individual network connections, or “synapses,” on the dendrite branches of neurons in the visual cortex over 10 days, they saw that only 40 percent of the ones that started the process survived. Refining binocular vision (integrating input from both eyes) required numerous additions and removals of spines along the dendrites to establish an eventual set of connections.
Former graduate student Katya Tsimring led the study, published this month in Nature Communications, which the team says is the first in which scientists tracked the same connections all the way through the “critical period,” when binocular vision becomes refined.
“What Katya was able to do is to image the same dendrites on the same neurons repeatedly over 10 days in the same live mouse through a critical period of development, to ask: What happens to the synapses or spines on them?” says senior author Mriganka Sur, the Paul and Lilah Newton Professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences. “We were surprised by how much change there is.”
Extensive turnover
In the experiments, young mice watched as black-and-white gratings with lines of specific orientations and directions of movement drifted across their field of view. At the same time, the scientists observed both the structure and activity of the neurons’ main body (or “soma”) and of the spines along their dendrites. By tracking the structure of 793 dendritic spines on 14 neurons at roughly Day 1, Day 5 and Day 10 of the critical period, they could quantify the addition and loss of the spines, and therefore the synaptic connections they housed. And by tracking their activity at the same time, they could quantify the visual information the neurons received at each synaptic connection. For example, a spine might respond to one specific orientation or direction of grating, several orientations, or might not respond at all. Finally, by relating a spine’s structural changes across the critical period to its activity, they sought to uncover the process by which synaptic turnover refined binocular vision.
Structurally, the researchers saw that 32 percent of the spines evident on Day 1 were gone by Day 5, and that 24 percent of the spines apparent on Day 5 had been added since Day 1. The period between Day 5 and Day 10 showed similar turnover: 27 percent were eliminated, but 24 percent were added. Overall, only 40 percent of the spines seen on Day 1 were still there on Day 10.
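As a concrete illustration of the bookkeeping behind these percentages — a hypothetical sketch, not the study’s analysis code — turnover can be computed from sets of spine IDs recorded at each imaging session:

```python
# Hypothetical spine-turnover bookkeeping (not the study's analysis code).
# Each set holds the IDs of spines identified on one imaging day.
day1 = {"s01", "s02", "s03", "s04", "s05"}
day5 = {"s01", "s02", "s04", "s06"}      # s03, s05 eliminated; s06 added
day10 = {"s01", "s04", "s06", "s07"}     # s02 eliminated; s07 added

def turnover(earlier: set, later: set) -> tuple[float, float]:
    """Fraction of 'earlier' spines eliminated, fraction of 'later' spines new."""
    eliminated = len(earlier - later) / len(earlier)
    added = len(later - earlier) / len(later)
    return eliminated, added

print(turnover(day1, day5))           # cf. the reported 32% and 24%
print(turnover(day5, day10))          # cf. the reported 27% and 24%
print(len(day1 & day10) / len(day1))  # cf. 40% of Day 1 spines left on Day 10
```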
Meanwhile, only four of the 13 tracked neurons that responded to visual stimuli at the outset still responded on Day 10. The scientists don’t know for sure why the other nine stopped responding, at least to the stimuli they once responded to, but it’s likely they now serve a different function.
What are the rules?
Having beheld this extensive wiring and rewiring, the scientists then asked what entitled some spines to survive over the 10-day critical period.
Previous studies have shown that the first inputs to reach binocular visual cortex neurons are from the “contralateral” eye on the opposite side of the head (so in the left hemisphere, the right eye’s inputs get there first), Sur says. These inputs drive a neuron’s soma to respond to specific visual properties such as the orientation of a line — for instance, a 45-degree diagonal. By the time the critical period starts, inputs from the “ipsilateral” eye on the same side of the head begin joining the race to visual cortex neurons, enabling some to become binocular.
It’s no accident that many visual cortex neurons are tuned to lines of different directions in the field of view, Sur says.
“The world is made up of oriented line segments,” Sur notes. “They may be long line segments; they may be short line segments. But the world is not just amorphous globs with hazy boundaries. Objects in the world — trees, the ground, horizons, blades of grass, tables, chairs — are bounded by little line segments.”
Because the researchers were tracking activity at the spines, they could see how often they were active and what orientation triggered that activity. As the data accumulated, they saw that spines were more likely to endure if (a) they were more active, and (b) they responded to the same orientation as the one the soma preferred. Notably, spines that responded to both eyes were more active than spines that responded to just one, meaning binocular spines were more likely to survive than non-binocular ones.
“This observation provides compelling evidence for the ‘use it or lose it’ hypothesis,” says Tsimring. “The more active a spine was, the more likely it was to be retained during development.”
The researchers also noticed another trend. Across the 10 days, clusters emerged along the dendrites in which neighboring spines were increasingly likely to be active at the same time. Other studies have shown that by clustering together, spines are able to combine their activity to be greater than they would be in isolation.
By these rules, over the course of the critical period, neurons apparently refined their role in binocular vision by selectively retaining inputs that reinforced their budding orientation preferences, both via their volume of activity (a synaptic property called “Hebbian plasticity”) and via their correlation with their neighbors (a property called “heterosynaptic plasticity”). To confirm that these rules were enough to produce the outcomes they were seeing under the microscope, the researchers built a computer model of a neuron, and indeed the model recapitulated the same trends they saw in the mice.
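A minimal toy version of those retention rules — a sketch with arbitrary weights, not the authors’ published model — might score each spine by its activity, its orientation match with the soma, and its agreement with a neighboring spine:

```python
# Toy sketch of activity- and alignment-based spine retention
# (illustrative only; NOT the authors' published neuron model).
import random

random.seed(0)
SOMA_PREF = 45.0  # soma's preferred orientation in degrees (assumed)

# Each spine: (activity level in [0, 1], preferred orientation in degrees)
spines = [(random.random(), random.choice([0.0, 45.0, 90.0, 135.0]))
          for _ in range(1000)]

def retention_probability(activity, orientation, neighbor_orientation):
    """More active spines ('use it or lose it'), spines matching the soma's
    preference, and spines agreeing with a neighbor are all likelier to
    survive. The weights here are arbitrary assumptions."""
    p = 0.2 + 0.4 * activity
    if orientation == SOMA_PREF:
        p += 0.2
    if orientation == neighbor_orientation:
        p += 0.1
    return min(p, 1.0)

survivors = []
for i, (act, ori) in enumerate(spines):
    neighbor_ori = spines[(i + 1) % len(spines)][1]  # next spine on the dendrite
    if random.random() < retention_probability(act, ori, neighbor_ori):
        survivors.append(ori)

print(f"{len(survivors)}/{len(spines)} spines retained")
print("survivors matching soma preference:",
      sum(o == SOMA_PREF for o in survivors) / len(survivors))
```

Under these assumed weights, the surviving population ends up enriched for spines aligned with the soma’s preference, mirroring the qualitative trend the researchers report.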
“Both mechanisms are necessary during the critical period to drive the turnover of spines that are misaligned to the soma and to neighboring spine pairs,” the researchers wrote, “which ultimately leads to refinement of [binocular] responses such as orientation matching between the two eyes.”
In addition to Tsimring and Sur, the paper’s other authors are Kyle Jenks, Claudia Cusseddu, Greggory Heller, Jacque Pak Kan Ip, and Julijana Gjorgjieva. Funding sources for the research came from the National Institutes of Health, The Picower Institute for Learning and Memory, and the Freedom Together Foundation.
Professor Emeritus Daniel Kleppner, highly influential atomic physicist, dies at 92
The “godfather of Bose-Einstein condensation” and MIT faculty member for 37 years led research into atomic, molecular, and optical physics that paved the way for GPS and quantum computing.
Daniel Kleppner, the Lester Wolfe Professor Emeritus of Physics at MIT, whose work in experimental atomic physics made an immense mark on the field, died on June 16 at the age of 92, in Palo Alto, California.
Kleppner’s varied research examined the interactions of atoms with static electric and magnetic fields and with radiation. It included precision measurements with hydrogen masers, among them the co-invention of the hydrogen maser atomic clock; studies of the physics of Rydberg atoms and cavity quantum electrodynamics; and pioneering work in Bose-Einstein condensation (BEC).
Kleppner, who retired in 2003 after 37 years at MIT, was a highly literate and articulate scientist whose exacting research and communication skills helped set the direction of modern atomic, molecular, and optical (AMO) physics. From 1987 to 2000, he was associate director of the MIT Research Laboratory of Electronics (RLE), and served as interim director in 2001. He also co-founded the MIT-Harvard Center for Ultracold Atoms (CUA) in 2000, where he was co-director until 2006.
While he was never awarded a Nobel Prize, Kleppner’s impact on the field of atomic physics and quantum optics, and his generous mentorship, enabled the Nobel achievements of many others. His patient and exacting pursuit of discovery produced basic research insights that grew into major achievements. His extensive research into the tiny atom provided the fundamental knowledge necessary for the huge: the eventual development of groundbreaking technologies such as the global positioning system (GPS), magnetic resonance imaging (MRI), and quantum computing.
“He was a leader in the department, and a leader in the American Physical Society,” says Wolfgang Ketterle, the John D. MacArthur Professor of Physics at MIT and a 2001 Nobel laureate. “He was a statesman of science. He was this eloquent person, this master of words who could express things in memorable ways, and at the same time he has this sense of humility.”
“Dan Kleppner was a giant in the area of AMO physics, and in science more broadly,” says John Doyle PhD ’91, Harvard Quantum Initiative co-director and Kleppner advisee who helped Kleppner create the Bose-Einstein condensate from atomic hydrogen. “Perhaps his most impactful legacy is leading a culture of respect and supportive community actions that all scientists in the area of AMO physics enjoy today. Not only did his science lay the path for current research directions, his kindness, erudition, and commitment to community — and community service — are now ever-expanding waves that guide AMO physics. He was a mentor and friend to me."
Kleppner’s daughter Sofie Kleppner notes: “People who worked on early lasers never imagined we would be scanning groceries at the checkout counter. When they developed the hydrogen maser, they were a bunch of nerdy people who really wanted to understand Einstein’s theory of relativity. This was the basis for GPS, this is how our flights run on time. Our dad was convinced that basic research today could lead to all sorts of valuable things down the road.”
Early life and career
Born in Manhattan on Dec. 16, 1932, Kleppner was the son of Vienna native and advertising agency founder Otto Kleppner, who wrote the best-selling book “Advertising Procedure.” His mother, Beatrice (Taub) Kleppner, grew up in New Jersey and was a graduate of Barnard College. She helped with Otto’s manuscripts. Daniel Kleppner was the second of three siblings; his brother, the late Adam Kleppner, was a professor of mathematics at the University of Maryland, and his sister, Susan Folkman, was a research psychologist at the University of California at Berkeley.
“As a teenager, I just liked building things,” Kleppner once said. “And that turned out to be very useful when I went on to become an experimental physicist. I had a crystal radio, so I could listen to the radio over earphones. And the thought that the signals were just coming out of the atmosphere, I remember thinking: totally remarkable. And actually, I still do. In fact, the idea of the electromagnetic field, although it’s very well understood in physics, always seems like a miracle to me.”
In high school, he was inspired by his physics teacher, Arthur Hussey, who allowed Kleppner to work all hours in the labs. “There was one time when the whole school was having a pep rally, and I wasn’t that interested in cheering football, so I stayed up and worked in the lab, and the high school principal noticed that I was in there and called me in and gave me a dressing down for lack of school spirit.”
He didn’t care. Hussey talked with Kleppner about quantum mechanics, and “that sort of put a bee in my bonnet on that,” and taught him a little calculus. “In those years, physics was extremely fashionable. These were the post-war years, and physicists were considered heroes for having brought the war to conclusion with the atom bomb, and … the development of radar.”
He knew by then that he was “destined to spend a life in physics,” he said in a video interview for InfiniteMIT. “It was an easy era to become delighted by physics, and I was.”
Studying physics at Williams College, he was drawn to Albert Einstein’s theory of general relativity. He built a programmable machine that he called a forerunner of cybernetics. Williams also instilled in him a lifelong love of literature, and he almost became an English major. However, he didn’t appreciate what he called the school fraternities’ “playboy” and “anti-intellectual” atmosphere, and graduated early, in three years, in 1953.
He deferred his acceptance to Harvard University to take a Fulbright Fellowship at Cambridge University, where he met the young physicist Kenneth Smith, whose research involved atomic beam resonance. Smith introduced him to the book “Nuclear Moments” by Harvard professor Norman Ramsey, and presented a proposal by Ramsey’s advisor, I.I. Rabi, who had invented a technique that could make an atomic clock so precise “that you could see the effect of gravity on time that Einstein predicted,” said Kleppner.
“I found that utterly astonishing,” Kleppner noted. “The thought that gravity affects time: I had a hard time just visualizing that.”
When Kleppner wandered Harvard’s halls in 1955, he was excited to see a door with Ramsey’s name on it. He was interested in Ramsey’s research on molecular beam magnetic resonance, atomic clocks, and precision measurements. “Fortunately, I came along at a time when he had an opening in his research group,” Kleppner recalled.
A new atomic clock
As Kleppner’s advisor, Ramsey encouraged him to create a new type of atomic clock, believing that cesium and ammonia masers, a technology of amplified microwaves, were not precise enough to measure the effect of gravity on time.
Kleppner’s thesis was on using the concepts behind an ammonia maser to advance toward a hydrogen maser, which uses the natural microwave frequency of hydrogen atoms and amplifies it through stimulated emission of radiation. Kleppner discovered that coherent cesium atoms can bounce from properly prepared surfaces without losing their coherence.
After his 1959 PhD, Kleppner stayed on at Harvard, becoming an assistant professor in 1962.
Kleppner’s research on hydrogen led to a method to keep hydrogen atoms locked in a glass container for study over a longer period of time. The result, featuring hydrogen atoms bouncing within a microwave cavity, is used to stabilize the frequency of a clock to a precision better than one microsecond in a year.
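For scale — a back-of-the-envelope conversion, not a figure quoted in the article — one microsecond per year corresponds to a fractional stability of

$$ \frac{\Delta t}{t} \approx \frac{10^{-6}\ \text{s}}{3.15\times 10^{7}\ \text{s}} \approx 3\times 10^{-14}, $$

i.e., the clock’s frequency holds steady to a few parts in $10^{14}$.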
In 1960, he and Ramsey successfully created a new atomic clock whose exceptional stability could confirm the minute effects of gravity on time, as predicted by Einstein’s theory of general relativity.
The current generation of optical clocks “are good enough to see the gravitational red shift for a few centimeters in height, so that’s quite extraordinary, and it’s had an extraordinary result,” said Kleppner. “We got to rethink just what we mean by time.”
While the hydrogen maser did verify Einstein’s conjecture about time and gravity, it took more than a decade before being widely used, at first by radio astronomers. Today, atomic clocks such as the hydrogen maser are used in applications requiring high short-term stability: synchronizing the ground-based timing systems that track global positioning satellites; timekeeping at the U.S. Naval Observatory, which maintains the precise and stable time reference known as UTC(USNO); very-long-baseline interferometry (VLBI), which enables astronomers to achieve very high resolution and study distant radio sources, including black holes; and, indirectly, magnetic resonance imaging.
“When we first set out to make these atomic clocks, our goals were about the least practical you can think of,” Kleppner said in an interview with the MIT Physics Department. “From being a rather abstract idea that you’d like to somehow witness, it becomes a very urgent thing for the conduct of human affairs.”
Ramsey went on to win the Nobel Prize in Physics in 1989 for his work on the separated oscillatory fields method and its application in the hydrogen maser and atomic clocks.
MIT, ultracold gases, and BEC advancements
Kleppner figured he wouldn’t get tenure at Harvard, “because no matter how generous and good-spirited Norman was, he casts a long shadow, and it was good for me to be at just the right distance. When I came to MIT, I had a pallet of experiments that I wanted to pursue, and some ideas about teaching that I wanted to pursue, and the transition was very simple.”
Kleppner joined the Institute in 1966, and his Harvard PhD student David Pritchard — now an MIT professor himself — followed him, to work on scattering experiments: Kleppner worked with pulsed lasers, and Pritchard with continuous-wave (CW) lasers.
“He was young, he was verbal, and he seemed to have new ideas about what to do,” says Pritchard. “We foresaw how important lasers would become. For a long time, it was just Dan and myself. That was actually the era in which lasers took over. Dan and I started off, we both got into lasers, and he did Rydberg atoms, and I did collisions and spectroscopy of weakly bound molecules and two-photon spectroscopy.”
Kleppner built the tiny MIT atomic physics group into what U.S. News and World Report ranked in 2012 as the nation’s No. 1 atomic physics program. “Dan was the leader on this,” recalled Pritchard. “To start from non-tenure and build it into the number-one ranked department in your subfield, that’s a lifetime achievement.”
The group became what Pritchard called “the supergroup” of laser developers that included Charles Townes, who won the Nobel for his work; Ali Javan, who established a major laser research center at MIT; and Dolly Shibles. Pritchard joined the faculty in 1970, and Ketterle joined in 1990 as his postdoc. “We were pioneers, and the result was of course that our total group had a bigger impact.”
“He’s not just the father figure of the field, he is my scientific father,” says Pritchard. “When I’m writing something and it’s not going very well, I would sort of think to myself, ‘What would Dan say? What would he advise you?’”
With MIT low-temperature physicist Tom Greytak ’63, PhD ’67, Kleppner developed two revolutionary techniques — magnetic trapping and evaporative cooling. When the scientific community combined these techniques with laser cooling, atomic physics moved in a major new direction.
In 1995, a group of researchers led by MIT-trained physicists Eric Cornell PhD ’90 and Carl Wieman ’73 made a BEC using rubidium atoms, and Ketterle succeeded with sodium atoms. For this achievement, the three received the 2001 Nobel Prize in Physics. Kleppner called BEC “the most exciting advance in atomic physics for decades.”
At a conference on BEC in 1996, Ketterle recalls Kleppner describing his own contributions: “'I feel like Moses, who showed his people the Holy Land, but he never reached it himself.' This was exactly what Dan did. He showed us the Holy Land of Bose-Einstein condensation. He showed us what is possible … He was the godfather of Bose-Einstein condensation.”
But he did reach the Holy Land. In 1998, when only a few groups had been able to create BECs, Kleppner and Greytak realized a hydrogen BEC. When he presented their work at the summer school in Varenna soon afterward, he received a long-lasting standing ovation — after 20 years of hard work, he had reached his goal.
“It is an irony that when Dan started this work, hydrogen was the only choice to reach the low temperatures for BEC,” says Ketterle. But in the end, hydrogen turned out to have special properties that made reaching BEC much harder than with other atoms.
Rydberg atoms
In 1976, Kleppner pioneered the field of Rydberg atoms — highly excited atoms that share the simple properties characterizing hydrogen. Kleppner showed that these states could be excited by a tunable laser and easily detected with field ionization. He then mapped out their response in high electric and magnetic fields, which he used to provide new physical insights into the connections between quantum mechanics and classical chaos.
In 1989, his research into atomic energy levels, under conditions where the corresponding classical motion is chaotic, mapped out the positions of thousands of quantum levels as a function of laser frequency and applied field using high-resolution laser spectroscopy. His observations gave new physical insight into the implications of classical chaos on quantum systems.
“I see Dan as being the inventor of Rydberg atoms,” says Dan’s former student William Phillips PhD ’76, a physicist at the National Institute of Standards and Technology (NIST). “Of course, Rydberg atoms is something that nature gives you, but Dan was the one who really understood this was something that you could use to do really new and wonderful things.”
Such atoms have proved to be useful for studying the transition between quantum mechanics and classical chaos. Kleppner’s 1976 paper on Rydberg atoms’ strong interactions, long lifetimes, and sensitivity to external fields has led to current scientific research and multimillion-dollar startups interested in developing the promising Rydberg quantum computer; highly accurate measurements of electric and magnetic fields; and in quantum optics experiments.
“Largely due to Dan’s seminal roadmap, Rydberg atoms have become atomic physics’ E. coli for investigating the interaction of radiation with matter,” wrote Ketterle in his nomination for Kleppner’s 2017 APS Medal for Exceptional Achievement in Research. “They are being used by others in quests for experimental systems to realize Schrödinger’s cat, as well as for making a quantum computer.”
In 1981, Kleppner suggested in a theoretical paper the possibility of suppressing spontaneous emission with a cavity: excited atoms cannot decay when the cavity lacks the oscillatory modes to receive their emissions. He subsequently demonstrated the effect, launching the field of cavity quantum electrodynamics (cQED) — the study of how light confined within a reflective cavity interacts with atoms or other particles. This field has led to the creation of new lasers and photonic devices.
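The mechanism at work here is standard textbook quantum optics, though the article doesn’t spell it out: by Fermi’s golden rule, the spontaneous-emission rate is proportional to the density of electromagnetic modes at the atom’s transition frequency,

$$ \Gamma = \frac{2\pi}{\hbar}\,\bigl|\langle f|\hat{H}'|i\rangle\bigr|^{2}\,\rho(\omega_{0}), $$

so a cavity engineered so that $\rho(\omega_{0}) \approx 0$ — no modes available to carry the photon away — leaves the excited atom with no way to decay.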
“This work fundamentally changed the way physicists regard the process of spontaneous emission by showing that it is not a fixed property of a quantum state, but can be modified and controlled,” said Ketterle. “Current applications of these principles, which Dan terms ‘wrecking the vacuum,’ include thresholdless lasers and the construction of photonic bandgap materials in which light propagation is forbidden at certain frequencies.”
MIT-Harvard Center for Ultracold Atoms
In 2000, Kleppner secured National Science Foundation funding to co-found the Center for Ultracold Atoms (CUA), an MIT-Harvard collaboration that linked RLE with the Harvard Department of Physics to explore the physics of ultracold atoms and quantum gases. Kleppner served as its first director until 2006, and was a member of a group that included MIT professors Ketterle, Pritchard, Vladan Vuletic, Martin W. Zwierlein, Paola Cappellaro PhD ’06, and Isaac Chuang ’90.
“Many centers disappear after 10 to 20 years; sometimes their mission is fulfilled,” says Ketterle, the CUA director from 2006 to 2023. “But given the excitement and the rapid evolution in atomic physics, the CUA is a super-active center brimming with excitement, and we just recently got renewed. That’s partially due to the efforts of Dan. He created the tradition of atomic physics at MIT. We are one of the best atomic physics groups in the world. And we are really a family.”
Boost-phase intercept report
Kleppner co-authored a highly influential 2003 report that examined the technical feasibility of boost-phase intercept, a concept central to the controversial Strategic Defense Initiative (SDI) proposed by President Ronald Reagan and nicknamed “Star Wars,” which purportedly would render nuclear weapons obsolete. The focus of the APS Study on Boost-Phase Intercept for National Missile Defense, published as a special supplement to Reviews of Modern Physics, was on the physics and engineering challenges of intercepting a missile during its boost phase.
“This was a subject on which I had no technical background at all,” Kleppner recalled, so he expressed gratitude for the skills of co-chair Fred Lamb of the University of Illinois. “But the APS [American Physical Society] felt that it was important to have information for the public … and no one knew anything about it. It was the point in my life where I could do that. And I feel that you have an obligation when the need arises and you can do it, to do that.”
The result? “Technically, it really would not succeed, except in very limited circumstances,” Kleppner said. Added Pritchard, “It vastly changed the path of the nation.”
“He was the perfect person to chair the committee,” says Ketterle. “He excelled in being neutral and unbiased, and to create a no-nonsense report. I think the APS was very proud of this report. It shows how physicists analyze something which was at that moment of immense political and societal importance. This report helped to understand what laser weapons cannot do and what they can do. The fact that (SDI) eventually, slowly, disappeared, the report may have contributed to that.”
Dedicated educator
Kleppner trained generations of physicists, including 23 PhD students he advised who have gone on to positions at major universities and won major scientific awards.
He was awarded the Oersted Medal of the American Association of Physics Teachers in 1997, and earned the Institute’s prestigious 1995-1996 James R. Killian, Jr. Faculty Achievement Award for his service to MIT and society on behalf of atomic physics. “He has given generously of his time and effort to the formation of national science policy, and he has served the Institute with distinction as teacher, administrator and counselor,” the Killian committee wrote.
Kleppner and Ramsey wrote the widely used text “Quick Calculus” in 1972; its third edition was updated in 2022 with the MIT Department of Physics’ Peter Dourmashkin. With Robert J. Kolenkow, Kleppner also wrote “An Introduction to Mechanics” in 1973, with a second edition in 2013. Physics department head Deepto Chakrabarty ’88 called it “a masterpiece”: “It has formed the foundation of our freshman 8.012 course for potential physics majors for over 50 years and has provided a deep, elegant, and mathematically sophisticated introduction to classical mechanics for physics majors across the U.S. It was my own introduction to serious physics as an MIT freshman in 1984.”
Recently, while Kleppner was being wheeled into surgery, one of the medical personnel noticed that his patient was the author of that book and blurted out, “Oh my God, I still am wondering about one of those problems that I found so difficult,” recalls his wife, Bea, laughing.
Kleppner called his method of teaching “an engagement with the students and with the subject.” He said that his role model for teaching was his wife, who taught psychology at Beaver Country Day High School. “Fortunately, at MIT, the students are so great. There’s nothing tough about teaching here, except trying to stay ahead of the students.”
He leaves a legacy of grateful physicists impacted by his generous teaching style.
“I’ve always felt that I’ve just been incredibly lucky to be part of Dan’s group,” says Phillips, who was at Princeton when his research into magnetic resonance caught Kleppner’s attention, prompting Kleppner to invite him to MIT. “Dan extended this idea to putting this hydrogen maser in a much higher magnetic field. Not that many people are trained by somebody like Dan Kleppner in the art of precision measurement.”
Kleppner also gifted Phillips an apparatus he built for his thesis, which shaved years off the laser cooling experiments that led to Phillips’ Nobel.
Ketterle credited Kleppner’s mentorship for his career at MIT. “He was an older, experienced person who believed in me. He had more trust in me than I had initially myself. I felt whenever I was at a crossroads, I could go to Dan and ask him for advice. When I gave him a paper to edit … there was red ink all over it, but he was absolutely right on almost everything.”
In 2003, Kleppner was dismayed by the statistic that over 60 percent of middle and high school teachers teaching physics have no background in the subject. He started the CUA’s Teaching Opportunities in Physical Science summer program with his former postdoc Ted Ducas to train physics majors to prepare and teach physics material to middle and high school students. In its 14-year run, the program worked with 112 students.
According to Ducas, one survey “indicates over 60 percent of our undergraduates have gone into, or plan to go into, pre-college teaching — a higher percentage than expected, because physics majors have so many other career opportunities often paying significantly more. The potential positive impact of that number of highly qualified and motivated teachers is dramatic.”
Kleppner also partnered with Japanese mathematician Heisuke Hironaka on the mentoring program Japanese Association for Mathematical Sciences (JAMS), which connected American college science students with their Japanese counterparts. “His interest in ensuring that future generations also see the value of international communities was reflected in JAMS,” says Sofie Kleppner.
Recognitions and public service
Kleppner was promoted to professor in 1974 and headed the physics department’s Division of Atomic, Plasma and Condensed Matter Physics from 1976 to 1979. He was named the Lester Wolfe Professor of Physics in 1985.
Active in the interface between physics and public policy, he served on more than 30 committees. For the APS, he was on the Panel on Public Affairs (POPA), chaired the Physics Planning Committee and the Division of Atomic, Molecular and Optical Physics, and contributed to a study on the growth and mentorship of young physics professors. He chaired a report for the National Academy of Sciences on atomic physics that he presented to various congressional committees, served on the National Research Council’s Physics Survey Committee, and chaired the International Union of Pure and Applied Physics’ Commission on Atomic and Molecular Physics. At MIT, he also served as an ombudsperson for the Physics Department.
Kleppner was a fellow of the American Academy of Arts and Sciences, American Association for the Advancement of Science, OSA (now Optica), French Academy of Sciences, and the American Philosophical Society; a member of the National Academy of Sciences; and a Phi Beta Kappa lecturer.
His interest in literature at Williams bloomed into a secondary career as a writer, including decades of writing witty and insightful, yet accessible, pieces for Physics Today, including his “Reference Frame” columns on physics history and policy.
Kleppner was a recipient of many awards, including the prestigious Wolf Prize in 2005 “for groundbreaking work in atomic physics of hydrogenic systems, including research on the hydrogen maser, Rydberg atoms, and Bose-Einstein condensation.” Other accolades include a 2014 Benjamin Franklin Medal and a 2006 National Medal of Science, presented by U.S. President George W. Bush. He also received the Frederic Ives Medal (2007), the William F. Meggers Award (1991), the Lilienfeld Prize (1991), and the Davisson-Germer Prize (1986).
His articles, congressional testimony, and advocacy on behalf of physicists around the world at one point inspired his Physics Planning Committee colleagues to present him with a Little League trophy of a golden baseball player, inscribed “Dan Kleppner — Who Went to Bat for Atomic Physics.”
Kleppner said that he was inspired by his mentor, Ramsey, to get involved in the scientific community. “It’s a privilege to be a scientist in this country,” said Kleppner. “And I think that one has some obligation to pay for the privilege, when you can.”
He wrote, “Any scenario for a decent future of our nation and the world must include a reasonable component of science that is devoted to the search for new knowledge. We cannot afford to abandon this vision under a barrage of criticism, no matter how eloquent or powerful the critics.”
Family and retired life
Kleppner met his future wife, Beatrice Spencer, in 1954 aboard the SS United States, when both were England-bound and in their second year of studies at Cambridge. They began as friends, and eventually married in 1958, in Ipswich, Massachusetts. They raised their three children, Sofie, Paul, and Andrew, at their home in Belmont, Massachusetts, and their vacation home in Vermont.
Kleppner’s family described him as an optimist who didn’t believe in lying, worrying, or unethical behavior. He and Bea generously invited into their home anyone in need. “When we were growing up, we had the international community in our house,” recalls Sofie. “He was just a tremendously generous person. At my father’s 80th birthday celebration at MIT, there were three hours of five-minute reminiscences. It was really moving to hear the number of people who felt that just having the open door at my parents’ house meant the difference to them as they went through difficult times.”
In his retirement, Kleppner continued with his woodworking projects, including building beds, lamps, cabinets, a beautiful spiral staircase, a cradle curved like the hull of a boat, and bookcases featuring superellipses, closed curves that blend elements of an ellipse and a rectangle.
“I enjoy designing,” he said in one video. “It’s the same instinct for making things work in experimental physics. It’s lovely to make a piece of apparatus that starts functioning, even if the experiment doesn’t do what you want it to do. There’s always a lot of jubilation when the apparatus is first turned on and first works.”
His last article for Physics Today was in 2020. In his later years, he kept in touch with his colleagues, swapping book ideas with Ketterle’s wife, Michele Plott, and, since the Covid-19 pandemic, maintaining regular Zoom meetings with two groups: one of his former students, hosted by Mike Kash, and another, dubbed “The Famous Physicists,” that included Phillips and their Brazilian colleague Vanderlei Bagnato.
“In recent years, I would still go to Dan for advice about difficult questions,” says Phillips, “sometimes about physics, sometimes just about life and public policy, because maybe I always felt that if there was anything you wanted done in which physics or science was part of the question that Dan would be the best person to do it.”
His family says that Kleppner suddenly fell ill at a Father’s Day dinner. According to his wife, his last words before being rushed to the hospital were a toast to his grandson, who had recently graduated from high school: “To Darwin and all youth who have new and exciting ideas.”
Says Bea, “He always said that you have to be optimistic to be a scientist, because you have to be patient. Things don’t work out and they’re fiddly, and there are lots of things that go wrong. His last words were ones that make you feel there’s hope for the future.”
Five MIT faculty elected to the National Academy of Sciences for 2025
Rodney Brooks, Parag Pathak, Scott Sheffield, Benjamin Weiss, Yukiko Yamashita, and 13 MIT alumni are recognized by their peers for their outstanding contributions to research.
The National Academy of Sciences (NAS) has elected 120 members and 30 international members, including five MIT faculty members and 13 MIT alumni. Professors Rodney Brooks, Parag Pathak, Scott Sheffield, Benjamin Weiss, and Yukiko Yamashita were elected in recognition of their “distinguished and continuing achievements in original research.” Membership to the National Academy of Sciences is one of the highest honors a scientist can receive in their career.
Elected MIT alumni include: David Altshuler ’86, Rafael Camerini-Otero ’66, Kathleen Collins PhD ’92, George Daley PhD ’89, Scott Doney PhD ’91, John Doyle PhD ’91, Jonathan Ellman ’84, Shanhui Fan PhD ’97, Julia Greer ’97, Greg Lemke ’78, Stanley Perlman PhD ’72, David Reichman PhD ’97, and Risa Wechsler ’96.
Those elected this year bring the total number of active members to 2,662, with 556 international members. The NAS is a private, nonprofit institution that was established under a congressional charter signed by President Abraham Lincoln in 1863. It recognizes achievement in science by election to membership, and — with the National Academy of Engineering and the National Academy of Medicine — provides science, engineering, and health policy advice to the federal government and other organizations.
Rodney Brooks
Rodney A. Brooks is the Panasonic Professor of Robotics Emeritus at MIT and the chief technical officer and co-founder of Robust AI. Previously, he was founder, chair, and CTO of Rethink Robotics and founder and CTO of iRobot Corp. He is also the former director of the MIT Artificial Intelligence Laboratory and the MIT Computer Science and Artificial Intelligence Laboratory. Brooks received degrees in pure mathematics from the Flinders University of South Australia and a PhD in computer science from Stanford University in 1981. He held research positions at Carnegie Mellon University and MIT, and a faculty position at Stanford before joining the faculty of MIT in 1984.
Brooks’ research is concerned with both the engineering of intelligent robots to operate in unstructured environments, and with understanding human intelligence through building humanoid robots. He has published papers and books in model-based computer vision, path planning, uncertainty analysis, robot assembly, active vision, autonomous robots, micro-robots, micro-actuators, planetary exploration, representation, artificial life, humanoid robots, and compiler design.
Brooks is a member of the National Academy of Engineering, a founding fellow of the Association for the Advancement of Artificial Intelligence, a fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the Association for Computing Machinery, a foreign fellow of The Australian Academy of Technological Sciences and Engineering, and a corresponding member of the Australian Academy of Science. He won the Computers and Thought Award at the 1991 International Joint Conference on Artificial Intelligence, and the IEEE Founders Medal in 2023.
Parag Pathak
Parag Pathak is the Class of 1922 Professor of Economics and a founder and director of MIT’s Blueprint Labs. He joined the MIT faculty in 2008 after completing his PhD in business economics and his master’s and bachelor’s degrees in applied mathematics, all at Harvard University.
Pathak is best known for his work on market design and education. His research has informed student placement and school choice mechanisms across the United States, including in Boston, New York City, Chicago, and Washington, and his recent work applies ideas from market design to the rationing of vital medical resources. Pathak has also authored leading studies on school quality, charter schools, and affirmative action. In urban economics, he has measured the effects of foreclosures on house prices and how the housing market reacted to the end of rent control in Cambridge, Massachusetts.
Pathak’s research on market design was recognized with the 2018 John Bates Clark Medal, given by the American Economic Association to the economist under 40 whose work is judged to have made the most significant contribution to the field. He is a fellow of the American Academy of Arts and Sciences, the Econometric Society, and the Society for the Advancement of Economic Theory. Pathak is also the founding co-director of the market design working group at the National Bureau of Economic Research, and a co-founder of Avela Education.
Scott Sheffield
Scott Sheffield, Leighton Family Professor of Mathematics, joined the MIT faculty in 2008 after a faculty appointment at the Courant Institute at New York University. He received a PhD in mathematics from Stanford University in 2003 under the supervision of Amir Dembo, and completed BA and MA degrees in mathematics from Harvard University in 1998.
Sheffield is a probability theorist, working on geometrical questions that arise in such areas as statistical physics, game theory, and metric spaces, as well as long-standing problems in percolation theory and the theory of random surfaces.
In 2017, Sheffield received the Clay Research Award with Jason Miller, “in recognition of their groundbreaking and conceptually novel work on the geometry of Gaussian free field and its application to the solution of open problems in the theory of two-dimensional random structures.” In 2023, he received the Leonard Eisenbud Prize with Jason Miller “for works on random two-dimensional geometries, and in particular on Liouville Quantum Gravity.” Later in 2023, Sheffield received the Frontiers of Science Award with Jason Miller for the paper “Liouville quantum gravity and the Brownian map I: the QLE(8/3,0) metric.” Sheffield is a fellow of the American Academy of Arts and Sciences.
Benjamin Weiss
Benjamin Weiss is the Robert R. Schrock Professor of Earth and Planetary Sciences. He studied physics at Amherst College as an undergraduate and went on to study planetary science and geology at Caltech, where he earned a master’s degree in 2001 and PhD in 2003. Weiss’ doctoral dissertation on Martian meteorite ALH 84001 revealed records of the ancient Martian climate and magnetic field, and provided evidence that some meteorites could transfer materials from Mars to Earth without heat-sterilization. Weiss became a member of the Department of Earth, Atmospheric and Planetary Sciences faculty in 2004 and is currently chair of the Program in Planetary Science.
A specialist in magnetometry, Weiss seeks to understand the formation and evolution of the Earth, terrestrial planets, and small solar system bodies through laboratory analysis, spacecraft observations, and fieldwork. He is known for key insights into the history of our solar system, including discoveries about the early nebular magnetic field, the moon’s long-lived core dynamo, and asteroids that generated core dynamos in the past. In addition to leadership roles on current, active NASA missions — as deputy principal investigator for Psyche, and co-investigator for Mars Perseverance and Europa Clipper — Weiss has also been part of science teams for the SpaceIL Beresheet, JAXA Hayabusa 2, and ESA Rosetta spacecraft.
As principal investigator of the MIT Planetary Magnetism Laboratory, Weiss works to develop high-sensitivity, high-resolution techniques in magnetic microscopy to image the magnetic fields embedded in rock samples collected from meteorites, the lunar surface, and sites around the Earth. Studying these magnetic signatures can help answer questions about the conditions of the early solar system, past climates on Earth and Mars, and factors that promote habitability.
Yukiko Yamashita
Yukiko Yamashita is a professor of biology at MIT, a core member of the Whitehead Institute for Biomedical Research, and an investigator at the Howard Hughes Medical Institute (HHMI). Yamashita earned her BS in biology in 1994 and her PhD in biophysics in 1999 from Kyoto University. From 2001 to 2006, she did postdoctoral research at Stanford University. She was appointed to the University of Michigan faculty in 2007 and was named an HHMI Investigator in 2014. She became a member of the Whitehead Institute and a professor of biology at MIT in 2020.
Yamashita studies two fundamental aspects of multicellular organisms: how cell fates are diversified via asymmetric cell division, and how genetic information is transmitted through generations via the germline, essentially in perpetuity. Studying these processes using the Drosophila male germline as a model system has led her lab into new areas of study, such as the functions of satellite DNA, often dismissed as “genomic junk,” and how it might be involved in speciation.
Yamashita is a member of the American Academy of Arts and Sciences, a fellow of the American Society for Cell Biology, and the winner of the Tsuneko and Reiji Okazaki Award in 2016. She was named a MacArthur Fellow in 2011.
New AI system uncovers hidden cell subtypes, boosts precision medicine
CellLENS reveals hidden patterns in cell behavior within tissues, offering deeper insights into cell heterogeneity — vital for advancing cancer immunotherapy.
In order to produce effective targeted therapies for cancer, scientists need to isolate the genetic and phenotypic characteristics of cancer cells, both within and across different tumors, because those differences impact how tumors respond to treatment.
Part of this work requires a deep understanding of the RNA or protein molecules each cancer cell expresses, where it is located in the tumor, and what it looks like under a microscope.
Traditionally, scientists have looked at one or more of these aspects separately, but now a new deep learning AI tool, CellLENS (Cell Local Environment and Neighborhood Scan), fuses all three domains together, using a combination of convolutional neural networks and graph neural networks to build a comprehensive digital profile for every single cell. This allows the system to group cells with similar biology — effectively separating even those that appear very similar in isolation, but behave differently depending on their surroundings.
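The paper describes the architecture only at a high level; the sketch below (in PyTorch) illustrates the general idea of fusing a per-cell image patch, an expression profile, and neighborhood context into a single embedding. The module sizes, names, and the simple mean-aggregation stand-in for a graph neural network are illustrative assumptions, not CellLENS’s actual implementation.

```python
# A minimal, hypothetical sketch of multimodal per-cell fusion,
# in the spirit of CellLENS. Not the authors' code.
import torch
import torch.nn as nn

class CellFusionSketch(nn.Module):
    def __init__(self, n_markers=40, img_channels=1, embed_dim=64):
        super().__init__()
        # CNN branch: encodes a small image patch centered on the cell.
        self.cnn = nn.Sequential(
            nn.Conv2d(img_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Expression branch: encodes per-cell RNA/protein marker levels.
        self.expr = nn.Sequential(nn.Linear(n_markers, embed_dim), nn.ReLU())
        # Neighborhood branch: one round of mean aggregation over a
        # spatial neighbor graph (a bare-bones stand-in for a GNN).
        self.gnn = nn.Linear(embed_dim, embed_dim)
        self.head = nn.Linear(3 * embed_dim, embed_dim)

    def forward(self, patches, markers, adjacency):
        # patches: (n_cells, C, H, W); markers: (n_cells, n_markers)
        # adjacency: (n_cells, n_cells) row-normalized neighbor graph
        morph = self.cnn(patches)
        expr = self.expr(markers)
        neighborhood = torch.relu(self.gnn(adjacency @ expr))
        # The fused embedding lets similar-looking cells be separated
        # by their expression or their spatial context.
        return self.head(torch.cat([morph, expr, neighborhood], dim=1))
```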
The study, published recently in Nature Immunology, details the results of a collaboration between researchers from MIT, Harvard Medical School, Yale University, Stanford University, and the University of Pennsylvania — an effort led by Bokai Zhu, an MIT postdoc and member of the Broad Institute of MIT and Harvard and the Ragon Institute of MGH, MIT, and Harvard.
Zhu explains the impact of this new tool: “Initially we would say, oh, I found a cell. This is called a T cell. Using the same dataset, by applying CellLENS, now I can say this is a T cell, and it is currently attacking a specific tumor boundary in a patient.
“I can use existing information to better define what a cell is, what is the subpopulation of that cell, what that cell is doing, and what is the potential functional readout of that cell. This method may be used to identify a new biomarker, which provides specific and detailed information about diseased cells, allowing for more targeted therapy development.”
This is a critical advance because current methodologies often miss critical molecular or contextual information — for example, immunotherapies may target cells that only exist at the boundary of a tumor, limiting efficacy. By using deep learning, the researchers can detect many different layers of information with CellLENS, including morphology and where the cell is spatially in a tissue.
When applied to samples from healthy tissue and several types of cancer, including lymphoma and liver cancer, CellLENS uncovered rare immune cell subtypes and revealed how their activity and location relate to disease processes — such as tumor infiltration or immune suppression.
These discoveries could help scientists better understand how the immune system interacts with tumors and pave the way for more precise cancer diagnostics and immunotherapies.
“I’m extremely excited by the potential of new AI tools, like CellLENS, to help us more holistically understand aberrant cellular behaviors within tissues,” says co-author Alex K. Shalek, the director of the Institute for Medical Engineering and Science (IMES), the J. W. Kieckhefer Professor in IMES and Chemistry, and an extramural member of the Koch Institute for Integrative Cancer Research at MIT, as well as an Institute member of the Broad Institute and a member of the Ragon Institute. “We can now measure a tremendous amount of information about individual cells and their tissue contexts with cutting-edge, multi-omic assays. Effectively leveraging that data to nominate new therapeutic leads is a critical step in developing improved interventions. When coupled with the right input data and careful downstream validations, such tools promise to accelerate our ability to positively impact human health and wellness.”
How an MIT professor introduced hundreds of thousands of students to neuroscience
With an emphasis on approachability, Professor Mark Bear’s “Neuroscience: Exploring the Brain” enters its fourth decade as the text of undergraduate neuroscience classes worldwide.
From the very beginning, MIT Professor Mark Bear’s philosophy for the textbook “Neuroscience: Exploring the Brain” was to provide an accessible and exciting introduction to the field while still giving undergraduates a rigorous scientific foundation. In the 30 years since its first printing in 1995, the treasured 975-page tome has gone on to become the leading introductory neuroscience textbook, reaching hundreds of thousands of students at hundreds of universities around the world.
“We strive to present the hard science without making the science hard,” says Bear, the Picower Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. The fifth edition of the textbook is out today from the publisher Jones & Bartlett Learning.
Bear says the book is conceived, written, and illustrated to instill students with the state of knowledge in the field without assuming prior sophistication in science. When he first started writing it in the late 1980s — in an effort soon joined by his co-authors and former Brown University colleagues Barry Connors and Michael Paradiso — there simply were no undergraduate neuroscience textbooks. Up until then, first as a graduate teaching assistant and then as a young professor, Bear taught Brown’s pioneering introductory neuroscience class with a spiral-bound stack of photocopied studies and other scrounged readings.
Don’t overwhelm
Because universities were only beginning to launch neuroscience classes and majors at the time, Bear recalls that it was hard to find a publisher. The demand was just too uncertain. With an unsure market, Bear says, the original publisher, Williams & Wilkins, wanted to keep costs down by printing only in black and white. But Bear and his co-authors insisted on color. Consistent with their philosophy for the book, they wanted students, even before they began reading, to be able to learn from attractive, high-quality illustrations.
“Rather than those that speak a thousand words, we wanted to create illustrations that each make a single point,” Bear says. “We don’t want to overwhelm students with a bunch of detail. If people want to know what’s in our book, just look at the pictures.”
Indeed, if the book had struck students as impenetrable and dull, Bear says, he and his co-authors would have squandered the advantage they had in presenting their subject: the inherently fascinating and exciting brain.
“Most good scientists are extremely enthusiastic about the science. It’s exciting. It’s fun. It turns them on,” Bear says. “We try to communicate the joy. We’re so lucky because the object of our affection is the brain.”
To help bring that joy and excitement across, another signature of the book throughout its 30-year history has been the way it presents the process of discovery alongside the discoveries themselves, Bear says. While it’s instructive to provide students with the experimental evidence that supports the concepts they are learning, it would bog down the text to delineate the details of every experiment. Instead, Bear, Connors, and Paradiso have chosen to highlight the process of discovery via one-page guest essays by prominent neuroscientists who share their discovery stories personally. Each edition has featured about 25 such “Path of Discovery” essays, so more than 100 scientists have participated, including several Nobel Prize winners, such as the Picower Institute’s founding director, Susumu Tonegawa.
The new edition includes Path of Discovery essays by current Picower Institute Director Li-Huei Tsai and Picower Institute colleague Emery N. Brown. Tsai recounts her discovery that sensory stimulation of 40Hz rhythms in the brain can trigger a health-promoting response among many different cell types. Brown writes about how various biological cycles and rhythms in the brain and body, such as circadian rhythms and brain waves, help organize our daily lives.
Immense impact
Jones & Bartlett reports that more than 470 colleges and universities in 48 U.S. states and the District of Columbia have used the fourth edition of the book. Various editions have also been translated into seven other languages, including Chinese, French, Portuguese, and Spanish. There are hundreds of reviews on Amazon.com with an average around 4.6 stars. One reviewer wrote about the fourth edition: “I never knew it was possible to love a textbook before!”
The reviews sometimes go beyond mere internet postings. Once, after Bear received an award in Brazil, he found himself swarmed at the podium by scores of students eager for him to sign their copies of the book. And earlier this year, when Bear needed surgery, the anesthesiologist was excited to meet him.
“The anesthesiologist was like, ‘Are you the Mark Bear who wrote the textbook?,’ and she was so excited, because she said, ‘This book changed my life,’” Bear recalls. “After I recovered, she showed up in the ICU for me to sign it. All of us authors have had this experience that there are people whose lives we’ve touched.”
While Bear is proud that so many students have benefited from the book, he also notes that teaching and textbook writing have benefited him as a scientist. They have helped him present his research more clearly, he says, and have given him a broad perspective on what’s truly important in the field.
“Experience teaching will influence the impact of your own science by making you more able to effectively communicate it,” Bear says. “And the teacher has a difficult job of surveying a field and saying, ‘I’ve got to capture the important advances and set aside the less-important stuff.’ It gives you a perspective that helps you to discriminate between more-important and less-important problems in your own research.”
Over the course of 30 years via their carefully crafted book, Bear, Connors, and Paradiso have lent that perspective to generations of students. And the next generation will start with today’s publication of the new edition.
Gift from Dick Larson establishes Distinguished Professorship in Data, Systems, and Society
Sasha Rakhlin, a professor in IDSS and brain and cognitive sciences, has been named the inaugural holder of the new professorship.
The MIT Institute for Data, Systems, and Society (IDSS) announced the creation of a new endowed chair made possible by the generosity of IDSS professor post-tenure and “MIT lifer” Richard “Dick” Larson. Effective July 1, the fund provides a full professorship for senior IDSS faculty: the Distinguished Professorship in Data, Systems, and Society.
“As a faculty member, MIT has not only accepted but embraced my several mid-career changes of direction,” says Larson. “I have called five different academic departments my home, starting with Electrical Engineering (that is what it was called in the 1960s) and now finalized with the interdepartmental, interdisciplinary IDSS — Institute for Data, Systems and Society. Those beautiful three words — data, systems, society — they represent my energy and commitment over the second half of my career. My gifted chair is an effort to keep alive those three words, with others following me doing research, teaching and mentoring centered around data, systems, society.”
Larson’s career has focused his operations research and systems expertise on a wide variety of problems, in both public and private sectors. His contributions span the fields of urban service systems (especially emergency response systems), disaster planning, pandemics, queueing, logistics, technology-enabled education, smart-energy houses, and workforce planning. His latest book, “Model Thinking for Everyday Life,” draws on decades of experience as a champion of STEM education at MIT and beyond, such as his leadership of MIT BLOSSOMS.
“Dick Larson has been making an impact at MIT for over half a century,” says IDSS Director Fotini Christia, the Ford International Professor in Political Science. “This gift extends his already considerable legacy and ensures his impact will continue to be felt for many years to come.”
Christia is pleased that IDSS and brain and cognitive science professor Alexander “Sasha” Rakhlin is the inaugural holder of the new professorship. The selection recognizes Rakhlin’s distinguished scholarly record, dedicated service to IDSS, excellence in teaching, and contributions to research in statistics and computation.
“Sasha’s analysis of neural network complexity, and his work developing tools for online prediction, are perfect examples of research which builds bridges across disciplines, and also connects different departments and units at MIT,” says Michale Fee, the Glen V. and Phyllis F. Dorflinger Professor of Neuroscience, and head of the Department of Brain and Cognitive Sciences. “It’s wonderful to see Sasha’s contributions recognized in this way, and I’m grateful to Dick Larson for supporting this vision.”
Rakhlin’s research is in machine learning, with an emphasis on statistics and computation. He is interested in formalizing the process of learning, in analyzing learning models, and in deriving and implementing emerging learning methods. A significant thrust of his research is in developing theoretical and algorithmic tools for online prediction, a learning framework where data arrives in a sequential fashion.
“I am honored to be the inaugural holder of the Distinguished Professorship in Data, Systems, and Society,” says Rakhlin. “Professor Larson’s commitment to education and service to MIT both serve as models to follow.”
MIT chemists boost the efficiency of a key enzyme in photosynthesis
The enzyme, known as rubisco, helps plants and photosynthetic bacteria incorporate carbon dioxide into sugars.
During photosynthesis, an enzyme called rubisco catalyzes a key reaction — the incorporation of carbon dioxide into organic compounds to create sugars. However, rubisco, which is believed to be the most abundant enzyme on Earth, is very inefficient compared to the other enzymes involved in photosynthesis.
MIT chemists have now shown that they can greatly enhance a version of rubisco found in bacteria from a low-oxygen environment. Using a process known as directed evolution, they identified mutations that could boost rubisco’s catalytic efficiency by up to 25 percent.
The researchers now plan to apply their technique to forms of rubisco that could be used in plants to help boost their rates of photosynthesis, which could potentially improve crop yields.
“This is, I think, a compelling demonstration of successful improvement of a rubisco’s enzymatic properties, holding out a lot of hope for engineering other forms of rubisco,” says Matthew Shoulders, the Class of 1942 Professor of Chemistry at MIT.
Shoulders and Robert Wilson, a research scientist in the Department of Chemistry, are the senior authors of the new study, which appears this week in the Proceedings of the National Academy of Sciences. MIT graduate student Julie McDonald is the paper’s lead author.
Evolution of efficiency
When plants or photosynthetic bacteria absorb energy from the sun, they first convert it into energy-storing molecules such as ATP. In the next phase of photosynthesis, cells use that energy to transform a molecule known as ribulose bisphosphate into glucose, which requires several additional reactions. Rubisco catalyzes the first of those reactions, known as carboxylation. During that reaction, carbon from CO2 is added to ribulose bisphosphate.
Compared to the other enzymes involved in photosynthesis, rubisco is very slow, catalyzing only one to 10 reactions per second. Rubisco can also interact with oxygen, leading to a competing reaction that incorporates oxygen instead of carbon — a process that wastes some of the energy absorbed from sunlight.
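The article stops short of quantifying this trade-off, but the enzymology literature standardly summarizes it with rubisco’s specificity factor, the ratio of the enzyme’s catalytic efficiency for CO2 to its catalytic efficiency for O2 (the notation below is standard Michaelis-Menten notation, not taken from the paper):

$$S_{\mathrm{C/O}} \;=\; \frac{k_{\mathrm{cat}}^{\mathrm{CO_2}}/K_{M}^{\mathrm{CO_2}}}{k_{\mathrm{cat}}^{\mathrm{O_2}}/K_{M}^{\mathrm{O_2}}}$$

Mutations that raise this ratio tip the competition toward carboxylation, which is the kind of improvement the study reports.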
“For protein engineers, that’s a really attractive set of problems because those traits seem like things that you could hopefully make better by making changes to the enzyme’s amino acid sequence,” McDonald says.
Previous research has led to improvement in rubisco’s stability and solubility, which resulted in small gains in enzyme efficiency. Most of those studies used directed evolution — a technique in which a naturally occurring protein is randomly mutated and then screened for the emergence of new, desirable features.
This process is usually done using error-prone PCR, a technique that first generates mutations in vitro (outside of the cell), typically introducing only one or two mutations in the target gene. In past studies on rubisco, this library of mutations was then introduced into bacteria whose growth rate depends on rubisco activity. Limitations in error-prone PCR and in the efficiency of introducing new genes restrict the total number of mutations that can be generated and screened using this approach. Manual mutagenesis and selection steps also add more time to the process over multiple rounds of evolution.
The MIT team instead used a newer mutagenesis technique that the Shoulders Lab previously developed, called MutaT7. This technique allows the researchers to perform both mutagenesis and screening in living cells, which dramatically speeds up the process. Their technique also enables them to mutate the target gene at a higher rate.
“Our continuous directed evolution technique allows you to look at a lot more mutations in the enzyme than has been done in the past,” McDonald says.
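As a rough illustration of the selection logic (not the MutaT7 chemistry itself), the toy simulation below couples random in-cell mutation to growth-rate-proportional selection, the same principle that lets rubisco-dependent bacterial growth enrich beneficial variants. All numbers are invented.

```python
# Toy model of continuous directed evolution: cells carrying more
# active enzyme variants grow faster, so their mutations enrich.
import random

random.seed(0)

def mutate(activity, rate=0.3):
    """Each generation, a variant may gain a mutation that nudges
    its activity up or down by a small random amount."""
    if random.random() < rate:
        activity += random.gauss(0.0, 0.05)
    return max(activity, 0.0)

# Start with a clonal population of a wild-type-like enzyme.
population = [1.0] * 1000

for generation in range(60):
    # Mutagenesis happens inside the "cells" every generation.
    population = [mutate(a) for a in population]
    # Selection: the next generation is sampled with probability
    # proportional to activity (growth tied to enzyme function).
    total = sum(population)
    weights = [a / total for a in population]
    population = random.choices(population, weights=weights, k=1000)

print(f"mean activity after selection: {sum(population)/len(population):.2f}")
```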
Better rubisco
For this study, the researchers began with a version of rubisco isolated from a family of semi-anaerobic bacteria known as Gallionellaceae, which is one of the fastest forms of rubisco found in nature. During the directed evolution experiments, which were conducted in E. coli, the researchers kept the microbes in an environment with atmospheric levels of oxygen, creating evolutionary pressure to adapt to oxygen.
After six rounds of directed evolution, the researchers identified three different mutations that improved the rubisco’s resistance to oxygen. Each of these mutations is located near the enzyme’s active site (where it performs carboxylation or oxygenation). The researchers believe that these mutations improve the enzyme’s ability to preferentially interact with carbon dioxide over oxygen, which leads to an overall increase in carboxylation efficiency.
“The underlying question here is: Can you alter and improve the kinetic properties of rubisco to operate better in environments where you want it to operate better?” Shoulders says. “What changed through the directed evolution process was that rubisco began to like to react with oxygen less. That allows this rubisco to function well in an oxygen-rich environment, where normally it would constantly get distracted and react with oxygen, which you don’t want it to do.”
In ongoing work, the researchers are applying this approach to other forms of rubisco, including rubisco from plants. Plants are believed to lose about 30 percent of the energy from the sunlight they absorb through a process called photorespiration, which occurs when rubisco acts on oxygen instead of carbon dioxide.
“This really opens the door to a lot of exciting new research, and it’s a step beyond the types of engineering that have dominated rubisco engineering in the past,” Wilson says. “There are definite benefits to agricultural productivity that could be leveraged through a better rubisco.”
The research was funded, in part, by the National Science Foundation, the National Institutes of Health, an Abdul Latif Jameel Water and Food Systems Lab Grand Challenge grant, and a Martin Family Society Fellowship for Sustainability.
Study: Babies’ poor vision may help organize visual brain pathways
MIT researchers found that low-quality visual input early in life may contribute to the development of key pathways in the brain’s visual system.
Incoming information from the retina is channeled into two pathways in the brain’s visual system: one that’s responsible for processing color and fine spatial detail, and another that’s involved in spatial localization and detecting high temporal frequencies. A new study from MIT provides an account of how these two pathways may be shaped by developmental factors.
Newborns typically have poor visual acuity and poor color vision because their retinal cone cells are not well-developed at birth. This means that early in life, they are seeing blurry, color-reduced imagery. The MIT team proposes that such blurry, color-limited vision may result in some brain cells specializing in low spatial frequencies and low color tuning, corresponding to the so-called magnocellular system. Later, with improved vision, cells may tune to finer details and richer color, consistent with the other pathway, known as the parvocellular system.
To test their hypothesis, the researchers trained computational models of vision on a trajectory of input similar to what human babies receive early in life — low-quality images early on, followed by full-color, sharper images later. They found that these models developed processing units with receptive fields exhibiting some similarity to the division of magnocellular and parvocellular pathways in the human visual system. Vision models trained on only high-quality images did not develop such distinct characteristics.
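A developmental curriculum of this kind can be approximated with standard image transforms. The sketch below uses torchvision; the blur strength, resolutions, and halfway switch point are illustrative assumptions rather than parameters from the paper.

```python
# A minimal sketch of a "biomimetic" training schedule: blurry
# grayscale input for the first half of training, sharp full-color
# input for the second half.
import torchvision.transforms as T

# Early phase: mimic newborn vision with low acuity and no color.
infant_like = T.Compose([
    T.Resize(64),                        # coarse effective resolution
    T.Grayscale(num_output_channels=3),  # remove color information
    T.GaussianBlur(kernel_size=9, sigma=4.0),
    T.Resize(224),                       # upsample back to model input size
    T.ToTensor(),
])

# Late phase: mature vision with full detail and color.
mature = T.Compose([T.Resize(224), T.ToTensor()])

def transform_for_epoch(epoch, total_epochs=100):
    """Switch the input statistics halfway through training."""
    return infant_like if epoch < total_epochs // 2 else mature
```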
“The findings potentially suggest a mechanistic account of the emergence of the parvo/magno distinction, which is one of the key organizing principles of the visual pathway in the mammalian brain,” says Pawan Sinha, an MIT professor of brain and cognitive sciences and the senior author of the study.
MIT postdocs Marin Vogelsang and Lukas Vogelsang are the lead authors of the study, which appears today in the journal Communications Biology. Sidney Diamond, an MIT research affiliate, and Gordon Pipa, a professor of neuroinformatics at the University of Osnabrueck, are also authors of the paper.
Sensory input
The idea that low-quality visual input might be beneficial for development grew out of studies of children who were born blind but later had their sight restored. An effort from Sinha’s laboratory, Project Prakash, has screened and treated thousands of children in India, where reversible forms of vision loss such as cataracts are relatively common. After their sight is restored, many of these children volunteer to participate in studies in which Sinha and his colleagues track their visual development.
In one of these studies, the researchers found that children who had cataracts removed exhibited a marked drop in object-recognition performance when the children were presented with black and white images, compared to colored ones. Those findings led the researchers to hypothesize that the reduced color input characteristic of early typical development, far from being a hindrance, allows the brain to learn to recognize objects even in images that have impoverished or shifted colors.
“Denying access to rich color at the outset seems to be a powerful strategy to build in resilience to color changes and make the system more robust against color loss in images,” Sinha says.
In that study, the researchers also found that when computational models of vision were initially trained on grayscale images, followed by color images, their ability to recognize objects was more robust than that of models trained only on color images. Similarly, another study from the lab found that models performed better when they were trained first on blurry images, followed by sharper images.
To build on those findings, the MIT team wanted to explore what might be the consequences of both of those features — color and visual acuity — being limited at the outset of development. They hypothesized that these limitations might contribute to the development of the magnocellular and parvocellular pathways.
In addition to being highly attuned to color, cells in the parvocellular pathway have small receptive fields, meaning that they receive input from more compact clusters of retinal ganglion cells. This helps them to process fine detail. Cells in the magnocellular pathway pool information across larger areas, allowing them to process more global spatial information.
To test their hypothesis that developmental progressions could contribute to the magno and parvo cell selectivities, the researchers trained models on two different sets of images. One model was presented with a standard dataset of images that are used to train models to categorize objects. The other dataset was designed to roughly mimic the input that the human visual system receives from birth. This “biomimetic” data consists of low-resolution, grayscale images in the first half of the training, followed by high-resolution, colorful images in the second half.
After the models were trained, the researchers analyzed the models’ processing units — nodes within the network that bear some resemblance to the clusters of cells that process visual information in the brain. They found that the models trained on the biomimetic data developed a distinct subset of units that are jointly responsive to low-color and low-spatial-frequency inputs, similar to the magnocellular pathway. Additionally, these biomimetic models exhibited groups of more heterogeneous parvocellular-like units tuned predominantly to higher spatial frequencies or richer color signals. Such a distinction did not emerge in the models trained on full-color, high-resolution images from the start.
“This provides some support for the idea that the ‘correlation’ we see in the biological system could be a consequence of the types of inputs that are available at the same time in normal development,” Lukas Vogelsang says.
Object recognition
The researchers also performed additional tests to reveal what strategies the differently trained models were using for object recognition tasks. In one, they asked the models to categorize images of objects where the shape and texture did not match — for example, an animal with the shape of a cat but the texture of an elephant.
This is a technique several researchers in the field have employed to determine which image attributes a model is using to categorize objects: the overall shape or the fine-grained textures. The MIT team found that models trained on biomimetic input were markedly more likely to use an object’s shape to make those decisions, just as humans usually do. Moreover, when the researchers systematically removed the magnocellular-like units from the models, the models quickly lost their tendency to use shape to make categorizations.
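In cue-conflict studies of this kind, shape bias is typically scored as the fraction of decided trials the model resolves in favor of shape. A minimal sketch, with hypothetical labels:

```python
# Compute a shape-bias score from predictions on cue-conflict images,
# i.e., images whose shape label and texture label disagree.
def shape_bias(predictions, shape_labels, texture_labels):
    shape_hits = texture_hits = 0
    for pred, shape, texture in zip(predictions, shape_labels, texture_labels):
        if pred == shape:
            shape_hits += 1
        elif pred == texture:
            texture_hits += 1
        # Predictions matching neither cue are ignored.
    decided = shape_hits + texture_hits
    return shape_hits / decided if decided else float("nan")

# e.g., a cat-shaped, elephant-textured image counts toward shape bias
# only if the model answers "cat".
print(shape_bias(["cat", "elephant"], ["cat", "dog"], ["elephant", "elephant"]))
```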
In another set of experiments, the researchers trained the models on videos instead of images, which introduces a temporal dimension. In addition to low spatial resolution and color sensitivity, the magnocellular pathway responds to high temporal frequencies, allowing it to quickly detect changes in the position of an object. When models were trained on biomimetic video input, the units most tuned to high temporal frequencies were indeed the ones that also exhibited magnocellular-like properties in the spatial domain.
Overall, the results support the idea that low-quality sensory input early in life may contribute to the organization of sensory processing pathways of the brain, the researchers say. The findings do not rule out innate specification of the magno and parvo pathways, but provide a proof of principle that visual experience over the course of development could also play a role.
“The general theme that seems to be emerging is that the developmental progression that we go through is very carefully structured in order to give us certain kinds of perceptual proficiencies, and it may also have consequences in terms of the very organization of the brain,” Sinha says.
The research was funded by the National Institutes of Health, the Simons Center for the Social Brain, the Japan Society for the Promotion of Science, and the Yamada Science Foundation.
New method combines imaging and sequencing to study gene function in intact tissue
The approach collects multiple types of imaging and sequencing data from the same cells, leading to new insights into mouse liver biology.
Imagine that you want to know the plot of a movie, but you only have access to either the visuals or the sound. With visuals alone, you’ll miss all the dialogue. With sound alone, you will miss the action. Understanding our biology can be similar. Measuring one kind of data — such as which genes are being expressed — can be informative, but it only captures one facet of a multifaceted story. For many biological processes and disease mechanisms, the entire “plot” can’t be fully understood without combining data types.
However, capturing both the “visuals and sound” of biological data, such as gene expression and cell structure data, from the same cells requires researchers to develop new approaches. They also have to make sure that the data they capture accurately reflects what happens in living organisms, including how cells interact with each other and their environments.
Whitehead Institute for Biomedical Research and Harvard University researchers have taken on these challenges and developed Perturb-Multimodal (Perturb-Multi), a powerful new approach that simultaneously measures how genetic changes such as turning off individual genes affect both gene expression and cell structure in intact liver tissue. The method, described in Cell on June 12, aims to accelerate discovery of how genes control organ function and disease.
The research team, led by Whitehead Institute Member Jonathan Weissman and then-graduate student in his lab Reuben Saunders, along with Xiaowei Zhuang, the David B. Arnold Professor of Science at Harvard University, and then-postdoc in her lab Will Allen, created a system that can test hundreds of different genetic modifications within a single mouse liver while capturing multiple types of data from the same cells.
“Understanding how our organs work requires looking at many different aspects of cell biology at once,” Saunders says. “With Perturb-Multi, we can see how turning off specific genes changes not just what other genes are active, but also how proteins are distributed within cells, how cellular structures are organized, and where cells are located in the tissue. It’s like having multiple specialized microscopes all focused on the same experiment.”
“This approach accelerates discovery by both allowing us to test the functions of many different genes at once, and then for each gene, allowing us to measure many different functional outputs or cell properties at once — and we do that in intact tissue from animals,” says Zhuang, who is also a Howard Hughes Medical Institute (HHMI) investigator.
A more efficient approach to genetic studies
Traditional genetic studies in mice often turn off one gene and then observe what changes in that gene’s absence to learn about what the gene does. The researchers designed their approach to turn off hundreds of different genes across a single liver, while still only turning off one gene per cell — using what is known as a mosaic approach. This allowed them to study the roles of hundreds of individual genes at once in a single individual. The researchers then collected diverse types of data from cells across the same liver to get a full picture of the consequences of turning off the genes.
“Each cell serves as its own experiment, and because all the cells are in the same animal, we eliminate the variability that comes from comparing different mice,” Saunders says. “Every cell experiences the same physiological conditions, diet, and environment, making our comparisons much more precise.”
“The challenge we faced was that tissues, to perform their functions, rely on thousands of genes, expressed in many different cells, working together. Each gene, in turn, can control many aspects of a cell’s function. Testing these hundreds of genes in mice using current methods would be extremely slow and expensive — near impossible, in practice,” Allen says.
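Conceptually, the downstream analysis reduces to grouping cells by their single perturbation and comparing imaging and sequencing readouts against in-tissue controls. The toy example below, with invented column names and values rather than the study’s data, illustrates that logic:

```python
# Mosaic-style analysis sketch: each cell carries exactly one
# perturbation, so cells can be grouped by perturbed gene and their
# multimodal readouts compared to control cells from the same tissue.
import pandas as pd

cells = pd.DataFrame({
    "perturbed_gene":   ["control", "geneA", "geneA", "geneB", "control"],
    "fat_droplet_area": [0.10, 0.45, 0.52, 0.41, 0.12],  # imaging readout
    "stress_gene_expr": [1.0, 0.9, 1.1, 3.2, 1.1],       # sequencing readout
})

baseline = cells[cells.perturbed_gene == "control"].mean(numeric_only=True)
effects = (
    cells[cells.perturbed_gene != "control"]
    .groupby("perturbed_gene")
    .mean(numeric_only=True)
    - baseline
)
# A similar imaging phenotype (fat accumulation) can come with very
# different expression signatures, as in the study's fat-droplet result.
print(effects)
```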
Revealing new biology through combined measurements
The team applied Perturb-Multi to study genetic controls of liver physiology and function. Their study led to discoveries in three important aspects of liver biology: fat accumulation in liver cells — a precursor to liver disease; stress responses; and hepatocyte zonation (how liver cells specialize, assuming different traits and functions, based on their location within the liver).
One striking finding emerged from studying genes that, when disrupted, cause fat accumulation in liver cells. The imaging data revealed that four different genes all led to similar fat droplet accumulation, but the sequencing data showed they did so through three completely different mechanisms.
“Without combining imaging and sequencing, we would have missed this complexity entirely,” Saunders says. “The imaging told us which genes affect fat accumulation, while the sequencing revealed whether this was due to increased fat production, cellular stress, or other pathways. This kind of mechanistic insight could be crucial for developing targeted therapies for fatty liver disease.”
The researchers also discovered new regulators of liver cell zonation. Unexpectedly, the newly discovered regulators include genes involved in modifying the extracellular matrix — the scaffolding between cells. “We found that cells can change their specialized functions without physically moving to a different zone,” Saunders says. “This suggests that liver cell identity is more flexible than previously thought.”
Technical innovation enables new science
Developing Perturb-Multi required solving several technical challenges. The team created new methods for preserving the content of interest in cells — RNA and proteins — during tissue processing, for collecting many types of imaging data and single-cell gene expression data from tissue samples that have been fixed with a preservative, and for integrating multiple types of data from the same cells.
“Overcoming the inherent complexity of biology in living animals required developing new tools that bridge multiple disciplines — including, in this case, genomics, imaging, and AI,” Allen says.
The two components of Perturb-Multi — the imaging and sequencing assays — together, applied to the same tissue, provide insights that are unattainable through either assay alone.
“Each component had to work perfectly while not interfering with the others,” says Weissman, who is also a professor of biology at MIT and an HHMI investigator. “The technical development took considerable effort, but the payoff is a system that can reveal biology we simply couldn’t see before.”
Expanding to new organs and other contexts
The researchers plan to expand Perturb-Multi to other organs, including the brain, and to study how genetic changes affect organ function under different conditions like disease states or dietary changes.
“We’re also excited about using the data we generate to train machine learning models,” adds Saunders. “With enough examples of how genetic changes affect cells, we could eventually predict the effects of mutations without having to test them experimentally — a ‘virtual cell’ that could accelerate both research and drug development.”
“Perturbation data are critical for training such AI models and the paucity of existing perturbation data represents a major hindrance in such ‘virtual cell’ efforts,” Zhuang says. “We hope Perturb-Multi will fill this gap by accelerating the collection of perturbation data.”
The approach is designed to be scalable, with the potential for genome-wide studies that test thousands of genes simultaneously. As sequencing and imaging technologies continue to improve, the researchers anticipate that Perturb-Multi will become even more powerful and accessible to the broader research community.
“Our goal is to keep scaling up. We plan to do genome-wide perturbations, study different physiological conditions, and look at different organs,” says Weissman. “That we can now collect so many types of data from so many cells, at speed, is going to be critical for building AI models like virtual cells, and I think it’s going to help us answer previously unsolvable questions about health and disease.”
Inspiring student growth
Professors Xiao Wang and Rodrigo Verdi are honored as “Committed to Caring.”
Professors Xiao Wang and Rodrigo Verdi, both members of the 2023-25 Committed to Caring cohort, are aiding in the development of extraordinary researchers and contributing to a collaborative culture.
“Professor Xiao Wang's caring efforts have a profound impact on the lives of her students,” one of her advisees wrote.
“Rodrigo's dedication to mentoring and his unwavering support have positively impacted every student in our group,” another student praised.
The Committed to Caring program recognizes MIT faculty who go above and beyond in their mentorship of graduate students.
Xiao Wang: Enriching, stimulating, and empowering students
Xiao Wang is a core institute member of the Broad Institute of MIT and Harvard and an associate professor in the Department of Chemistry at MIT. She started her lab in 2019 to develop and apply new chemical, biophysical, and genomic tools to better understand tissue function and dysfunction at the molecular level.
Wang goes above and beyond to create a nurturing environment that fosters growth and supports her students' personal and academic development. She makes it a priority to ensure an intellectually stimulating environment, taking the time to discuss research interests, academic goals, and personal aspirations on a weekly basis.
In their nominations, her students emphasized that Wang understands the importance of mentorship, patiently explaining fundamental concepts, sharing insights from her own groundbreaking work, and providing her students with key scientific papers and resources to deepen their understanding of the field.
“Professor Wang encouraged me to think critically, ask challenging questions, and explore innovative approaches to further my research,” one of her students commented.
Beyond the lab, Wang nurtures a sense of community among her research team. Her students highly value her regular lab meetings, where “fellow researchers presented … findings, exchanged ideas, and received constructive feedback.”
These meetings foster collaboration, enhance communication skills, and create a supportive environment where all lab members feel empowered to share their discoveries and insights.
Wang is a dedicated and compassionate educator, known for her unwavering commitment to the well-being and success of her students. Her advisees not only excel academically but also develop resilience, confidence, and a sense of belonging.
A different student reflected that although they came from an organic chemistry background with few skills related to the chemical biology field, Wang recognized their enthusiasm and potential. She went out of her way to make sure they could have a smooth transition. “It is because of all her training and help that I came from knowing nothing about the field to being able to confidently call myself a chemical biologist,” the student said.
Her advisees report that Wang encourages them to present their work at conferences, workshops, and seminars. This helps boost the students’ confidence and establish connections within the scientific community.
“Her genuine care and dedication make her a cherished mentor and a source of inspiration for all who have the privilege to learn from her,” one of her mentees remarked.
Rodrigo Verdi: Committed and collaborative
Professor Rodrigo Verdi is the deputy dean of degree programs and teaching and learning at the MIT Sloan School of Management. Verdi’s research provides insights into the role of accounting information in corporate finance decisions and in capital markets behavior.
Professor Verdi has been active in the research journeys of most Sloan doctoral students, making a point of assisting them even when he is not their direct advisor. One student states that “although Rodrigo is not my primary advisor, he still goes above and beyond to provide feedback and assistance.”
Verdi believes that supporting “an appetite for experimentation, the ability to handle failure, and managing the stress along the way” is necessary for especially innovative research.
Another student recounts that they “cannot think of a single recent graduate since … [they] started the PhD program that did not have Rodrigo on their committee.” This demonstrates how much students value his guidance, and how much he cares about their success.
Since his arrival at MIT, he has shown a strong commitment to mentoring students. Despite his many responsibilities as deputy dean, Rodrigo remains highly accessible to students and eagerly engages with them.
Specifically, Verdi has interacted with more than 90 percent of recent graduates over the past 10 years, contributing significantly to the department’s strong track record in job placements. He has served on the dissertation committee for 18 students in the last 15 years, which represents nearly all of the students in the department.
A student remarked that “Rodrigo has been an exceptional advisor during my job market period, which is known for its high levels of stress.” He offered continuous encouragement and support, making himself available for discussions whenever the student faced challenges.
After each job market interview, Verdi and the student would debrief and discuss areas for improvement. His insights into the academic system, the significance of social skills and networking, and his valuable advice helped the student successfully get a faculty position.
Rodrigo’s mantra is, “people won't care how much you know until they know how much you care,” and his relationships with his students support this maxim.
Verdi has made a lasting impact on the culture of the accounting group and plays a key role in the collaborative environment at the Sloan School. One of his students praised, “the collaborative culture is impressive: I’d call it a family, where faculty and students are very close to each other.” They described that they “share the same office space, have lunches together, and whenever students want feedback, the faculty is willing to help.”
Verdi has sharp research insights, and always wants to help, even when he is swamped with administrative affairs. He makes himself accessible to students, often staying after hours with his door open.
Another mentee said that “he has been organizing weekly PhD lunch seminars for years, online brown-bags among current and previous MIT accounting members during the pandemic, and more recently the annual MIT accounting alumni conference.” Verdi also takes students out for dinner or coffee, caring about how they are doing outside of academics. The student commended, “I feel lucky that Rodrigo is here.”
Accelerating scientific discovery with AI
FutureHouse, co-founded by Sam Rodriques PhD ’19, has developed AI agents to automate key steps on the path toward scientific progress.
Several researchers have taken a broad view of scientific progress over the last 50 years and come to the same troubling conclusion: Scientific productivity is declining. It’s taking more time, more funding, and larger teams to make discoveries that once came faster and cheaper. Although a variety of explanations have been offered for the slowdown, one is that, as research becomes more complex and specialized, scientists must spend more time reviewing publications, designing sophisticated experiments, and analyzing data.
Now, the philanthropically funded research lab FutureHouse is seeking to accelerate scientific research with an AI platform designed to automate many of the critical steps on the path toward scientific progress. The platform is made up of a series of AI agents specialized for tasks including information retrieval, information synthesis, chemical synthesis design, and data analysis.
FutureHouse founders Sam Rodriques PhD ’19 and Andrew White believe that by giving every scientist access to their AI agents, they can break through the biggest bottlenecks in science and help solve some of humanity’s most pressing problems.
“Natural language is the real language of science,” Rodriques says. “Other people are building foundation models for biology, where machine learning models speak the language of DNA or proteins, and that’s powerful. But discoveries aren’t represented in DNA or proteins. The only way we know how to represent discoveries, hypothesize, and reason is with natural language.”
Finding big problems
For his PhD research at MIT, Rodriques sought to understand the inner workings of the brain in the lab of Professor Ed Boyden.
“The entire idea behind FutureHouse was inspired by this impression I got during my PhD at MIT that even if we had all the information we needed to know about how the brain works, we wouldn’t know it because nobody has time to read all the literature,” Rodriques explains. “Even if they could read it all, they wouldn’t be able to assemble it into a comprehensive theory. That was a foundational piece of the FutureHouse puzzle.”
Rodriques wrote about the need for new kinds of large research collaborations as the last chapter of his PhD thesis in 2019, and though he spent some time running a lab at the Francis Crick Institute in London after graduation, he found himself gravitating toward broad problems in science that no single lab could take on.
“I was interested in how to automate or scale up science and what kinds of new organizational structures or technologies would unlock higher scientific productivity,” Rodriques says.
When ChatGPT, then powered by GPT-3.5, was released in November 2022, Rodriques saw a path toward more powerful models that could generate scientific insights on their own. Around that time, he also met Andrew White, a computational chemist at the University of Rochester who had been granted early access to GPT-4. White had built the first large language agent for science, and the researchers joined forces to start FutureHouse.
The founders started out wanting to create distinct AI tools for tasks like literature searches, data analysis, and hypothesis generation. They began with data collection, eventually releasing PaperQA in September 2024, which Rodriques calls the best AI agent in the world for retrieving and summarizing information in scientific literature. Around the same time, they released Has Anyone, a tool that lets scientists determine if anyone has conducted specific experiments or explored specific hypotheses.
“We were just sitting around asking, ‘What are the kinds of questions that we as scientists ask all the time?’” Rodriques recalls.
When FutureHouse officially launched its platform on May 1 of this year, it rebranded some of its tools. PaperQA is now called Crow, and Has Anyone is now called Owl. Falcon is an agent capable of compiling and reviewing more sources than Crow. Another new agent, Phoenix, can use specialized tools to help researchers plan chemistry experiments. And Finch is an agent designed to automate data-driven discovery in biology.
On May 20, the company demonstrated a multi-agent scientific discovery workflow to automate key steps of the scientific process and identify a new therapeutic candidate for dry age-related macular degeneration (dAMD), a leading cause of irreversible blindness worldwide. In June, FutureHouse released ether0, a 24-billion-parameter open-weights reasoning model for chemistry.
“You really have to think of these agents as part of a larger system,” Rodriques says. “Soon, the literature search agents will be integrated with the data analysis agent, the hypothesis generation agent, an experiment planning agent, and they will all be engineered to work together seamlessly.”
Agents for everyone
Today anyone can access FutureHouse’s agents at platform.futurehouse.org. The company’s platform launch generated excitement in the industry, and stories have started to come in about scientists using the agents to accelerate research.
One of FutureHouse’s scientists used the agents to identify a gene that could be associated with polycystic ovary syndrome and come up with a new treatment hypothesis for the disease. Another researcher at the Lawrence Berkeley National Laboratory used Crow to create an AI assistant capable of searching the PubMed research database for information related to Alzheimer’s disease.
Scientists at another research institution have used the agents to conduct systematic reviews of genes relevant to Parkinson’s disease, finding that FutureHouse’s agents performed better than general-purpose agents.
Rodriques says scientists who treat the agents less like Google Scholar and more like a smart scientific assistant get the most out of the platform.
“People who are looking for speculation tend to get more mileage out of ChatGPT’s o3 deep research, while people who are looking for really faithful literature reviews tend to get more out of our agents,” Rodriques explains.
Rodriques also thinks FutureHouse will soon get to a point where its agents can use the raw data from research papers to test the reproducibility of their results and verify their conclusions.
In the longer run, to keep scientific progress marching forward, Rodriques says FutureHouse is working on embedding its agents with tacit knowledge to be able to perform more sophisticated analyses while also giving the agents the ability to use computational tools to explore hypotheses.
“There have been so many advances around foundation models for science and around language models for proteins and DNA, that we now need to give our agents access to those models and all of the other tools people commonly use to do science,” Rodriques says. “Building the infrastructure to allow agents to use more specialized tools for science is going to be critical.”
MIT and Mass General Brigham launch joint seed program to accelerate innovations in health
The MIT-MGB Seed Program, launched with support from Analog Devices Inc., will fund joint research projects that advance technology and clinical research.
Leveraging the strengths of two world-class research institutions, MIT and Mass General Brigham (MGB) recently celebrated the launch of the MIT-MGB Seed Program. The new initiative, which is supported by Analog Devices Inc. (ADI), will fund joint research projects led by researchers at MIT and Mass General Brigham. These collaborative projects will advance research in human health, with the goal of developing next-generation therapies, diagnostics, and digital tools that can improve lives at scale.
The program represents a unique opportunity to dramatically accelerate innovations that address some of the most urgent challenges in human health. By supporting interdisciplinary teams from MIT and Mass General Brigham, including both researchers and clinicians, the seed program will foster groundbreaking work that brings together expertise in artificial intelligence, machine learning, and measurement and sensing technologies with pioneering clinical research and patient care.
“The power of this program is that it combines MIT’s strength in science, engineering, and innovation with Mass General Brigham’s world-class scientific and clinical research. With the support and incentive to work together, researchers and clinicians will have the freedom to tackle compelling problems and find novel ways to overcome them to achieve transformative changes in patient care,” says Sally Kornbluth, president of MIT.
“The MIT-MGB Seed Program will enable cross-disciplinary collaboration to advance transformative research and breakthrough science. By combining the collective strengths and expertise of our great institutions, we can transform medical care and drive innovation and discovery with speed,” says Anne Klibanski, president and CEO of Mass General Brigham.
The initiative is funded by a gift from ADI. Over the next three years, the ADI Fund for Health and Life Sciences will support approximately six joint projects annually, with funding split between the two institutions.
“The converging domains of biology, medicine, and computing promise a new era of health-care efficacy, efficiency, and access. ADI has enjoyed a long and fruitful history of collaboration with MIT and Mass General Brigham, and we are excited by this new initiative’s potential to transform the future of patient care,” adds Vincent Roche, CEO and chair of the board of directors at ADI.
In addition to funding, teams selected for the program will have access to entrepreneurial workshops, including some hosted by The Engine — an MIT-built venture firm focused on tough tech. These sessions will connect researchers with company founders, investors, and industry leaders, helping them chart a path from breakthrough discoveries in the lab to real-world impact.
The program will launch an open call for proposals to researchers at MIT and Mass General Brigham. The first cohort of funded projects is expected to launch in fall 2025. Awardees will be selected by a joint review committee composed of MIT and Mass General Brigham experts.
According to MIT’s faculty lead for the MIT-MGB Seed Program, Alex K. Shalek, building collaborative research teams with leaders from both institutions could help fill critical gaps that often impede innovation in health and life sciences. Shalek also serves as director of the Institute for Medical Engineering & Science (IMES), the J. W. Kieckhefer Professor in IMES and Chemistry, and an extramural member of the Koch Institute for Integrative Cancer Research.
“Clinicians often see where current interventions fall short, but may lack the scientific tools or engineering expertise needed to develop new ones. Conversely, MIT researchers may not fully grasp these clinical challenges or have access to the right patient data and samples,” explains Shalek, who is also a member of the Ragon Institute of Mass General Brigham, MIT, and Harvard. “By supporting bilateral collaborations and building a community across disciplines, this program is poised to drive critical advances in diagnostics, therapeutics, and AI-driven health applications.”
Emery Brown, a practicing anesthesiologist at Massachusetts General Hospital, will serve alongside Shalek as Mass General Brigham’s faculty lead for the program.
“The MIT-MGB Seed Program creates a perfect storm. The program will provide an opportunity for MIT faculty to bring novel science and engineering to attack and solve important clinical problems,” adds Brown, who is also the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT. “The pursuit of solutions to important and challenging clinical problems by Mass General Brigham physicians and scientists will no doubt spur MIT scientists and engineers to develop new technologies, or find novel applications of existing technologies.”
The MIT-MGB Seed Program is a flagship initiative in the MIT Health and Life Sciences Collaborative (MIT HEALS). It reflects MIT HEALS’ core mission to establish MIT as a central hub for health and life sciences innovation and translation, and to leverage connections with other world-class research institutions in the Boston area.
“This program exemplifies the power of interdisciplinary research,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of engineering, and head of MIT HEALS. “It creates a critical bridge between clinical practice and technological innovation — two areas that must be deeply connected to advance real-world solutions.”
The program’s launch was celebrated at a special event at MIT’s Samberg Conference Center on March 31.
Using generative AI to help robots jump higher and land safely
MIT CSAIL researchers combined GenAI and a physics simulation engine to refine robot designs. The result: a machine that out-jumped a robot designed by humans.
Diffusion models like OpenAI’s DALL-E are becoming increasingly useful in helping brainstorm new designs. Humans can prompt these systems to generate an image, create a video, or refine a blueprint, and the systems come back with ideas the users hadn’t considered before.
But did you know that generative artificial intelligence (GenAI) models are also making headway in creating working robots? Recent diffusion-based approaches have generated structures and the systems that control them from scratch. With or without a user’s input, these models can make new designs and then evaluate them in simulation before they’re fabricated.
A new approach from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) applies this generative know-how toward improving humans’ robotic designs. Users can draft a 3D model of a robot and specify which parts they’d like a diffusion model to modify, providing the dimensions of those parts beforehand. GenAI then brainstorms the optimal shape for these areas and tests its ideas in simulation. When the system finds the right design, users can save it and fabricate a working, real-world robot with a 3D printer, with no additional tweaks required.
The researchers used this approach to create a robot that leaps up an average of roughly 2 feet, or 41 percent higher than a similar machine they created on their own. The machines are nearly identical in appearance: They’re both made of a type of plastic called polylactic acid, and while they initially appear flat, they spring up into a diamond shape when a motor pulls on the cord attached to them. So what exactly did AI do differently?
A closer look reveals that the AI-generated linkages are curved and resemble thick drumsticks (the kind drummers use), whereas the standard robot’s connecting parts are straight and rectangular.
Better and better blobs
The researchers began to refine their jumping robot by sampling 500 potential designs using an initial embedding vector — a numerical representation that captures high-level features to guide the designs generated by the AI model. From these, they selected the top 12 options based on performance in simulation and used them to optimize the embedding vector.
This process was repeated five times, progressively guiding the AI model to generate better designs. The resulting design resembled a blob, so the researchers prompted their system to scale the draft to fit their 3D model. They then fabricated the shape, finding that it indeed improved the robot’s jumping abilities.
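The article doesn’t give the exact update rule, but the loop it describes (sample candidates around an embedding vector, keep the top performers in simulation, re-center, repeat) resembles a cross-entropy-style search. Below is a minimal, self-contained Python sketch of that loop; the diffusion model and physics simulator are replaced by toy placeholder functions, and every name here is a hypothetical stand-in rather than the researchers’ actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_designs(embedding, n_samples, noise=0.1):
    """Stand-in for the diffusion model: sample design latents near the embedding."""
    return embedding + noise * rng.standard_normal((n_samples, embedding.size))

def simulate_jump_height(design):
    """Stand-in for the physics simulation: score a candidate design (toy objective)."""
    return -np.sum((design - 1.0) ** 2)

embedding = np.zeros(64)          # initial embedding vector
for _ in range(5):                # the process was repeated five times
    candidates = generate_designs(embedding, n_samples=500)   # 500 potential designs
    scores = np.array([simulate_jump_height(c) for c in candidates])
    elites = candidates[np.argsort(scores)[-12:]]             # top 12 in simulation
    embedding = elites.mean(axis=0)                           # refine the embedding
```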
The advantage of using diffusion models for this task, according to co-lead author and CSAIL postdoc Byungchul Kim, is that they can find unconventional solutions to refine robots.
“We wanted to make our machine jump higher, so we figured we could just make the links connecting its parts as thin as possible to make them light,” says Kim. “However, such a thin structure can easily break if we just use 3D printed material. Our diffusion model came up with a better idea by suggesting a unique shape that allowed the robot to store more energy before it jumped, without making the links too thin. This creativity helped us learn about the machine’s underlying physics.”
The team then tasked their system with drafting an optimized foot to ensure it landed safely. They repeated the optimization process, eventually choosing the best-performing design to attach to the bottom of their machine. Kim and his colleagues found that their AI-designed machine fell far less often than its baseline, to the tune of an 84 percent improvement.
The diffusion model’s ability to upgrade a robot’s jumping and landing skills suggests it could be useful in enhancing how other machines are designed. For example, a company working on manufacturing or household robots could use a similar approach to improve their prototypes, saving engineers time normally reserved for iterating on those changes.
The balance behind the bounce
To create a robot that could jump high and land stably, the researchers recognized that they needed to strike a balance between both goals. They represented both jumping height and landing success rate as numerical data, and then trained their system to find a sweet spot between both embedding vectors that could help build an optimal 3D structure.
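The article doesn’t specify how the two objectives were combined, but one simple way to picture the “sweet spot” search is a weighted sweep between the two optimized embedding vectors. The sketch below uses toy scoring functions as stand-ins for simulated jump height and landing success rate; everything here is an illustrative assumption, not the team’s actual procedure.

```python
import numpy as np

# Toy stand-ins for simulated jump height and landing success rate.
def jump_height(e):
    return -np.sum((e - 1.0) ** 2)

def landing_success(e):
    return -np.sum((e + 1.0) ** 2)

e_jump = np.ones(8)     # embedding optimized for jumping (assumed given)
e_land = -np.ones(8)    # embedding optimized for landing (assumed given)

def combined(alpha):
    """Score a blend of the two latents on both goals, equally weighted."""
    e = alpha * e_jump + (1 - alpha) * e_land
    return 0.5 * jump_height(e) + 0.5 * landing_success(e)

# Sweep the interpolation weight to locate the sweet spot between the two latents.
alphas = np.linspace(0.0, 1.0, 21)
best_alpha = max(alphas, key=combined)
```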
The researchers note that while this AI-assisted robot outperformed its human-designed counterpart, it could soon reach even greater heights. This iteration used materials compatible with a 3D printer, but future versions could jump even higher with lighter materials.
Co-lead author Tsun-Hsuan “Johnson” Wang, an MIT PhD student and CSAIL affiliate, says the project is a jumping-off point for new robotics designs that generative AI could help with.
“We want to branch out to more flexible goals,” says Wang. “Imagine using natural language to guide a diffusion model to draft a robot that can pick up a mug, or operate an electric drill.”
Kim says that a diffusion model could also help to generate articulation and ideate on how parts connect, potentially improving how high the robot would jump. The team is also exploring the possibility of adding more motors to control which direction the machine jumps and perhaps improve its landing stability.
The researchers’ work was supported, in part, by the National Science Foundation’s Emerging Frontiers in Research and Innovation program, the Singapore-MIT Alliance for Research and Technology’s Mens, Manus and Machina program, and the Gwangju Institute of Science and Technology (GIST)-CSAIL Collaboration. They presented their work at the 2025 International Conference on Robotics and Automation.
Summer 2025 reading from MIT
Enjoy these recent titles from Institute faculty and staff.
Summer is the perfect time to curl up with a good book — and MIT authors have had much to offer in the past year. The following titles represent some of the books published in the past 12 months by MIT faculty and staff. In addition to links for each book from its publisher, the MIT Libraries has compiled a helpful list of the titles held in its collections.
Looking for more literary works from the MIT community? Enjoy our book lists from 2024, 2023, 2022, and 2021.
Happy reading!
Science
“So Very Small: How Humans Discovered the Microcosmos, Defeated Germs — and May Still Lose the War Against Infectious Disease” (Penguin Random House, 2025)
By Thomas Levenson, professor of science writing
For centuries, people in the West, believing themselves to hold God-given dominion over nature, thought too much of humanity and too little of microbes. Nineteenth-century scientists finally made the connection. Life-saving methods to control infections and contain outbreaks soon followed. Next came the antibiotic era in the 1930s. Yet, less than a century later, the promise of that revolution is receding due to years of overuse. Is our self-confidence getting the better of us again?
“The Miraculous from the Material: Understanding the Wonders of Nature” (Penguin Random House, 2024)
By Alan Lightman, professor of the practice of humanities
Nature is capable of extraordinary phenomena. Standing in awe of those phenomena, we experience a feeling of connection to the cosmos. For Lightman, just as remarkable is that all of what we see around us — soap bubbles, scarlet ibises, shooting stars — are made out of the same material stuff and obey the same rules and laws. Pairing 36 full-color photos evoking some of nature’s most awe-inspiring phenomena with personal essays, “The Miraculous from the Material” explores the fascinating science underlying the natural world.
Technology and society
“The Analytics Edge in Healthcare” (Dynamic Ideas, 2025)
By Dimitris Bertsimas, vice provost for MIT Open Learning, Boeing Leaders for Global Operations Professor of Management, associate dean for business analytics, and professor of operations research; Agni Orfanoudaki; and Holly Wiberg
Analytics is transforming health care operations, empowering medical professionals and administrators to leverage data and models to make better decisions. This book provides a practical introduction to this exciting field. The first part establishes the technical foundations of health care analytics, spanning machine learning and optimization. The second part presents integrated case studies that cover a wide range of clinical specialties and problem types using descriptive, predictive, and prescriptive analytics.
“Longevity Hubs: Regional Innovation for Global Aging” (MIT Press, 2024)
Edited by Joseph F. Coughlin, senior research scientist and MIT AgeLab director, and Luke Yoquinto, MIT AgeLab research associate
Populations around the world are aging, and older adults’ economic influence stands to grow markedly in future decades. This volume brings together entrepreneurs, researchers, designers, public servants, and others to address the multifaceted concerns of aging societies and to explore the possibility that certain regions will distinguish themselves as longevity hubs: home to disproportionate economic and innovative activity for older populations.
“Data, Systems, and Society: Harnessing AI for Societal Good” (Cambridge University Press, 2025)
By Munther Dahleh, the William A. Coolidge Professor of Electrical Engineering and Computer Science and director of the Institute for Data, Systems, and Society (IDSS)
Harnessing the power of data and artificial intelligence (AI) methods to tackle complex societal challenges requires transdisciplinary collaborations across academia, industry, and government. In this book, Dahleh, founder of IDSS, offers a blueprint for researchers, professionals, and institutions to create approaches to problems of high societal value using innovative, holistic, data-driven methods.
“SuperShifts: Transforming How We Live, Learn, and Work in the Age of Intelligence” (Wiley, 2025)
By Ja-Naé Duane, academic research fellow at the MIT Center for Information Systems Research, and Steve Fisher
This book describes how we’re at the end of one 200-year arc and embarking on another. With this new age of intelligence, Duane and Fisher highlight the catalysts for change currently affecting individuals, businesses, and society as a whole. They also provide a model for transformation that utilizes a holistic view of making radical change through three lenses: you as a leader, your organization, and society.
“Tech Agnostic: How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation” (MIT Press, 2024)
By Greg Epstein, humanist chaplain
Today’s technology has overtaken religion as the chief influence on 21st-century life and community. In “Tech Agnostic,” Epstein explores what it means to be a critical thinker with respect to this new faith. Encouraging readers to reassert their common humanity beyond the seductive sheen of “tech,” this book argues for tech agnosticism — not worship — as a way of life.
“The New Lunar Society: An Enlightenment Guide to the Next Industrial Revolution” (MIT Press, 2025)
By David Mindell, the Dibner Professor of the History of Engineering and Manufacturing and professor of aeronautics and astronautics
Climate change, global disruption, and labor scarcity are forcing us to rethink the underlying principles of industrial society. In this book, Mindell envisions this new industrialism from the fundamentals, drawing on the 18th century when first principles were formed at the founding of the Industrial Revolution. While outlining the new industrialism, he tells the story of the Lunar Society, a group of engineers, scientists, and industrialists who came together to apply the principles of the Enlightenment to industrial processes.
“Output: An Anthology of Computer-Generated Text, 1953–2023” (MIT Press, 2024)
Edited by Nick Montfort, professor of digital media, and Lillian-Yvonne Bertram
The discussion of computer-generated text has recently reached a fever pitch but largely omits the long history of work in this area — text generation, as it happens, was not invented yesterday in Silicon Valley. This anthology aims to correct that omission by gathering seven decades of English-language texts produced by generation systems and software, long before ChatGPT and Claude.
Education, work, and innovation
“Retiring: Creating a Life That Works for You” (Routledge, 2025)
By Lotte Bailyn, the T Wilson Professor of Management, Emerita and professor emerita of work and organization studies; Teresa M. Amabile; Marcy Crary; Douglas T. Hall; and Kathy E. Kram
Whether they’re one of the 73 million baby boomers reaching their full retirement benefit age or a zoomer just entering the workforce, at some point most working Americans will retire. The optimal approach to retirement is unique to each person, but this book offers wisdom and anecdotes from more than 120 people and detailed interviews with 14 “stars” regarding their retirement transitions.
“Accelerating Innovation: Competitive Advantage through Ecosystem Engagement” (MIT Press, 2025)
By Phil Budden, senior lecturer of technological innovation, entrepreneurship, and strategic management; and Fiona Murray, associate dean for innovation, the William Porter Professor of Entrepreneurship, and professor of technological innovation, entrepreneurship, and strategic management
Leaders in large organizations face continuous pressure to innovate, and few possess the internal resources needed to keep up with rapid advances in science and technology. But looking beyond their own organizations, most face a bewildering landscape of external resources. In “Accelerating Innovation,” leaders will find a practical guide to this external landscape. Budden and Murray provide directions for navigating innovation ecosystems — those hotspots worldwide where researchers, entrepreneurs, and investors congregate.
“Writing, Thinking, and the Brain: How Neuroscience Can Improve Writing Instruction” (Teachers College Press, 2024)
By Jovi R. S. Nazareno, learning science and education outreach specialist at MIT Open Learning; Tracey Tokuhama-Espinosa; and Christopher Rappleye
Writing is the highest form of thinking, as evidenced by neuroimaging that shows how more neural networks are activated simultaneously during writing than during any other cognitive activity. This book will help teachers understand how the brain learns to write by unveiling 15 stages of thinking that underpin the writing process, along with targeted ways to stimulate them to maximize each individual’s writing potential.
“Entrepreneurship: Choice and Strategy” (Norton Economics, 2024)
By Erin L. Scott, senior lecturer of technological innovation, entrepreneurship, and strategic management; Scott Stern, the David Sarnoff Professor of Management of Technology and professor of technological innovation, entrepreneurship, and strategic management; and Joshua Gans
Building on more than two decades of academic research with thousands of companies and MIT students, Scott, Stern, and Gans have developed a systematic approach for startup leadership. They detail four key choices entrepreneurs must make, and “four strategic approaches to find and frame opportunities.”
“Failure by Design: The California Energy Crisis and the Limits of Market Planning” (University of Chicago, 2024)
By Georg Rilinger, the Fred Kayne Career Development Assistant Professor of Entrepreneurship and assistant professor of technological innovation, entrepreneurship, and strategic management
The California electricity crisis in 2000 caused billions in losses and led to bankruptcy for one of the state’s largest utilities. More than 20 years later, the question remains: Why did the newly created electricity markets fail? In “Failure by Design,” Rilinger explores practical obstacles to market design to offer a new explanation for the crisis — one that moves beyond previous interpretations that have primarily blamed incompetent politicians or corrupt energy sellers.
Culture, humanities, and social sciences
“Chasing the Pearl-Manuscript: Speculation, Shapes, Delight” (University of Chicago Press, 2025)
By Arthur Bahr, professor of literature
In this book, Bahr explores the four poems and 12 illustrations of the “Pearl-Manuscript,” the only surviving medieval copy of two of the best-known Middle English poems: “Pearl” and “Sir Gawain and the Green Knight.” He explores how the physical manuscript enhances our perception of the poetry, drawing on recent technological advances that show it to be a more complex piece of material, visual, and textual art than previously understood. By connecting the manuscript’s construction to the text’s intricate language, Bahr suggests new ways to understand the power of poetry.
“Taxation and Resentment: Race, Party, and Class in American Tax Attitudes” (Princeton University Press, 2025)
By Andrea Campbell, the Arthur and Ruth Sloan Professor of Political Science
Most Americans want the rich to pay more to fund government, yet favor regressive over progressive taxes. Why this policy-preference gap? In this book, Campbell describes how convoluted tax code confuses the public about who pays and who benefits, so tax preferences do not turn on principles, interests, or even party. Instead, race and racism play large roles, and tax skepticism among Americans of all stripes helps the rich and anti-tax forces undermine progressivity.
“Uprooted: How post-WWII Population Transfers Remade Europe” (Cambridge University Press, 2024)
By Volha Charnysh, the Ford Career Development Associate Professor of Political Science
Each year, millions of people are uprooted from their homes by wars, repression, natural disasters, and climate change. In “Uprooted,” Charnysh presents a fresh perspective on the consequences of mass displacement, arguing that accommodating the displaced population can strengthen receiving states and benefit local economies. With rich insights and compelling evidence, the book challenges common assumptions about the costs of forced displacement and cultural diversity and proposes a novel mechanism linking wars to state-building.
“Crime, Insecurity, and Community Policing: Experiments on Building Trust” (Cambridge University Press, 2024)
By Fotini Christia, the Ford International Professor of the Social Sciences; Graeme Blair; and Jeremy M. Weinstein
How can societies reduce crime without exacerbating adversarial relationships between the police and citizens? Through field experiments in a variety of political contexts, this book presents the outcome of a major research initiative into the efficacy of community policing. Scholars uncover whether, and under what conditions, this influential strategy for tackling crime and insecurity is effective. With its highly innovative approach to cumulative learning, this writing represents a new frontier in the study of police reform.
“Letterlocking: The Hidden History of the Letter” (MIT Press, 2025)
By Jana Dambrogio, the Thomas F. Peterson Conservator at MIT Libraries, and Daniel Starza Smith
Before the invention of the gummed envelope in the 1830s, how did people secure their private letters? The answer is letterlocking — the ingenious process of securing a letter using a combination of folds, tucks, slits, or adhesives such as sealing wax, so that it becomes its own envelope. In this book, Dambrogio and Starza Smith, experts who have pioneered the field over the last 10 years, tell the fascinating story of letterlocking within epistolary history, drawing on real historical examples from all over the world.
“Long-Term Care around the World” (University of Chicago Press, 2025)
Edited by Jonathan Gruber, the Ford Professor of Economics and head of the Department of Economics, and Kathleen McGarry
As formal long-term care becomes unaffordable for seniors in many countries, public systems and unpaid caregivers increasingly bear the burden of supporting the world’s aging population. “Long-Term Care around the World” is a comparative analysis of long-term care in 10 wealthy countries that considers the social costs of both formal and informal care — which is critical, given that informal unpaid care is estimated to account for one-third of all long-term care spending.
“Empty Vessel: The Global Economy in One Barge” (Penguin Random House, 2025)
By Ian Kumekawa, lecturer of history
What do a barracks for British troops in the Falklands War, a floating jail off the Bronx, and temporary housing for VW factory workers in Germany have in common? The Balder Scapa: a single barge that served all three roles. Through this one vessel, Kumekawa illustrates many currents: globalization, the transience of economic activity, and the hazy world of transactions many call “the offshore,” the lightly regulated sphere of economic activity that encourages short-term actions.
“The Price of Our Values: The Economic Limits of Moral Life” (University of Chicago Press, 2025)
By David Thesmar, the Franco Modigliani Professor of Financial Economics and professor of finance, and Augustin Landier
Two economists examine the interplay between our desire to be good, the personal costs of being good, and the point at which people abandon goodness due to its costs. Aided by the results of two surveys, they find that the answers to modern moral dilemmas are economic, and often highly predictable. Our values may guide us, but we are also forced to consider economic costs to settle decisions.
“Spheres of Injustice: The Ethical Promise of Minority Presence” (MIT Press, 2025)
By Bruno Perreau, the Cynthia L. Reed Professor of French Studies
How can the rights of minorities be protected in democracies? The question has been front and center in the U.S. since the Supreme Court struck down affirmative action. In Europe, too, minority politics are being challenged. The very notion of “minority” is being questioned, while the notion of a “protected class” risks encouraging competition among minorities. In “Spheres of Injustice,” Perreau demonstrates how we can make the fight against discrimination beneficial for all.
“Attention, Shoppers! American Retail Capitalism and the Origins of the Amazon Economy” (Princeton University Press, 2025)
By Kathleen Thelen, the Ford Professor of Political Science
This book traces the evolution of U.S. retailing from the late 19th century to today, uncovering the roots of a bitter equilibrium where large low-cost retailers dominate and vast numbers of low-income families now rely on them to make ends meet. Thelen reveals how large discount retailers have successfully exploited a uniquely permissive regulatory landscape to create a shopper’s paradise built on cheap labor.
“Routledge Handbook of Space Policy” (Routledge, 2024)
Chapter by Danielle R. Wood, associate professor in the program in media arts and sciences and associate professor in aeronautics and astronautics
In her chapter, “The Expanding Sphere of Human Responsibility for Sustainability on Earth and in Space,” Wood proposes a multifaceted definition of sustainability and explores how the definition can be exercised as humans expand activity in space. Building on the tradition of consensus building on concepts of sustainable development through United Nations initiatives, Wood asserts that sustainability for human activity in space requires consideration of three types of responsibility: economic, social, and environmental.
“Victorian Parlour Games: A Modern Host’s Guide to Classic Fun for Everyone” (Chronicle Books, 2024)
By Ned Wolfe, marketing and communications assistant at MIT Libraries
“Victorian Parlour Games” is a beautifully designed and compact hardcover volume full of the classic, often silly, games played in the late 19th century. The Victorians loved fun and played hundreds and hundreds of party games. This endlessly delightful party games book collects some of the very best for your reference and pleasure.
Arts, architecture, planning, and design
“Against Reason: Tony Smith, Sculpture, and Other Modernisms” (MIT Press, 2024)
Chapter by Judith Barry, professor in the Art, Culture, and Technology Program, with Kelli Anderson
This collection of essays reveals the depth and complexity of the sculpture of American modernist Tony Smith, placing his multifaceted practice in dialogue with contemporary voices. Barry’s chapter, “New Piece: Elective Geometries,” describes the transformation of Smith’s sculpture into the form of a flipbook and centerpiece “pop-up.”
“Steina” (MIT Press, 2025)
Edited by Natalie Bell, curator at the MIT List Visual Arts Center
Accompanying the related exhibition at MIT List Visual Arts Center and Buffalo AKG Art Museum, “Steina” brings renewed recognition to Steina (b. 1940, Iceland), tracing her oeuvre from early collaborative works with her partner Woody Vasulka to her independent explorations of optics and a liberated, non-anthropocentric subjectivity.
“Jewish Theatrical Resources: A Guide for Theaters Producing Jewish Work” (Alliance for Jewish Theater, 2025)
Chapter by Marissa Friedman, marketing and communications manager in the Art, Culture, and Technology Program; Jenna Clark Embry; Robin Goldberg; Gabrielle Hoyt; Stephanie Kane; Alix Rosenfeld; and Marissa Shadburn
Produced by the Alliance for Jewish Theatre, this guide was created to help non-Jewish theaters produce Jewish plays with authenticity, cultural awareness, and care. Friedman contributes a chapter on dramaturgy, exploring how the primary role of a dramaturg is to support a playwright and production team in articulating their artistic vision, and setting forth an ideal model for the dramaturgy of a Jewish play, with both a theatrical dramaturg and a Jewish dramaturg.
“Play It Again, Sam: Repetition in the Arts” (MIT Press, 2025)
By Samuel Jay Keyser, the Peter de Florez emeritus professor of linguistics
Leonard Bernstein, in his famous Norton Lectures, extolled repetition, saying that it gave poetry its musical qualities and that music theorists’ refusal to take it seriously did so at their peril. “Play It Again, Sam” takes Bernstein seriously. In this book, Keyser explores why we enjoy works of poetry, music, and painting, and how repetition plays a central part in the pleasure.
“The Moving Image: A User’s Manual” (MIT Press, 2025)
By Peter B. Kaufman, associate director of development at MIT Open Learning
Video is today’s most popular information medium. Two-thirds of the world’s internet traffic is video. Americans get their news and information more often from screens and speakers than through any other means. “The Moving Image” is the first authoritative account of how we have arrived here, together with the first definitive manual to help writers, educators, and publishers use video more effectively.
“Beyond Ruins: Reimagining Modernism” (ArchiTangle, 2024)
Edited by Raafat Majzoub SM ’17, visiting lecturer at the Art, Culture, and Technology Program; and Nicolas Fayad
This book explores the renovation of modern architecture in the Global South as a tool for self-determination and community-building. Focusing on the Oscar Niemeyer Guest House in Tripoli, Lebanon, Majzoub and Fayad examine heritage as a political and material process. Through case studies, visual essays, and conversations with architects, artists, and theorists, the book addresses challenges of preservation, gaps in archiving, and the need for new forms of architectural practice.
“The Equitably Resilient City: Solidarities and Struggles in the Face of Climate Crisis” (MIT Press, 2024)
By Lawrence J. Vale, the Ford Professor of Urban Design and Planning and associate dean of the MIT School of Architecture and Planning; and Zachary B. Lamb
Too often the places most vulnerable to climate change are those that are home to people with the fewest economic and political resources. And while some leaders are starting to take action to reduce climate risks, many early adaptation schemes have actually made preexisting inequalities worse. In this book, Vale and Lamb ask how cities can adapt to climate change and other threats while also doing right by disadvantaged residents.
Novel and biography
“The Novice of Thanatos: An Epic Dark Fantasy of Horror, Death, and Necromancy” (Satirrell Publishing, 2025)
By Scott Austin Tirrell, director of administration and finance at the Art, Culture, and Technology Program
A fantasy novel that follows 11-year-old Mishal, a gifted yet troubled boy inducted into the secretive Order of Thanatos. Set in the grim and mystic realm of Lucardia, the story is framed as a first-person memoir chronicling Mishal’s initiation as a novice psychopomp — one who guides the dead across the Threshold into the afterlife. As Mishal navigates the Order’s rigid hierarchy, academic rigor, and spiritual mysteries, he begins to uncover unsettling truths about death, the soul, and the hidden agendas of those in power. Haunted by a spirit he cannot abandon and burdened by a forbidden artifact, Mishal must decide whom to trust and what to believe as his abilities grow — and as the line between duty and damnation begins to blur.
For young readers
“I Love You Bigger Than Everything That’s Big” (Stillwater River Publications, 2024)
By Lindsay Bartholomew, exhibit content and experience developer at MIT Museum, and illustrated by Sequoia Bostick
How much can you love someone? Higher than you can reach? Longer than a river? Bigger than the sky? The real answer — bigger than everything that’s big!
“A Century for Caroline” (Denene Millner Books / Simon and Schuster, 2025)
By Kaija Langley, director of development at MIT Libraries, and illustrated by TeMika Grooms
A great-grandma imparts the wisdom gained over her 100 years to an eager little girl in this tender picture book tribute to family and living a long, purposeful, beautiful life.
“All the Rocks We Love” (Penguin Random House, 2024)
By Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences, and Lisa Varchol Perron, and illustrated by David Scheirer
It’s no secret that children love rocks: They appear in jacket pockets, on windowsills, in the car, in their hiding places, and most often, in little grips. This book is an appreciation of rocks’ versatility and appeal, paired with the presentation of real types of rocks and their play-worthy attributes.
Accelerating hardware development to improve national security and innovation
The alumni-founded startup Nominal has built a platform for building and testing complex systems like fighter jets, nuclear reactors, rockets, and robots.
Modern fighter jets contain hundreds or even thousands of sensors. Some of those sensors collect data every second, others every nanosecond. For the engineering teams building and testing those jets, all those data points are hugely valuable — if they can make sense of them.
Nominal makes an advanced software platform for engineers building complex systems ranging from fighter jets to nuclear reactors, satellites, rockets, and robots. Nominal’s flagship product, Nominal Core, helps teams organize, visualize, and securely share data from tests and operations. The company’s other product, Nominal Connect, helps engineers build custom applications for automating and syncing their hardware systems.
“It’s a very technically challenging problem to take the types of data that our customers are generating and get them into a single place where people can collaborate and get insights,” says Nominal co-founder Jason Hoch ’13. “It’s hard because you’re dealing with a lot of different data sources, and you want to be able to correlate those sources and apply mathematical formulas. We do that automatically.”
Hoch started Nominal with Cameron McCord ’13, SM ’14 and Bryce Strauss after the founders had to work with generic data tools or build their own solutions at places like Lockheed Martin and Anduril. Today, Nominal is working with organizations in aerospace, defense, robotics, manufacturing, and energy to accelerate the development of products critical for applications in U.S. national security and beyond.
“We built Nominal to take the best innovations in software and data technology and tailor them to the workflows that engineers go through when building and testing hardware systems,” McCord says. “We want to be the data and software backbone across all of these types of organizations.”
Accelerating hardware development
Hoch and McCord met during their first week at MIT and joined the same fraternity as undergraduates. Hoch double majored in mathematics and computer science and engineering, and McCord participated in the Navy Reserve Officers’ Training Corps (NROTC) while majoring in physics and nuclear science and engineering.
“MIT let me flex my technical skills, but I was also interested in the broader implications of technology and national security,” McCord says. “It was an interesting balance where I was learning the hardcore engineering skills, but always having a wider aperture to understand how the technology I was learning about was going to impact the world.”
Following MIT, McCord spent eight years in the Navy before working at the defense technology company Anduril, where he was charged with building the software systems to test different products. Hoch, meanwhile, worked at the intelligence- and defense-oriented software company Palantir.
McCord met Strauss, who had worked as an engineer at Lockheed Martin, while the two were at Harvard Business School. The eventual co-founders realized they had each struggled with software during complex hardware development projects, and set out to build the tools they wished they’d had.
At the heart of Nominal’s platform is a unified database that can connect and organize hundreds of data sources in real time. Nominal’s system allows engineers to search through or visualize that information, helping them spot trends, catch critical events, and investigate anomalies — what Nominal’s team describes as learning the rules governing complex systems.
“We’re trying to get answers to engineers so they understand what’s happening and can keep projects moving forward,” says Strauss. “Testing and validating these systems are fundamental bottlenecks for hardware progress. Our platform helps engineers answer questions like, ‘When we made a 30-degree turn at 16,000 feet, what happened to the engine’s temperature, and how does that compare to what happened yesterday?’”
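To make the correlation problem concrete, here is an illustrative Python sketch (not Nominal’s actual API) that aligns two telemetry streams sampled at different rates so a question like Strauss’s example can be answered; the column names and values are invented.

```python
import pandas as pd

# Two hypothetical telemetry streams recorded at different rates during a flight test.
attitude = pd.DataFrame({
    "t": pd.to_datetime(["2025-01-01 00:00:00.0", "2025-01-01 00:00:00.5",
                         "2025-01-01 00:00:01.0"]),
    "bank_angle_deg": [5.0, 30.0, 30.0],
})
engine = pd.DataFrame({
    "t": pd.to_datetime(["2025-01-01 00:00:00.2", "2025-01-01 00:00:00.7",
                         "2025-01-01 00:00:01.1"]),
    "egt_degC": [610.0, 655.0, 662.0],
})

# Align the two sources on nearest timestamps so events can be correlated,
# e.g., engine temperature while the aircraft holds a 30-degree bank.
merged = pd.merge_asof(attitude.sort_values("t"), engine.sort_values("t"),
                       on="t", direction="nearest",
                       tolerance=pd.Timedelta("250ms"))
print(merged[merged.bank_angle_deg >= 30])
```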
By automating tasks like data stitching and visualization, Nominal’s platform helps accelerate post-test analysis and development processes for complex systems. And because the platform is cloud-hosted, engineers can easily share visualizations and other dynamic assets with members of their team as opposed to making static reports, allowing more people in an organization to interact directly with the data.
From satellites to drones, robots to rockets
Nominal recently announced a $75 million Series B funding round, led by Sequoia Capital, to accelerate its growth.
“We’ll use the funds to accelerate product roadmaps for our existing products, launch new products across the hardware test stack, and more than double our team,” says McCord.
Today, aerospace customers are using Nominal’s platform to monitor their assets in orbit. Manufacturers are using Nominal to make sure their components work as expected before they’re integrated into larger systems. Nuclear fusion companies are using Nominal to understand when their parts might fail due to heat.
“The products we’ve built are transferable,” Hoch says. “It doesn’t matter if you’re building a nuclear fusion reactor or a satellite, those teams can benefit from the Nominal tool chain.”
Ultimately the founders believe the platform helps create better products by enabling a data-driven, iterative design process more commonly seen in the software development industry.
“The concept of continuous integration and development in software revolutionized the industry 20 years ago. Before that, it was common to build software in large, slow batches: developing for months, then testing and releasing all at once,” Strauss explains. “We’re bringing continuous testing to hardware. It’s about constantly creating that feedback loop to improve performance. It’s a new paradigm for how hardware is built. We’ve seen companies like SpaceX do this well to move faster and outpace the competition. Now, that approach is available to everyone.”
Four from MIT named 2025 Goldwater Scholars
Rising seniors Avani Ahuja, Julianna Lian, Jacqueline Prawira, and Alex Tang are honored for their academic achievements.
Four MIT rising seniors have been selected to receive a 2025 Barry Goldwater Scholarship, including Avani Ahuja and Jacqueline Prawira in the School of Engineering and Julianna Lian and Alex Tang from the School of Science. An estimated 5,000 college sophomores and juniors from across the United States were nominated for the scholarships, of whom only 441 were selected.
The Goldwater Scholarships have been conferred since 1989 by the Barry Goldwater Scholarship and Excellence in Education Foundation. These scholarships have supported undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields.
Avani Ahuja, a mechanical engineering and electrical engineering major, conducts research in the Conformable Decoders group, where she is focused on developing a “wearable conformable breast ultrasound patch” that makes ultrasounds for breast cancer more accessible.
“Doing research in the Media Lab has had a huge impact on me, especially in the ways that we think about inclusivity in research,” Ahuja says.
In her research group, Ahuja works under Canan Dagdeviren, the LG Career Development Professor of Media Arts and Sciences. Ahuja plans to pursue a PhD in electrical engineering. She aspires to conduct research in electromechanical systems for women’s health applications and teach at the university level.
“I want to thank Professor Dagdeviren for all her support. It’s an honor to receive this scholarship, and it’s amazing to see that women’s health research is getting recognized in this way,” Ahuja says.
Julianna Lian studies mechanochemistry and organic and polymer chemistry in the lab of Professor Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry. In addition to her studies, she serves the MIT community as an emergency medical technician (EMT) with MIT Emergency Medical Services, is a member of MIT THINK, and is a ClubChem mentorship chair.
“Receiving this award has been a tremendous opportunity to not only reflect on how much I have learned, but also on the many, many people I have had the chance to learn from,” says Lian. “I am deeply grateful for the guidance, support, and encouragement of these teachers, mentors, and friends. And I am excited to carry forward the lasting curiosity and excitement for chemistry that they have helped inspire in me.”
Lian’s career goals after graduation include pursuing a PhD in organic chemistry, conducting research at the interface of synthetic chemistry and materials science, aided by computation, and teaching at the university level.
Jacqueline Prawira, a materials science and engineering major, joined the Center for Decarbonization and Electrification of Industry as a first-year Undergraduate Research Opportunities Program student, and became a co-inventor on a patent and a research technician at the spinout company Rock Zero. She has also worked in collaboration with Indigenous farmers and Diné College students on the Navajo Nation.
“I’ve become significantly more cognizant of how I listen to people and stories, the tangled messiness of real-world challenges, and the critical skills needed to tackle complex sustainability issues,” Prawira says.
Prawira is mentored by Yet-Ming Chiang, professor of materials science and engineering. Her career goals are to pursue a PhD in materials science and engineering and to research sustainable materials and processes to solve environmental challenges and build a sustainable society.
“Receiving the prestigious title of 2025 Goldwater Scholar validates my current trajectory in innovating sustainable materials and demonstrates my growth as a researcher,” Prawira says. “This award signifies my future impact in building a society where sustainability is the norm, instead of just another option.”
Alex Tang studies the effects of immunotherapy and targeted molecular therapy on the tumor microenvironment in metastatic colorectal cancer patients. He is supervised by professors Jonathan Chen at Northwestern University and Nir Hacohen at the Broad Institute of MIT and Harvard.
“My mentors and collaborators have been instrumental to my growth since I joined the lab as a freshman. I am incredibly grateful for the generous mentorship and support of Professor Hacohen and Professor Chen, who have taught me how to approach scientific investigation with curiosity and rigor,” says Tang. “I’d also like to thank my advisor Professor Adam Martin and first-year advisor Professor Angela Belcher for their guidance throughout my undergraduate career thus far. I am excited to carry forward this work as I progress in my career.” Tang intends to pursue physician-scientist training following graduation.
The Scholarship Program honoring Senator Barry Goldwater was designed to identify, encourage, and financially support outstanding undergraduates interested in pursuing research careers in the sciences, engineering, and mathematics. The Goldwater Scholarship is the preeminent undergraduate award of its type in these fields.
Researchers present bold ideas for AI at MIT Generative AI Impact Consortium kickoff event
Presentations targeted high-impact intersections of AI and other areas, such as health care, business, and education.
Launched in February of this year, the MIT Generative AI Impact Consortium (MGAIC), a presidential initiative led by MIT’s Office of Innovation and Strategy and administered by the MIT Stephen A. Schwarzman College of Computing, issued a call for proposals, inviting researchers from across MIT to submit ideas for innovative projects studying high-impact uses of generative AI models.
The call received 180 submissions from nearly 250 faculty members, spanning all of MIT’s five schools and the college. The overwhelming response across the Institute exemplifies the growing interest in AI and follows in the wake of MIT’s Generative AI Week and call for impact papers. Fifty-five proposals were selected for MGAIC’s inaugural seed grants, with several more selected to be funded by the consortium’s founding company members.
Over 30 funding recipients presented their proposals to the greater MIT community at a kickoff event on May 13. Anantha P. Chandrakasan, chief innovation and strategy officer, dean of the School of Engineering, and head of the consortium, welcomed the attendees and thanked the consortium’s founding industry members.
“The amazing response to our call for proposals is an incredible testament to the energy and creativity that MGAIC has sparked at MIT. We are especially grateful to our founding members, whose support and vision helped bring this endeavor to life,” adds Chandrakasan. “One of the things that has been most remarkable about MGAIC is that this is a truly cross-Institute initiative. Deans from all five schools and the college collaborated in shaping and implementing it.”
Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management and co-faculty director of the consortium with Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), emceed the afternoon of five-minute lightning presentations.
Presentation highlights include:
“AI-Driven Tutors and Open Datasets for Early Literacy Education,” presented by Ola Ozernov-Palchik, a research scientist at the McGovern Institute for Brain Research, proposed refinements to AI tutors for pK-7 students that could help decrease literacy disparities.
“Developing jam_bots: Real-Time Collaborative Agents for Live Human-AI Musical Improvisation,” presented by Anna Huang, assistant professor of music and assistant professor of electrical engineering and computer science, and Joe Paradiso, the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, aims to enhance human-AI musical collaboration in real-time for live concert improvisation.
“GENIUS: GENerative Intelligence for Urban Sustainability,” presented by Norhan Bayomi, a postdoc at the MIT Environmental Solutions Initiative and a research assistant in the Urban Metabolism Group, aims to address the lack of a standardized approach for evaluating and benchmarking cities’ climate policies.
Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research, and statistics, who serves as co-chair of the GenAI Dean’s oversight group with Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, ended the event with closing remarks that emphasized “the readiness and eagerness of our community to lead in this space.”
“This is only the beginning,” she continued. “We are at the front edge of a historic moment — one where MIT has the opportunity, and the responsibility, to shape the future of generative AI with purpose, with excellence, and with care.”
Island rivers carve passageways through coral reefs
Research shows these channels allow seawater and nutrients to flow in and out, helping to maintain reef health over millions of years.
Volcanic islands, such as the islands of Hawaii and the Caribbean, are surrounded by coral reefs that encircle an island in a labyrinthine, living ring. A coral reef is punctured at points by reef passes — wide channels that cut through the coral and serve as conduits for ocean water and nutrients to filter in and out. These watery passageways provide circulation throughout a reef, helping to maintain the health of corals by flushing out freshwater and transporting key nutrients.
Now, MIT scientists have found that reef passes are shaped by island rivers. In a study appearing today in the journal Geophysical Research Letters, the team shows that the locations of reef passes along coral reefs line up with where rivers funnel out from an island’s coast.
Their findings provide the first quantitative evidence of rivers forming reef passes. Scientists and explorers had speculated that this may be the case: Where a river on a volcanic island meets the coast, the freshwater and sediment it carries flow toward the reef, and a strong enough flow can tunnel into the surrounding coral. This idea has been proposed from time to time but never quantitatively tested, until now.
“The results of this study help us to understand how the health of coral reefs depends on the islands they surround,” says study author Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT.
“A lot of discussion around rivers and their impact on reefs today has been negative because of human impact and the effects of agricultural practices,” adds lead author Megan Gillen, a graduate student in the MIT-WHOI Joint Program in Oceanography. “This study shows the potential long-term benefits rivers can have on reefs, which I hope reshapes the paradigm and highlights the natural state of rivers interacting with reefs.”
The study’s other co-author is Andrew Ashton of the Woods Hole Oceanographic Institution.
Drawing the lines
The new study is based on the team’s analysis of the Society Islands, a chain of islands in the South Pacific Ocean that includes Tahiti and Bora Bora. Gillen, who joined the MIT-WHOI program in 2020, was interested in exploring connections between coral reefs and the islands they surround. With limited options for on-site work during the Covid-19 pandemic, she and Perron looked to see what they could learn through satellite images and maps of island topography. They did a quick search using Google Earth and zeroed in on the Society Islands for their uniquely visible reef and island features.
“The islands in this chain have these iconic, beautiful reefs, and we kept noticing these reef passes that seemed to align with deeply embayed portions of the coastline,” Gillen says. “We started asking ourselves, is there a correlation here?”
Viewed from above, the coral reefs that circle some islands bear what look to be notches, like cracks that run straight through a ring. These breaks in the coral are reef passes — large channels that run tens of meters deep and can be wide enough for some boats to pass through. On first look, Gillen noticed that the most obvious reef passes seemed to line up with flooded river valleys — depressions in the coastline that have been eroded over time by island rivers that flow toward the ocean. She wondered whether and to what extent island rivers might shape reef passes.
“People have examined the flow through reef passes to understand how ocean waves and seawater circulate in and out of lagoons, but there have been no claims of how these passes are formed,” Gillen says. “Reef pass formation has been mentioned infrequently in the literature, and people haven’t explored it in depth.”
Reefs unraveled
To get a detailed view of the topography in and around the Society Islands, the team used data from the NASA Shuttle Radar Topography Mission — two radar antennae that flew aboard the space shuttle in 2000 and measured the topography across 80 percent of the Earth’s surface.
The researchers used the mission’s topographic data in the Society Islands to create a map of every drainage basin along the coast of each island, to get an idea of where major rivers flow or once flowed. They also marked the locations of every reef pass in the surrounding coral reefs. They then essentially “unraveled” each island’s coastline and reef into a straight line, and compared the locations of basins versus reef passes.
“Looking at the unwrapped shorelines, we find a significant correlation in the spatial relationship between these big river basins and where the passes line up,” Gillen says. “So we can say that statistically, the alignment of reef passes and large rivers does not seem random. The big rivers have a role in forming passes.”
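The “unwrapped shoreline” comparison lends itself to a compact statistical sketch. The snippet below (in Python, with entirely hypothetical positions rather than the study’s data, and a generic permutation test rather than the team’s exact statistic) shows one way to ask whether reef passes sit closer to river outlets than chance placement would predict:

```python
import numpy as np

def mean_nearest_distance(passes, outlets, coast_length):
    # Distance from each reef pass to its nearest river outlet, measured
    # along a circular ("unwrapped") coastline of total length coast_length.
    d = np.abs(passes[:, None] - outlets[None, :])
    d = np.minimum(d, coast_length - d)  # wrap around the island
    return d.min(axis=1).mean()

def alignment_p_value(passes, outlets, coast_length, n_sim=10_000, seed=0):
    # Permutation test: how often do randomly placed passes land as close
    # to the river outlets as the observed passes do?
    rng = np.random.default_rng(seed)
    observed = mean_nearest_distance(passes, outlets, coast_length)
    sims = np.array([
        mean_nearest_distance(
            rng.uniform(0, coast_length, size=passes.size), outlets, coast_length)
        for _ in range(n_sim)
    ])
    return float((sims <= observed).mean())

# Hypothetical positions, in km along a 60-km unwrapped coastline
outlets = np.array([3.2, 11.5, 18.0, 27.4, 35.1, 44.8, 52.0])
passes = np.array([3.0, 12.1, 26.9, 45.5, 51.2])
print(f"p = {alignment_p_value(passes, outlets, 60.0):.4f}")
```

A small p-value here would mean the observed pass-to-outlet alignment is unlikely under random placement, which is the flavor of evidence the team reports.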
As for how rivers shape the coral conduits, the team has two ideas, which they call, respectively, reef incision and reef encroachment. In reef incision, they propose that reef passes can form in times when the sea level is relatively low, such that the reef is exposed above the sea surface and a river can flow directly over the reef. The water and sediment carried by the river can then erode the coral, progressively carving a path through the reef.
When sea level is relatively higher, the team suspects a reef pass can still form, through reef encroachment. Coral reefs naturally live close to the water surface, where there is light and opportunity for photosynthesis. When sea levels rise, corals naturally grow upward and inward toward an island, to try to “catch up” to the water line.
“Reefs migrate toward the islands as sea levels rise, trying to keep pace with changing average sea level,” Gillen says.
However, part of the encroaching reef can end up in old river channels that were previously carved out by large rivers and that are lower than the rest of the island coastline. The corals in these river beds end up deeper than light can penetrate into the water column, and inevitably drown, leaving a gap in the form of a reef pass.
“We don’t think it’s an either/or situation,” Gillen says. “Reef incision occurs when sea levels fall, and reef encroachment happens when sea levels rise. Both mechanisms, occurring over dozens of cycles of sea-level rise and island evolution, are likely responsible for the formation and maintenance of reef passes over time.”
The team also looked to see whether there were differences in reef passes in older versus younger islands. They observed that younger islands were surrounded by more reef passes that were spaced closer together, versus older islands that had fewer reef passes that were farther apart.
As islands age, they subside, or sink, into the ocean, which reduces the amount of land that funnels rainwater into rivers. Eventually, rivers are too weak to keep the reef passes open, at which point, the ocean likely takes over, and incoming waves could act to close up some passes.
Gillen is exploring ideas for how rivers, or river-like flow, can be engineered to create paths through coral reefs in ways that would promote circulation and benefit reef health.
“Part of me wonders: If you had a more persistent flow, in places where you don’t naturally have rivers interacting with the reef, could that potentially be a way to increase health, by incorporating that river component back into the reef system?” Gillen says. “That’s something we’re thinking about.”
This research was supported, in part, by the WHOI Watson and Von Damm fellowships.
When Earth iced over, early life may have sheltered in meltwater ponds
Modern-day analogs in Antarctica reveal ponds teeming with life similar to early multicellular organisms.
When the Earth froze over, where did life shelter? MIT scientists say one refuge may have been pools of melted ice that dotted the planet’s icy surface.
In a study appearing today in Nature Communications, the researchers report that 635 million to 720 million years ago, during periods known as “Snowball Earth,” when much of the planet was covered in ice, some of our ancient cellular ancestors could have waited things out in meltwater ponds.
The scientists found that eukaryotes — complex cellular lifeforms that eventually evolved into the diverse multicellular life we see today — could have survived the global freeze by living in shallow pools of water. These small, watery oases may have persisted atop relatively shallow ice sheets present in equatorial regions. There, the ice surface could accumulate dark-colored dust and debris from below, which enhanced its ability to melt into pools. At temperatures hovering around 0 degrees Celsius, the resulting meltwater ponds could have served as habitable environments for certain forms of early complex life.
The team drew its conclusions based on an analysis of modern-day meltwater ponds. Today in Antarctica, small pools of melted ice can be found along the margins of ice sheets. The conditions along these polar ice sheets are similar to what likely existed along ice sheets near the equator during Snowball Earth.
The researchers analyzed samples from a variety of meltwater ponds located on the McMurdo Ice Shelf in an area that was first described by members of Robert Falcon Scott’s 1903 expedition as “dirty ice.” The MIT researchers discovered clear signatures of eukaryotic life in every pond. The communities of eukaryotes varied from pond to pond, revealing a surprising diversity of life across the setting. The team also found that salinity plays a key role in the kind of life a pond can host: Ponds that were more brackish or salty had more similar eukaryotic communities, which differed from those in ponds with fresher waters.
“We’ve shown that meltwater ponds are valid candidates for where early eukaryotes could have sheltered during these planet-wide glaciation events,” says lead author Fatima Husain, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “This shows us that diversity is present and possible in these sorts of settings. It’s really a story of life’s resilience.”
The study’s MIT co-authors include Schlumberger Professor of Geobiology Roger Summons and former postdoc Thomas Evans, along with Jasmin Millar of Cardiff University, Anne Jungblut at the Natural History Museum in London, and Ian Hawes of the University of Waikato in New Zealand.
Polar plunge
“Snowball Earth” is the colloquial term for periods of time in Earth history during which the planet iced over. It most often refers to the two consecutive, multi-million-year glaciation events of the Cryogenian Period, the interval geologists place between 635 and 720 million years ago. Whether the Earth was more of a hardened snowball or a softer “slushball” is still up for debate. But scientists are certain of one thing: Most of the planet was plunged into a deep freeze, with average global temperatures of minus 50 degrees Celsius. The question has been: How and where did life survive?
“We’re interested in understanding the foundations of complex life on Earth. We see evidence for eukaryotes before and after the Cryogenian in the fossil record, but we largely lack direct evidence of where they may have lived during,” Husain says. “The great part of this mystery is, we know life survived. We’re just trying to understand how and where.”
There are a number of ideas for where organisms could have sheltered during Snowball Earth, including in certain patches of the open ocean (if such environments existed), in and around deep-sea hydrothermal vents, and under ice sheets. In considering meltwater ponds, Husain and her colleagues pursued the hypothesis that surface ice meltwaters may also have been capable of supporting early eukaryotic life at the time.
“There are many hypotheses for where life could have survived and sheltered during the Cryogenian, but we don’t have excellent analogs for all of them,” Husain notes. “Above-ice meltwater ponds occur on Earth today and are accessible, giving us the opportunity to really focus in on the eukaryotes which live in these environments.”
Small pond, big life
For their new study, the researchers analyzed samples taken from meltwater ponds in Antarctica. In 2018, Summons and colleagues from New Zealand traveled to a region of the McMurdo Ice Shelf in East Antarctica, known to host small ponds of melted ice, each just a few feet deep and a few meters wide. There, water freezes all the way to the seafloor, in the process trapping dark-colored sediments and marine organisms. Wind-driven loss of ice from the surface creates a sort of conveyor belt that brings this trapped debris to the surface over time. The debris absorbs the sun’s warmth and melts the surrounding ice, while nearby debris-free ice reflects incoming sunlight, resulting in the formation of shallow meltwater ponds.
The bottom of each pond is lined with mats of microbes that have built up over years to form layers of sticky cellular communities.
“These mats can be a few centimeters thick, colorful, and they can be very clearly layered,” Husain says.
These microbial mats are made up of cyanobacteria, prokaryotic, single-celled photosynthetic organisms that lack a cell nucleus or other organelles. While these ancient microbes are known to survive within some of the harshest environments on Earth, including meltwater ponds, the researchers wanted to know whether eukaryotes — complex organisms that evolved a cell nucleus and other membrane-bound organelles — could also weather similarly challenging circumstances. Answering this question would take more than a microscope, as the defining characteristics of the microscopic eukaryotes present among the microbial mats are too subtle to distinguish by eye.
To characterize the eukaryotes, the team analyzed the mats for specific lipids they make called sterols, as well as genetic components called ribosomal ribonucleic acid (rRNA), both of which can be used to identify organisms with varying degrees of specificity. These two independent sets of analyses provided complementary fingerprints for certain eukaryotic groups. As part of the team’s lipid research, they found many sterols and rRNA genes closely associated with specific types of algae, protists, and microscopic animals among the microbial mats. The researchers were able to assess the types and relative abundance of lipids and rRNA genes from pond to pond, and found the ponds hosted a surprising diversity of eukaryotic life.
“No two ponds were alike,” Husain says. “There are repeating casts of characters, but they’re present in different abundances. And we found diverse assemblages of eukaryotes from all the major groups in all the ponds studied. These eukaryotes are the descendants of the eukaryotes that survived the Snowball Earth. This really highlights that meltwater ponds during Snowball Earth could have served as above-ice oases that nurtured the eukaryotic life that enabled the diversification and proliferation of complex life — including us — later on.”
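As a rough illustration of how such pond-to-pond comparisons can be quantified, the sketch below computes Bray-Curtis dissimilarities between community profiles; the pond names and abundance numbers are invented, and this is not the paper’s analysis pipeline:

```python
import numpy as np
from itertools import combinations

def bray_curtis(u, v):
    # Bray-Curtis dissimilarity: 0 means identical communities,
    # 1 means no overlap in abundance at all.
    return float(np.abs(u - v).sum() / (u + v).sum())

# Hypothetical relative abundances of four eukaryotic groups per pond
ponds = {
    "brackish_A": np.array([0.40, 0.30, 0.20, 0.10]),
    "brackish_B": np.array([0.35, 0.33, 0.22, 0.10]),
    "fresh_A":    np.array([0.10, 0.15, 0.25, 0.50]),
}

# Ponds of similar salinity should show smaller dissimilarities
for (name1, a1), (name2, a2) in combinations(ponds.items(), 2):
    print(f"{name1} vs {name2}: {bray_curtis(a1, a2):.3f}")
```

In this toy example, the two brackish ponds come out far more similar to each other than either is to the fresh pond, mirroring the salinity pattern the researchers describe.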
This research was supported, in part, by the NASA Exobiology Program, the Simons Collaboration on the Origins of Life, and a MISTI grant from MIT-New Zealand.
QS ranks MIT the world’s No. 1 university for 2025-26
Ranking at the top for the 14th year in a row, the Institute also places first in 11 subject areas.
MIT has again been named the world’s top university by the QS World University Rankings, which were announced today. This is the 14th year in a row MIT has received this distinction.
The full 2026 edition of the rankings — published by Quacquarelli Symonds, an organization specializing in education and study abroad — can be found at TopUniversities.com. The QS rankings are based on factors including academic reputation, employer reputation, citations per faculty, student-to-faculty ratio, proportion of international faculty, and proportion of international students.
MIT was also ranked the world’s top university in 11 of the subject areas ranked by QS, as announced in March of this year.
The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.
MIT also placed second in seven subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Business and Management Studies; Chemistry; Earth and Marine Sciences; and Economics and Econometrics.
The MIT Press acquires University Science Books from AIP Publishing
The textbook publisher will transfer to the MIT Press next month, in time for fall 2025 course adoptions.
The MIT Press announces the acquisition of textbook publisher University Science Books from AIP Publishing, a subsidiary of the American Institute of Physics (AIP).
University Science Books was founded in 1978 to publish intermediate- and advanced-level science and reference books by respected authors, published with the highest design and production standards, and priced as affordably as possible. Over the years, USB’s authors have acquired international followings, and its textbooks in chemistry, physics, and astronomy have been recognized as the gold standard in their respective disciplines. USB was acquired by AIP Publishing in 2021.
Bestsellers include John Taylor’s “Classical Mechanics,” the No. 1 adopted text for undergrad mechanics courses in the United States and Canada, and his “Introduction to Error Analysis”; and Don McQuarrie’s “Physical Chemistry: A Molecular Approach” (commonly known as “Big Red”), the second-most adopted physical chemistry textbook in the U.S.
“We are so pleased to have found a new home for USB’s prestigious list of textbooks in the sciences,” says Alix Vance, CEO of AIP Publishing. “With its strong STEM focus, academic rigor, and high production standards, the MIT Press is the perfect partner to continue the publishing legacy of University Science Books.”
“This acquisition is both a brand and content fit for the MIT Press,” says Amy Brand, director and publisher of the MIT Press. “USB’s respected science list will complement our long-established history of publishing foundational texts in computer science, finance, and economics.”
The MIT Press will take over the USB list as of July 1, with inventory transferring to Penguin Random House Publishing Services, the MIT Press’ sales and distribution partner.
For details regarding University Science Books titles, inventory, and how to order, please contact the MIT Press.
Established in 1962, the MIT Press is one of the largest and most distinguished university presses in the world and a leading publisher of books and journals at the intersection of science, technology, art, social science, and design.
AIP Publishing is a wholly owned not-for-profit subsidiary of the AIP and supports the charitable, scientific, and educational purposes of AIP through scholarly publishing activities on its behalf and on behalf of its publishing partners.
A sounding board for strengthening the student experience
Composed of “computing bilinguals,” the Undergraduate Advisory Group provides vital input to help advance the mission of the MIT Schwarzman College of Computing.
During his first year at MIT in 2021, Matthew Caren ’25 received an intriguing email inviting students to apply to become members of the MIT Schwarzman College of Computing’s (SCC) Undergraduate Advisory Group (UAG). He immediately shot off an application.
Caren is a jazz musician who majored in computer science and engineering, and minored in music and theater arts. He was drawn to the college because of its focus on the applied intersections between computing, engineering, the arts, and other academic pursuits. Caren eagerly joined the UAG and stayed on it all four years at MIT.
First formed in April 2020, the group brings together a committee of around 25 undergraduate students representing a broad swath of both traditional and blended majors in electrical engineering and computer science (EECS) and other computing-related programs. They advise the college’s leadership on issues, offer constructive feedback, and serve as a sounding board for innovative new ideas.
“The ethos of the UAG is the ethos of the college itself,” Caren explains. “If you very intentionally bring together a bunch of smart, interesting, fun-to-be-around people who are all interested in completely diverse things, you’ll get some really cool discussions and interactions out of it.”
Along the way, he’s also made “dear” friends and found true colleagues. In the group’s monthly meetings with SCC Dean Dan Huttenlocher and Deputy Dean Asu Ozdaglar, who is also the department head of EECS, UAG members speak openly about challenges in the student experience and offer recommendations to guests from across the Institute, such as faculty who are developing new courses and looking for student input.
“This group is unique in the sense that it’s a direct line of communication to the college’s leadership,” says Caren. “They make time in their insanely busy schedules for us to explain where the holes are, and what students’ needs are, directly from our experiences.”
“The students in the group are keenly interested in computer science and AI, especially how these fields connect with other disciplines. They’re also passionate about MIT and eager to enhance the undergraduate experience. Hearing their perspective is refreshing — their honesty and feedback have been incredibly helpful to me as dean,” says Huttenlocher.
“Meeting with the students each month is a real pleasure. The UAG has been an invaluable space for understanding the student experience more deeply. They engage with computing in diverse ways across MIT, so their input on the curriculum and broader college issues has been insightful,” Ozdaglar says.
UAG program manager Ellen Rushman says that “Asu and Dan have done an amazing job cultivating a space in which students feel safe bringing up things that aren’t positive all the time.” The group’s suggestions are frequently implemented, too.
For example, in 2021, Skidmore, Owings & Merrill, the architects designing the new SCC building, presented their renderings at a UAG meeting to request student feedback. Their original interiors layout offered very few of the hybrid study and meeting booths that are so popular in today’s first floor lobby.
Hearing strong UAG opinions about the sort of open-plan, community-building spaces that students really valued was one of the things that led to the current floor plan. “It’s super cool walking into the personalized space and seeing it constantly being in use and always crowded. I actually feel happy when I can’t get a table,” says Caren, who has just ended his tenure as co-chair of the group in preparation for graduation.
Caren’s co-chair, rising senior Julia Schneider, who is double-majoring in artificial intelligence and decision-making and mathematics, joined the UAG as a first-year to understand more about the college’s mission of fostering interdepartmental collaborations.
“Since I am a student in electrical engineering and computer science, but I conduct research in mechanical engineering on robotics, the college’s mission of fostering interdepartmental collaborations and uniting them through computing really spoke to my personal experiences in my first year at MIT,” Schneider says.
During her time on the UAG, members have joined subgroups focused around achieving different programmatic goals of the college, such as curating a public lecture series for the 2025-26 academic year to give MIT students exposure to faculty who conduct research in other disciplines that relate to computing.
At one meeting, after hearing how challenging it is for students to understand all the possible courses to take during their tenure, Schneider and some UAG peers formed a subgroup to find a solution.
The students agreed that some of the best courses they’ve taken at MIT, or pairings of courses that really struck a chord with their interdisciplinary interests, came because they spoke to upperclassmen and got recommendations. “This kind of tribal knowledge doesn’t really permeate to all of MIT,” Schneider explains.
For the last six months, Schneider and the subgroup have been working on a course visualization website, NerdXing, which came out of these discussions.
Guided by Rob Miller, Distinguished Professor of Computer Science in EECS, the subgroup used a dataset of EECS course enrollments over the past decade to develop a type of tool different from those MIT students typically use, such as CourseRoad.
Miller, who regularly attends the UAG meetings in his role as the education officer for the college’s cross-cutting initiative, Common Ground for Computing Education, comments, “The really cool idea here is to help students find paths that were taken by other people who are like them — not just interested in computer science, but maybe also in biology, or music, or economics, or neuroscience. It’s very much in the spirit of the College of Computing — applying data-driven computational methods, in support of students with wide-ranging computational interests.”
Opening the NerdXing pilot, Schneider gave a demo. She explains that a computer science (CS) major, after selecting that major and a class of interest, can expand a huge graph presenting all the possible courses that CS peers have taken over the past decade.
She clicked on class 18.404 (Theory of Computation) as the starting class of interest, which led to class 6.7900 (Machine Learning), and then unexpectedly to 21M.302 (Harmony and Counterpoint II), an advanced music class.
“You start to see aggregate statistics that tell you how many students took each course, and you can further pare it down to see the most popular courses in CS or follow lines of red dots between courses to see the typical sequence of classes taken.”
By getting granular on the graph, users begin to see classes that they have probably never heard anyone talking about in their program. “I think that one of the reasons you come to MIT is to be able to take cool stuff exactly like this,” says Schneider.
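The data structure underneath a tool like this is easy to sketch. Assuming a simple list of (student, term, course) enrollment records, invented here and far smaller than the decade of EECS data the subgroup used, a follow-on-course graph can be built from a pair of dictionaries:

```python
from collections import Counter, defaultdict

# Hypothetical enrollment records: (student, term, course)
records = [
    ("s1", 1, "18.404"), ("s1", 2, "6.7900"), ("s1", 3, "21M.302"),
    ("s2", 1, "18.404"), ("s2", 2, "6.7900"),
    ("s3", 1, "18.404"), ("s3", 2, "6.1010"),
]

# Group each student's courses in term order
by_student = defaultdict(list)
for student, term, course in sorted(records):
    by_student[student].append(course)

# Edge (a -> b) counts students who took b in the term after a
transitions = defaultdict(Counter)
for courses in by_student.values():
    for earlier, later in zip(courses, courses[1:]):
        transitions[earlier][later] += 1

# Most common follow-ons after a starting class of interest
print(transitions["18.404"].most_common())  # [('6.7900', 2), ('6.1010', 1)]
print(transitions["6.7900"].most_common())  # [('21M.302', 1)]
```

Aggregating edge counts like these over thousands of students is what lets a visualization surface both the popular sequences and the unexpected detours.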
The tool aims to show students how they can choose classes that go far beyond just filling degree requirements. It’s just one example of how UAG is empowering students to strengthen the college and the experiences it offers them.
“We are MIT students. We have the skills to build solutions,” Schneider says. “This group of people not only brings up ways in which things could be better, but we take it into our own hands to fix things.”
Closing in on superconducting semiconductors
Plasma Science and Fusion Center researchers created a superconducting circuit that could one day replace semiconductor components in quantum and high-performance computing systems.
In 2023, data centers, which are essential for processing large quantities of information, accounted for about 4.4 percent (176 terawatt-hours) of total energy consumption in the United States. Of that 176 TWh, approximately 100 TWh (57 percent) was used by CPU and GPU equipment. Energy requirements have escalated substantially in the past decade and will only continue to grow, making the development of energy-efficient computing crucial.
Superconducting electronics have arisen as a promising alternative for classical and quantum computing, although their full exploitation for high-end computing requires a dramatic reduction in the amount of wiring linking ambient temperature electronics and low-temperature superconducting circuits. To make systems that are both larger and more streamlined, replacing commonplace components such as semiconductors with superconducting versions could be of immense value. It’s a challenge that has captivated MIT Plasma Science and Fusion Center senior research scientist Jagadeesh Moodera and his colleagues, who described a significant breakthrough in a recent Nature Electronics paper, “Efficient superconducting diodes and rectifiers for quantum circuitry.”
Moodera was working on a stubborn problem. One of the critical long-standing requirements is the need for the efficient conversion of AC currents into DC currents on a chip while operating at the extremely cold cryogenic temperatures required for superconductors to work efficiently. For example, in superconducting “energy-efficient rapid single flux quantum” (ERSFQ) circuits, the AC-to-DC issue is limiting ERSFQ scalability and preventing their use in larger circuits with higher complexities. To respond to this need, Moodera and his team created superconducting diode (SD)-based superconducting rectifiers — devices that can convert AC to DC on the same chip. These rectifiers would allow for the efficient delivery of the DC current necessary to operate superconducting classical and quantum processors.
Quantum computer circuits can only operate at temperatures close to 0 kelvins (absolute zero), and the way power is supplied must be carefully controlled to limit the effects of interference introduced by too much heat or electromagnetic noise. Most unwanted noise and heat come from the wires connecting cold quantum chips to room-temperature electronics. Instead, using superconducting rectifiers to convert AC currents into DC within a cryogenic environment reduces the number of wires, cutting down on heat and noise and enabling larger, more stable quantum systems.
In a 2023 experiment, Moodera and his co-authors developed SDs that are made of very thin layers of superconducting material that display nonreciprocal (or unidirectional) flow of current and could be the superconducting counterpart to standard semiconductors. Even though SDs have garnered significant attention, especially since 2020, up until this point the research has focused only on individual SDs for proof of concept. The group’s 2023 paper outlined how they created and refined a method by which SDs could be scaled for broader application.
Now, by building a diode bridge circuit, they demonstrated the successful integration of four SDs and realized AC-to-DC rectification at cryogenic temperatures.
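In circuit terms, four diodes arranged as a full-wave bridge steer current so the load always sees the same polarity. A minimal numerical sketch (idealized diodes and invented drive values, not the device physics reported in the paper) captures the input-output behavior:

```python
import numpy as np

def bridge_rectifier(v_in):
    # Ideal full-wave bridge: on each half-cycle a different pair of the
    # four diodes conducts, so the load sees |v_in|. The superconducting
    # diodes are idealized here as having no forward voltage drop.
    return np.abs(v_in)

t = np.linspace(0.0, 2e-9, 2001)            # two periods of a 1 GHz drive
v_ac = 1e-3 * np.sin(2 * np.pi * 1e9 * t)   # 1 mV AC input
v_dc = bridge_rectifier(v_ac)

# Ideal full-wave average is (2/pi) * peak, about 0.637 mV before filtering
print(f"mean rectified level: {v_dc.mean() * 1e3:.3f} mV")
```

In a physical circuit, the rectified waveform would still be low-pass filtered to approach a flat DC level; the point of the sketch is only the AC-to-DC conversion that the four-diode bridge performs.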
The new approach described in their recent Nature Electronics paper will significantly cut down on the thermal and electromagnetic noise traveling from ambient into cryogenic circuitry, enabling cleaner operation. The SDs could also potentially serve as isolators/circulators, assisting in insulating qubit signals from external influence. The successful assimilation of multiple SDs into the first integrated SD circuit represents a key step toward making superconducting computing a commercial reality.
“Our work opens the door to the arrival of highly energy-efficient, practical superconductivity-based supercomputers in the next few years,” says Moodera. “Moreover, we expect our research to enhance qubit stability while boosting the quantum computing program, bringing its realization closer.” Given the multiple beneficial roles these components could play, Moodera and his team are already working toward the integration of such devices into actual superconducting logic circuits, including in dark matter detection circuits that are essential to the operation of experiments at CERN and LUX-ZEPLIN at the Berkeley National Lab.
This work was partially funded by MIT Lincoln Laboratory’s Advanced Concepts Committee, the U.S. National Science Foundation, U.S. Army Research Office, and U.S. Air Force Office of Scientific Research.
This work was carried out, in part, through the use of MIT.nano’s facilities.
After more than a decade of successes, ESI’s work will spread out across the Institute
John Fernandez will step down as head of the Environmental Solutions Initiative, as its components will become part of the Climate Project and other entities.
MIT’s Environmental Solutions Initiative (ESI), a pioneering cross-disciplinary body that helped give a major boost to sustainability and solutions to climate change at MIT, will close as a separate entity at the end of June. But that’s far from the end for its wide-ranging work, which will go forward under different auspices. Many of its key functions will become part of MIT’s recently launched Climate Project. John Fernandez, head of ESI for nearly a decade, will return to the School of Architecture and Planning, where some of ESI’s important work will continue as part of a new interdisciplinary lab.
When the ideas that led to the founding of MIT’s Environmental Solutions Initiative first began to be discussed, its founders recall, there was already a great deal of work happening at MIT relating to climate change and sustainability. As Professor John Sterman of the MIT Sloan School of Management puts it, “there was a lot going on, but it wasn’t integrated. So the whole added up to less than the sum of its parts.”
ESI was founded in 2014 to help fill that coordinating role, and in the years since it has accomplished a wide range of significant milestones in research, education, and communication about sustainable solutions in a wide range of areas. Its founding director, Professor Susan Solomon, helmed it for its first year, and then handed the leadership to Fernandez, who has led it since 2015.
“There wasn’t much of an ecosystem [on sustainability] back then,” Solomon recalls. But with the help of ESI and some other entities, that ecosystem has blossomed. She says that Fernandez “has nurtured some incredible things under ESI,” including work on nature-based climate solutions, and also other areas such as sustainable mining, and reduction of plastics in the environment.
Desiree Plata, director of MIT’s Climate and Sustainability Consortium and associate professor of civil and environmental engineering, says that one key achievement of the initiative has been in “communication with the external world, to help take really complex systems and topics and put them in not just plain-speak, but something that’s scientifically rigorous and defensible, for the outside world to consume.”
In particular, ESI has created three very successful products, which continue under the auspices of the Climate Project. These include the popular TIL Climate Podcast, the Webby Award-winning Climate Portal website, and the online climate primer developed with Professor Kerry Emanuel. “These are some of the most frequented websites at MIT,” Plata says, and “the impact of this work on the global knowledge base cannot be overstated.”
Fernandez says that ESI has played a significant part in helping to catalyze what has become “a rich institutional landscape of work in sustainability and climate change” at MIT. He emphasizes three major areas where he feels the ESI has been able to have the most impact: engaging the MIT community, initiating and stewarding critical environmental research, and catalyzing efforts to promote sustainability as fundamental to the mission of a research university.
Engagement of the MIT community, he says, began with two programs: a research seed grant program and the creation of MIT’s undergraduate minor in environment and sustainability, launched in 2017.
ESI also created a Rapid Response Group, which gave students a chance to work on real-world projects with external partners, including government agencies, community groups, nongovernmental organizations, and businesses. In the process, they often learned why dealing with environmental challenges in the real world takes so much longer than they might have thought, he says, and that a challenge that “seemed fairly straightforward at the outset turned out to be more complex and nuanced than expected.”
The second major area, initiating and stewarding environmental research, grew into a set of six specific program areas: natural climate solutions, mining, cities and climate change, plastics and the environment, arts and climate, and climate justice.
These efforts included collaborations with a Nobel Peace Prize laureate, three successive presidential administrations from Colombia, and members of communities affected by climate change, including coal miners, indigenous groups, various cities, companies, the U.N., many agencies — and the popular musical group Coldplay, which has pledged to work toward climate neutrality for its performances. “It was the role that the ESI played as a host and steward of these research programs that may serve as a key element of our legacy,” Fernandez says.
The third broad area, he says, “is the idea that the ESI as an entity at MIT would catalyze this movement of a research university toward sustainability as a core priority.” While MIT was founded to be an academic partner to the industrialization of the world, “aren’t we in a different world now? The kind of massive infrastructure planning and investment and construction that needs to happen to decarbonize the energy system is maybe the largest industrialization effort ever undertaken. Even more than in the recent past, the set of priorities driving this have to do with sustainable development.”
Overall, Fernandez says, “we did everything we could to infuse the Institute in its teaching and research activities with the idea that the world is now in dire need of sustainable solutions.”
Fernandez “has nurtured some incredible things under ESI,” Solomon says. “It’s been a very strong and useful program, both for education and research.” But it is appropriate at this time to distribute its projects to other venues, she says. “We do now have a major thrust in the Climate Project, and you don’t want to have redundancies and overlaps between the two.”
Fernandez says “one of the missions of the Climate Project is really acting to coalesce and aggregate lots of work around MIT.” Now, with the Climate Project itself, along with the Climate Policy Center and the Center for Sustainability Science and Strategy, it makes more sense for ESI’s climate-related projects to be integrated into these new entities, and other projects that are less directly connected to climate to take their places in various appropriate departments or labs, he says.
“We did enough with ESI that we made it possible for these other centers to really flourish,” he says. “And in that sense, we played our role.”
As of June 1, Fernandez has returned to his role as professor of architecture and urbanism and building technology in the School of Architecture and Planning, where he directs the Urban Metabolism Group. He will also be starting up a new group called Environment ResearchAction (ERA) to continue ESI work in cities, nature, and artificial intelligence.
Decarbonizing steel is as tough as steel
But a new study shows how advanced steelmaking technologies could substantially reduce carbon emissions.
The long-term aspirational goal of the Paris Agreement on climate change is to cap global warming at 1.5 degrees Celsius above preindustrial levels, and thereby reduce the frequency and severity of floods, droughts, wildfires, and other extreme weather events. Achieving that goal will require a massive reduction in global carbon dioxide (CO2) emissions across all economic sectors. A major roadblock, however, could be the industrial sector, which accounts for roughly 25 percent of global energy- and process-related CO2 emissions — particularly within the iron and steel sector, industry’s largest emitter of CO2.
Iron and steel production now relies heavily on fossil fuels (coal or natural gas) for heat, for converting iron ore to iron, and for making steel strong. Steelmaking could be decarbonized by a combination of several methods, including carbon capture technology, the use of low- or zero-carbon fuels, and increased use of recycled steel. Now a new study in the Journal of Cleaner Production systematically explores the viability of different iron-and-steel decarbonization strategies.
Today’s strategy menu includes improving energy efficiency, switching fuels and technologies, using more scrap steel, and reducing demand. Using the MIT Economic Projection and Policy Analysis model, a multi-sector, multi-region model of the world economy, researchers at MIT, the University of Illinois at Urbana-Champaign, and ExxonMobil Technology and Engineering Co. evaluate the decarbonization potential of replacing coal-based production processes with electric arc furnaces (EAF), along with either scrap steel or “direct reduced iron” (DRI), which is fueled by natural gas with carbon capture and storage (NG CCS DRI-EAF) or by hydrogen (H2 DRI-EAF).
Under a global climate mitigation scenario aligned with the 1.5 C climate goal, these advanced steelmaking technologies could result in deep decarbonization of the iron and steel sector by 2050, as long as technology costs are low enough to enable large-scale deployment. Higher costs would favor the replacement of coal with electricity and natural gas, greater use of scrap steel, and reduced demand, resulting in a more-than-50-percent reduction in emissions relative to current levels. Lower technology costs would enable massive deployment of NG CCS DRI-EAF or H2 DRI-EAF, reducing emissions by up to 75 percent.
Even without adoption of these advanced technologies, the iron-and-steel sector could significantly reduce its CO2 emissions intensity (how much CO2 is released per unit of production) with existing steelmaking technologies, primarily by replacing coal with gas and electricity (especially if it is generated by renewable energy sources), using more scrap steel, and implementing energy efficiency measures.
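The arithmetic behind such scenario comparisons is straightforward to sketch. The intensity figures and route shares below are illustrative placeholders rather than the study’s inputs, but they show how a production-weighted average intensity falls as cleaner routes gain share:

```python
# Illustrative CO2 intensities by production route, in t CO2 per t steel
# (placeholder values, not the study's inputs)
INTENSITY = {
    "BF-BOF (coal)": 2.0,
    "NG DRI-EAF": 1.0,
    "scrap EAF": 0.4,
    "H2 DRI-EAF": 0.1,
}

def sector_intensity(shares):
    # Production-weighted average CO2 intensity for a mix of routes
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(INTENSITY[route] * s for route, s in shares.items())

today = {"BF-BOF (coal)": 0.70, "NG DRI-EAF": 0.05,
         "scrap EAF": 0.25, "H2 DRI-EAF": 0.00}
deep = {"BF-BOF (coal)": 0.10, "NG DRI-EAF": 0.20,
        "scrap EAF": 0.40, "H2 DRI-EAF": 0.30}

base, future = sector_intensity(today), sector_intensity(deep)
print(f"baseline: {base:.2f} t/t, deep-decarbonization mix: {future:.2f} t/t")
print(f"intensity reduction: {1 - future / base:.0%}")
```

Changing the assumed costs effectively changes which route mix is reachable, which is why the study’s emissions outcomes hinge on technology costs.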
“The iron and steel industry needs to combine several strategies to substantially reduce its emissions by mid-century, including an increase in recycling, but investing in cost reductions in hydrogen pathways and carbon capture and sequestration will enable even deeper emissions mitigation in the sector,” says study supervising author Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy (MIT CS3) and a senior research scientist at the MIT Energy Initiative (MITEI).
This study was supported by MIT CS3 and ExxonMobil through its membership in MITEI.
Bringing meaning into technology deployment
The MIT Ethics of Computing Research Symposium showcases projects at the intersection of technology, ethics, and social responsibility.
In 15 TED Talk-style presentations, MIT faculty recently discussed their pioneering research that incorporates social, ethical, and technical considerations and expertise, each supported by seed grants established by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing. The call for proposals last summer was met with nearly 70 applications. A committee with representatives from every MIT school and the college convened to select the winning projects that received up to $100,000 in funding.
“SERC is committed to driving progress at the intersection of computing, ethics, and society. The seed grants are designed to ignite bold, creative thinking around the complex challenges and possibilities in this space,” said Nikos Trichakis, co-associate dean of SERC and the J.C. Penney Professor of Management. “With the MIT Ethics of Computing Research Symposium, we felt it important to not just showcase the breadth and depth of the research that’s shaping the future of ethical computing, but to invite the community to be part of the conversation as well.”
“What you’re seeing here is kind of a collective community judgment about the most exciting work when it comes to research, in the social and ethical responsibilities of computing being done at MIT,” said Caspar Hare, co-associate dean of SERC and professor of philosophy.
The full-day symposium on May 1 was organized around four key themes: responsible health-care technology, artificial intelligence governance and ethics, technology in society and civic engagement, and digital inclusion and social justice. Speakers delivered thought-provoking presentations on a broad range of topics, including algorithmic bias, data privacy, the social implications of artificial intelligence, and the evolving relationship between humans and machines. The event also featured a poster session, where student researchers showcased projects they worked on throughout the year as SERC Scholars.
Highlights from the MIT Ethics of Computing Research Symposium in each of the theme areas, many of which are available to watch on YouTube, included:
Making the kidney transplant system fairer
Policies regulating the organ transplant system in the United States are made by a national committee; they often take more than six months to create and then years to implement, a timeline that many on the waiting list simply can’t survive.
Dimitris Bertsimas, vice provost for open learning, associate dean of business analytics, and Boeing Professor of Operations Research, shared his latest work in analytics for fair and efficient kidney transplant allocation. Bertsimas’ new algorithm examines criteria like geographic location, mortality, and age in just 14 seconds, a monumental change from the usual six hours.
Bertsimas and his team work closely with the United Network for Organ Sharing (UNOS), a nonprofit that manages most of the national donation and transplant system through a contract with the federal government. During his presentation, Bertsimas shared a video from James Alcorn, senior policy strategist at UNOS, who offered this poignant summary of the impact the new algorithm has:
“This optimization radically changes the turnaround time for evaluating these different simulations of policy scenarios. It used to take us a couple months to look at a handful of different policy scenarios, and now it takes a matter of minutes to look at thousands and thousands of scenarios. We are able to make these changes much more rapidly, which ultimately means that we can improve the system for transplant candidates much more rapidly.”
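For a flavor of what optimization-based allocation looks like in code, here is a toy matching problem solved with SciPy’s assignment solver. The benefit scores are random placeholders, and the real system’s objectives, constraints, and scale are far richer than this sketch:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy benefit matrix: rows are donor kidneys, columns are candidates.
# Each entry stands in for a composite score (e.g., waiting time,
# distance, predicted survival benefit); values here are random.
rng = np.random.default_rng(7)
benefit = rng.uniform(0.0, 1.0, size=(4, 6))

# The solver minimizes cost, so negate the matrix to maximize benefit
kidneys, candidates = linear_sum_assignment(-benefit)
for k, c in zip(kidneys, candidates):
    print(f"kidney {k} -> candidate {c} (score {benefit[k, c]:.2f})")
print(f"total benefit: {benefit[kidneys, candidates].sum():.2f}")
```

Real policy simulation layers many clinical and fairness constraints on top of a core like this, which is where the speedups described above matter.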
The ethics of AI-generated social media content
As AI-generated content becomes more prevalent across social media platforms, what are the implications of disclosing (or not disclosing) that any part of a post was created by AI? Adam Berinsky, Mitsui Professor of Political Science, and Gabrielle Péloquin-Skulski, PhD student in the Department of Political Science, explored this question in a session that examined recent studies on the impact of various labels on AI-generated content.
In a series of surveys and experiments affixing labels to AI-generated posts, the researchers looked at how specific words and descriptions impacted users’ perception of deception, their intent to engage with the post, and ultimately their judgment of whether the post was true or false.
“The big takeaway from our initial set of findings is that one size doesn’t fit all,” said Péloquin-Skulski. “We found that labeling AI-generated images with a process-oriented label reduces belief in both false and true posts. This is quite problematic, as labeling intends to reduce people’s belief in false information, not necessarily true information. This suggests that labels combining both process and veracity might be better at countering AI-generated misinformation.”
Using AI to increase civil discourse online
“Our research aims to address how people increasingly want to have a say in the organizations and communities they belong to,” Lily Tsai explained in a session on experiments in generative AI and the future of digital democracy. Tsai, Ford Professor of Political Science and director of the MIT Governance Lab, is conducting ongoing research with Alex Pentland, Toshiba Professor of Media Arts and Sciences, and a larger team.
Online deliberative platforms have recently been rising in popularity across the United States in both public- and private-sector settings. Tsai explained that with technology, it’s now possible for everyone to have a say — but doing so can be overwhelming, or even feel unsafe. First, too much information is available, and secondly, online discourse has become increasingly “uncivil.”
The group focuses on “how we can build on existing technologies and improve them with rigorous, interdisciplinary research, and how we can innovate by integrating generative AI to enhance the benefits of online spaces for deliberation.” They have developed their own AI-integrated platform for deliberative democracy, DELiberation.io, and rolled out four initial modules. All studies have been in the lab so far, but they are also working on a set of forthcoming field studies, the first of which will be in partnership with the government of the District of Columbia.
Tsai told the audience, “If you take nothing else from this presentation, I hope that you’ll take away this — that we should all be demanding that technologies that are being developed are assessed to see if they have positive downstream outcomes, rather than just focusing on maximizing the number of users.”
A public think tank that considers all aspects of AI
When Catherine D’Ignazio, associate professor of urban science and planning, and Nikko Stevens, postdoc at the Data + Feminism Lab at MIT, initially submitted their funding proposal, they weren’t intending to develop a think tank, but a framework — one that articulated how artificial intelligence and machine learning work could integrate community methods and utilize participatory design.
In the end, they created Liberatory AI, which they describe as a “rolling public think tank about all aspects of AI.” D’Ignazio and Stevens gathered 25 researchers from a diverse array of institutions and disciplines who authored more than 20 position papers examining the most current academic literature on AI systems and engagement. They intentionally grouped the papers into three distinct themes: the corporate AI landscape, dead ends, and ways forward.
“Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we’ve come together to contest the status quo, think bigger-picture, and reorganize resources in this system in hopes of a larger societal transformation,” said D’Ignazio.
How the brain solves complicated problems
Study shows humans flexibly deploy different reasoning strategies to tackle challenging mental tasks — offering insights for building machines that think more like us.
The human brain is very good at solving complicated problems. One reason is that humans can break problems apart into manageable subtasks that are easy to solve one at a time.
This allows us to complete a daily task like going out for coffee by breaking it into steps: getting out of our office building, navigating to the coffee shop, and once there, obtaining the coffee. This strategy helps us to handle obstacles easily. For example, if the elevator is broken, we can revise how we get out of the building without changing the other steps.
While there is a great deal of behavioral evidence demonstrating humans’ skill at these complicated tasks, it has been difficult to devise experimental scenarios that allow precise characterization of the computational strategies we use to solve problems.
In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — in this case, predicting how a ball will travel through a maze when the ball is hidden from view. The human brain cannot perform this task perfectly because it is impossible to track all of the possible trajectories in parallel, but the researchers found that people can perform reasonably well by flexibly adopting two strategies known as hierarchical reasoning and counterfactual reasoning.
The researchers were also able to determine the circumstances under which people choose each of those strategies.
“What humans are capable of doing is to break down the maze into subsections, and then solve each step using relatively simple algorithms. Effectively, when we don’t have the means to solve a complex problem, we manage by using simpler heuristics that get the job done,” says Mehrdad Jazayeri, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, an investigator at the Howard Hughes Medical Institute, and the senior author of the study.
Mahdi Ramadan PhD ’24 and graduate student Cheng Tang are the lead authors of the paper, which appears today in Nature Human Behaviour. Nicholas Watters PhD ’25 is also a co-author.
Rational strategies
When humans perform simple tasks that have a clear correct answer, such as categorizing objects, they perform extremely well. When tasks become more complex, such as planning a trip to your favorite cafe, there may no longer be one clearly superior answer. And, at each step, there are many things that could go wrong. In these cases, humans are very good at working out a solution that will get the task done, even though it may not be the optimal solution.
Those solutions often involve problem-solving shortcuts, or heuristics. Two prominent heuristics humans commonly rely on are hierarchical and counterfactual reasoning. Hierarchical reasoning is the process of breaking down a problem into layers, starting from the general and proceeding toward specifics. Counterfactual reasoning involves imagining what would have happened if you had made a different choice. While these strategies are well-known, scientists don’t know much about how the brain decides which one to use in a given situation.
“This is really a big question in cognitive science: How do we problem-solve in a suboptimal way, by coming up with clever heuristics that we chain together in a way that ends up getting us closer and closer until we solve the problem?” Jazayeri says.
To overcome this, Jazayeri and his colleagues devised a task that is just complex enough to require these strategies, yet simple enough that the outcomes and the calculations that go into them can be measured.
The task requires participants to predict the path of a ball as it moves along one of four possible trajectories through a maze. Once the ball enters the maze, people cannot see which path it travels. At two junctions in the maze, they hear an auditory cue when the ball reaches that point. Predicting the ball’s path is a task that is impossible for humans to solve with perfect accuracy.
“It requires four parallel simulations in your mind, and no human can do that. It’s analogous to having four conversations at a time,” Jazayeri says. “The task allows us to tap into this set of algorithms that the humans use, because you just can’t solve it optimally.”
The researchers recruited about 150 human volunteers to participate in the study. Before each subject began the ball-tracking task, the researchers evaluated how accurately they could estimate timespans of several hundred milliseconds, about the length of time it takes the ball to travel along one arm of the maze.
For each participant, the researchers created computational models that could predict the patterns of errors that would be seen for that participant (based on their timing skill) if they were running parallel simulations, using hierarchical reasoning alone, counterfactual reasoning alone, or combinations of the two reasoning strategies.
The researchers compared the subjects’ performance with the models’ predictions and found that for every subject, their performance was most closely associated with a model that used hierarchical reasoning but sometimes switched to counterfactual reasoning.
That suggests that instead of tracking all the possible paths that the ball could take, people broke up the task. First, they picked the direction (left or right) in which they thought the ball turned at the first junction, and they continued to track the ball as it headed for the next turn. If the timing of the next sound they heard wasn’t compatible with the path they had chosen, they would go back and revise their first prediction — but only some of the time.
Switching back to the other side, which represents a shift to counterfactual reasoning, requires people to review their memory of the tones that they heard. However, it turns out that these memories are not always reliable, and the researchers found that people decided whether to go back or not based on how good they believed their memory to be.
“People rely on counterfactuals to the degree that it’s helpful,” Jazayeri says. “People who take a big performance loss when they do counterfactuals avoid doing them. But if you are someone who’s really good at retrieving information from the recent past, you may go back to the other side.”
Human limitations
To further validate their results, the researchers built a neural network and trained it to complete the task. A network trained on this task tracks the ball’s path accurately and makes the correct prediction every time, unless the researchers impose limitations on its performance.
When the researchers added cognitive limitations similar to those faced by humans, the model altered its strategies. When they eliminated its ability to follow all possible trajectories, it began to employ hierarchical and counterfactual strategies as humans do. When they reduced its memory recall, it switched to counterfactual reasoning only when it estimated that its recall would be good enough to get the right answer — just as humans do.
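A caricature of that memory-dependent switching rule, with invented names, numbers, and logic bearing no connection to the paper's actual network, might look like this in Python:

```python
import numpy as np

rng = np.random.default_rng(1)

def maybe_revise(stored_interval, memory_noise_sd, threshold=0.15):
    """Revisit the earlier choice only if recall is trusted.

    The agent recalls the earlier tone interval with noise; it attempts
    a counterfactual revision only when its own noise level (which it is
    assumed to know) is below a usefulness threshold.
    """
    recalled = stored_interval + rng.normal(0.0, memory_noise_sd)
    if memory_noise_sd < threshold:
        return recalled   # counterfactual: re-evaluate the first choice
    return None           # stick with the current hierarchical commitment

print(maybe_revise(0.4, 0.05))  # reliable recall -> revises
print(maybe_revise(0.4, 0.30))  # noisy recall -> avoids counterfactuals
```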
“What we found is that networks mimic human behavior when we impose on them those computational constraints that we found in human behavior,” Jazayeri says. “This is really saying that humans are acting rationally under the constraints that they have to function under.”
By slightly varying the amount of memory impairment programmed into the models, the researchers also saw hints that the switching of strategies appears to happen gradually, rather than at a distinct cut-off point. They are now performing further studies to try to determine what is happening in the brain as these shifts in strategy occur.
The research was funded by a Lisa K. Yang ICoN Fellowship, a Friends of the McGovern Institute Student Fellowship, a National Science Foundation Graduate Research Fellowship, the Simons Foundation, the Howard Hughes Medical Institute, and the McGovern Institute.
“Each of us holds a piece of the solution”
Campus gathers with Vice President for Energy and Climate Evelyn Wang to explore the Climate Project at MIT, make connections, and exchange ideas.
MIT has an unparalleled history of bringing together interdisciplinary teams to solve pressing problems — think of the development of radar during World War II, or the leadership of the international coalition that cracked the code of the human genome — but the challenge of climate change could demand a scale of collaboration unlike any that’s come before at MIT.
“Solving climate change is not just about new technologies or better models. It’s about forging new partnerships across campus and beyond — between scientists and economists, between architects and data scientists, between policymakers and physicists, between anthropologists and engineers, and more,” MIT Vice President for Energy and Climate Evelyn Wang told an energetic crowd of faculty, students, and staff on May 6. “Each of us holds a piece of the solution — but only together can we see the whole.”
Undeterred by heavy rain, approximately 300 campus community members filled the atrium in the Tina and Hamid Moghadam Building (Building 55) for a spring gathering hosted by Wang and the Climate Project at MIT. The initiative seeks to direct the full strength of MIT to address climate change, which Wang described as one of the defining challenges of this moment in history — and one of its greatest opportunities.
“It calls on us to rethink how we power our world, how we build, how we live — and how we work together,” Wang said. “And there is no better place than MIT to lead this kind of bold, integrated effort. Our culture of curiosity, rigor, and relentless experimentation makes us uniquely suited to cross boundaries — to break down silos and build something new.”
The Climate Project is organized around six missions, thematic areas in which MIT aims to make significant impact, ranging from decarbonizing industry to new policy approaches to designing resilient cities. The faculty leaders of these missions posed challenges to the audience before circulating through the crowd to share their perspectives and to discuss community questions and ideas.
Wang and the Climate Project team were joined by a number of research groups, startups, and MIT offices conducting relevant work today on issues related to energy and climate. For example, the MIT Office of Sustainability showcased efforts to use the MIT campus as a living laboratory; MIT spinouts such as Forma Systems, which is developing high-performance, low-carbon building systems, and Addis Energy, which envisions using the earth as a reactor to produce clean ammonia, presented their technologies; and visitors learned about current projects in MIT labs, including DebunkBot, an artificial intelligence-powered chatbot that can persuade people to shift their attitudes about conspiracies, developed by David Rand, the Erwin H. Schell Professor at the MIT Sloan School of Management.
Benedetto Marelli, an associate professor in the Department of Civil and Environmental Engineering who leads the Wild Cards Mission, said the energy and enthusiasm that filled the room was inspiring — but that the individual conversations were equally valuable.
“I was especially pleased to see so many students come out. I also spoke with other faculty, talked to staff from across the Institute, and met representatives of external companies interested in collaborating with MIT,” Marelli said. “You could see connections being made all around the room, which is exactly what we need as we build momentum for the Climate Project.”
Universal nanosensor unlocks the secrets to plant growth
Researchers from SMART DiSTAP developed the world’s first near-infrared fluorescent nanosensor capable of monitoring a plant’s primary growth hormone in real time and without harming the plant.
Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group within the Singapore-MIT Alliance for Research and Technology (SMART) have developed the world’s first near-infrared fluorescent nanosensor capable of real-time, nondestructive, and species-agnostic detection of indole-3-acetic acid (IAA) — the primary bioactive auxin hormone that controls the way plants develop, grow, and respond to stress.
Auxins, particularly IAA, play a central role in regulating key plant processes such as cell division, elongation, root and shoot development, and response to environmental cues like light, heat, and drought. External factors like light affect how auxin moves within the plant, temperature influences how much is produced, and a lack of water can disrupt hormone balance. When plants cannot effectively regulate auxins, they may not grow well, adapt to changing conditions, or produce as much food.
Existing IAA detection methods, such as liquid chromatography, require taking samples from the plant, which harms or removes part of it. Conventional methods also measure the effects of IAA rather than detecting it directly, and cannot be used universally across different plant types. In addition, since IAA is a small molecule that cannot easily be tracked in real time, biosensors that contain fluorescent proteins need to be inserted into the plant’s genome to measure auxin, making it emit a fluorescent signal for live imaging.
SMART’s newly developed nanosensor enables direct, real-time tracking of auxin levels in living plants with high precision. The sensor uses near-infrared imaging to monitor IAA fluctuations noninvasively across tissues such as leaves, roots, and cotyledons, and it bypasses chlorophyll interference to ensure highly reliable readings even in densely pigmented tissues. The technology does not require genetic modification and can be integrated with existing agricultural systems — offering a scalable precision tool to advance both crop optimization and fundamental plant physiology research.
By providing real-time, precise measurements of auxin, the sensor empowers farmers with earlier and more accurate insights into plant health. With these insights and comprehensive data, farmers can make smarter, data-driven decisions on irrigation, nutrient delivery, and pruning, tailored to the plant’s actual needs — ultimately improving crop growth, boosting stress resilience, and increasing yields.
“We need new technologies to address the problems of food insecurity and climate change worldwide. Auxin is a central growth signal within living plants, and this work gives us a way to tap it to give new information to farmers and researchers,” says Michael Strano, co-lead principal investigator at DiSTAP, Carbon P. Dubbs Professor of Chemical Engineering at MIT, and co-corresponding author of the paper. “The applications are many, including early detection of plant stress, allowing for timely interventions to safeguard crops. For urban and indoor farms, where light, water, and nutrients are already tightly controlled, this sensor can be a valuable tool in fine-tuning growth conditions with even greater precision to optimize yield and sustainability.”
The research team documented the nanosensor’s development in a paper titled, “A Near-Infrared Fluorescent Nanosensor for Direct and Real-Time Measurement of Indole-3-Acetic Acid in Plants,” published in the journal ACS Nano. The sensor comprises single-walled carbon nanotubes wrapped in a specially designed polymer, which enables it to detect IAA through changes in near-infrared fluorescence intensity. Successfully tested across multiple species, including Arabidopsis, Nicotiana benthamiana, choy sum, and spinach, the nanosensor can map IAA responses under various environmental conditions such as shade, low light, and heat stress.
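Intensity-based nanosensors of this general kind are commonly calibrated by relating the fractional change in fluorescence to analyte concentration with a saturating binding curve. The Python sketch below illustrates only that general idea; the functional form and all parameter values are assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative calibration for an intensity-based nanosensor.
# r_max (maximum fractional response) and K_d (half-saturation
# concentration, in micromolar) are invented example values.

def response(concentration_uM, r_max=0.6, K_d=5.0):
    """Langmuir-type response: fractional intensity change vs. [IAA]."""
    return r_max * concentration_uM / (K_d + concentration_uM)

def estimate_concentration(delta_I_over_I0, r_max=0.6, K_d=5.0):
    """Invert the calibration curve to estimate concentration."""
    r = np.clip(delta_I_over_I0, 0.0, r_max * 0.999)
    return K_d * r / (r_max - r)

measured = 0.25  # measured fractional change in near-infrared intensity
print(f"Estimated [IAA]: {estimate_concentration(measured):.2f} uM")
```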
“This sensor builds on DiSTAP’s ongoing work in nanotechnology and the CoPhMoRe technique, which has already been used to develop other sensors that can detect important plant compounds such as gibberellins and hydrogen peroxide. By adapting this approach for IAA, we’re adding to our inventory of novel, precise, and nondestructive tools for monitoring plant health. Eventually, these sensors can be multiplexed, or combined, to monitor a spectrum of plant growth markers for more complete insights into plant physiology,” says Duc Thinh Khong, research scientist at DiSTAP and co-first author of the paper.
“This small but mighty nanosensor tackles a long-standing challenge in agriculture: the need for a universal, real-time, and noninvasive tool to monitor plant health across various species. Our collaborative achievement not only empowers researchers and farmers to optimize growth conditions and improve crop yield and resilience, but also advances our scientific understanding of hormone pathways and plant-environment interactions,” says In-Cheol Jang, senior principal investigator at Temasek Life Sciences Laboratory (TLL), principal investigator at DiSTAP, and co-corresponding author of the paper.
Looking ahead, the research team is looking to combine multiple sensing platforms to simultaneously detect IAA and its related metabolites to create a comprehensive hormone signaling profile, offering deeper insights into plant stress responses and enhancing precision agriculture. They are also working on using microneedles for highly localized, tissue-specific sensing, and collaborating with industrial urban farming partners to translate the technology into practical, field-ready solutions.
The research was carried out by SMART and supported by the National Research Foundation of Singapore under its Campus for Research Excellence And Technological Enterprise program. The universal nanosensor was developed in collaboration with TLL and MIT.
How the brain distinguishes between ambiguous hypotheses
Neural activity patterns can encode competing hypotheses about which landmark will lead to the correct destination.
When navigating a place that we’re only somewhat familiar with, we often rely on unique landmarks to help make our way. However, if we’re looking for an office in a brick building, and there are many brick buildings along our route, we might use a rule like looking for the second building on a street, rather than relying on distinguishing the building itself.
Until that ambiguity is resolved, we must hold in mind that there are multiple possibilities (or hypotheses) for where we are in relation to our destination. In a study of mice, MIT neuroscientists have now discovered that these hypotheses are explicitly represented in the brain by distinct neural activity patterns.
This is the first time that neural activity patterns that encode simultaneous hypotheses have been seen in the brain. The researchers found that these representations, which were observed in the brain’s retrosplenial cortex (RSC), not only encode hypotheses but also could be used by the animals to choose the correct way to go.
“As far as we know, no one has shown in a complex reasoning task that there’s an area in association cortex that holds two hypotheses in mind and then uses one of those hypotheses, once it gets more information, to actually complete the task,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.
Jakob Voigts PhD ’17, a former postdoc in Harnett’s lab and now a group leader at the Howard Hughes Medical Institute Janelia Research Campus, is the lead author of the paper, which appears today in Nature Neuroscience.
Ambiguous landmarks
The RSC receives input from the visual cortex, the hippocampal formation, and the anterior thalamus, which it integrates to help guide navigation.
In a 2020 paper, Harnett’s lab found that the RSC uses both visual and spatial information to encode landmarks used for navigation. In that study, the researchers showed that neurons in the RSC of mice integrate visual information about the surrounding environment with spatial feedback of the mice’s own position along a track, allowing them to learn where to find a reward based on landmarks that they saw.
In their new study, the researchers wanted to delve further into how the RSC uses spatial information and situational context to guide navigational decision-making. To do that, the researchers devised a much more complicated navigational task than those typically used in mouse studies. They set up a large, round arena with 16 small openings, or ports, along the side walls. One of these openings would give the mice a reward when they stuck their nose through it. In the first set of experiments, the researchers trained the mice to go to different reward ports indicated by dots of light on the floor that were only visible when the mice got close to them.
Once the mice learned to perform this relatively simple task, the researchers added a second dot. The two dots were always the same distance from each other and from the center of the arena. But now the mice had to go to the port by the counterclockwise dot to get the reward. Because the dots were identical and only became visible at close distances, the mice could never see both dots at once and could not immediately determine which dot was which.
To solve this task, mice therefore had to remember where they expected a dot to show up, integrating their own body position, the direction they were heading, and the path they took, to figure out which landmark was which. By measuring RSC activity as the mice approached the ambiguous landmarks, the researchers could determine whether the RSC encodes hypotheses about spatial location. The task was carefully designed to require the mice to use the visual landmarks to obtain rewards, instead of other strategies like odor cues or dead reckoning.
“What is important about the behavior in this case is that mice need to remember something and then use that to interpret future input,” says Voigts, who worked on this study while a postdoc in Harnett’s lab. “It’s not just remembering something, but remembering it in such a way that you can act on it.”
The researchers found that as the mice accumulated information about which dot might be which, populations of RSC neurons displayed distinct activity patterns for incomplete information. Each of these patterns appears to correspond to a hypothesis about where the mouse thought it was with respect to the reward.
When the mice got close enough to figure out which dot indicated the reward port, these patterns collapsed into the one representing the correct hypothesis. The findings suggest that these patterns not only passively store hypotheses; they can also be used to compute how to get to the correct location, the researchers say.
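One way to picture what holding two hypotheses and then collapsing onto one could mean computationally is a simple Bayesian update over the two candidate interpretations of the landmarks. The sketch below is purely conceptual; it is not the study's analysis, and the numbers are invented.

```python
import numpy as np

def update(posterior, likelihoods):
    """One Bayesian update over the two 'which-dot-is-which' hypotheses."""
    unnormalized = posterior * likelihoods
    return unnormalized / unnormalized.sum()

belief = np.array([0.5, 0.5])  # ambiguous: both hypotheses held at once
belief = update(belief, np.array([0.55, 0.45]))  # weak cue: still mixed
print(belief)
belief = update(belief, np.array([0.95, 0.05]))  # close-up view: collapse
print(belief)
```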
“We show that RSC has the required information for using this short-term memory to distinguish the ambiguous landmarks. And we show that this type of hypothesis is encoded and processed in a way that allows the RSC to use it to solve the computation,” Voigts says.
Interconnected neurons
When analyzing their initial results, Harnett and Voigts consulted with MIT Professor Ila Fiete, who had run a study about 10 years ago using an artificial neural network to perform a similar navigation task.
That study, previously published on bioRxiv, showed that the neural network displayed activity patterns that were conceptually similar to those seen in the animal studies run by Harnett’s lab. The neurons of the artificial neural network ended up forming highly interconnected low-dimensional networks, like the neurons of the RSC.
“That interconnectivity seems, in ways that we still don’t understand, to be key to how these dynamics emerge and how they’re controlled. And it’s a key feature of how the RSC holds these two hypotheses in mind at the same time,” Harnett says.
In his lab at Janelia, Voigts now plans to investigate how other brain areas involved in navigation, such as the prefrontal cortex, are engaged as mice explore and forage in a more naturalistic way, without being trained on a specific task.
“We’re looking into whether there are general principles by which tasks are learned,” Voigts says. “We have a lot of knowledge in neuroscience about how brains operate once the animal has learned a task, but in comparison we know extremely little about how mice learn tasks or what they choose to learn when given freedom to behave naturally.”
The research was funded, in part, by the National Institutes of Health, a Simons Center for the Social Brain at MIT postdoctoral fellowship, the National Institute of General Medical Sciences, and the Center for Brains, Minds, and Machines at MIT, funded by the National Science Foundation.
Former MIT researchers advance a new model for innovation
Focused research organizations (FROs) undertake large research efforts and have begun to yield scientific advances.
Academic research groups and startups are essential drivers of scientific progress. But some projects, like the Hubble Space Telescope or the Human Genome Project, are too big for any one academic lab or loose consortium. They’re also not immediately profitable enough for industry to take on.
That’s the gap researchers at MIT were trying to fill when they created the concept of focused research organizations, or FROs. They describe a FRO as a new type of entity, often philanthropically funded, that undertakes large research efforts using tightly coordinated teams to create a public good that accelerates scientific progress.
The original idea for focused research organizations came out of talks among researchers, most of whom were working to map the brain in MIT Professor Ed Boyden’s lab. After they began publishing their ideas, however, the researchers realized FROs could be a powerful tool to unlock scientific advances across many other applications.
“We were quite pleasantly surprised by the range of fields where we see FRO-shaped problems,” says Adam Marblestone, a former MIT research scientist who co-founded the nonprofit Convergent Research to help launch FROs in 2021. “Convergent has FRO proposals from climate, materials science, chemistry, biology — we even have launched a FRO on software for math. You wouldn’t expect math to be something with a large-scale technological research bottleneck, but it turns out even there, we found a software engineering bottleneck that needed to be solved.”
Marblestone helped formulate the idea for focused research organizations at MIT with a group including Andrew Payne SM ’17, PhD ’21 and Sam Rodriques PhD ’19, who were PhD students in Boyden’s lab at the time. Since then, the FRO concept has caught on. Convergent has helped attract philanthropic funding for FROs working to decode the immune system, identify the unintended targets of approved drugs, and understand the impacts of carbon dioxide removal in our oceans.
In total, Convergent has supported the creation of 10 FROs since its founding in 2021. Many of those groups have already released important tools for better understanding our world — and their leaders believe the best is yet to come.
“We’re starting to see these first open-source tools released in important areas,” Marblestone says. “We’re seeing the first concrete evidence that FROs are effective, because no other entity could have released these tools, and I think 2025 is going to be a significant year in terms of our newer FROs putting out new datasets and tools.”
A new model
Marblestone joined Boyden’s lab in 2014 as a research scientist after completing his PhD at Harvard University. He also worked in a new position that Boyden helped create, director of scientific architecting at the MIT Media Lab, in which he tried to organize individual research efforts into larger projects. His own research focused on overcoming the challenges of measuring brain activity across large scales.
Marblestone discussed this and other large-scale neuroscience problems with Payne and Rodriques, and the researchers began thinking about gaps in scientific funding more broadly.
“The combination of myself, Sam, Andrew, Ed, and others’ experiences trying to start various large brain-mapping projects convinced us of the gap in support for medium-sized science and engineering teams with startup-inspired structures, built for the nonprofit purpose of building scientific infrastructure,” Marblestone says.
Through MIT, the researchers also connected with Tom Kalil, who was at the time chief innovation officer at Schmidt Futures, a philanthropic initiative of Eric and Wendy Schmidt. Rodriques wrote about the concept of a focused research organization as the last chapter of his PhD thesis in 2019.
“Ed always encouraged us to dream very, very big,” Rodriques says. “We were always trying to think about the hardest problems in biology and how to tackle them. My thesis basically ended with me explaining why we needed a new structure that is like a company, but nonprofit and dedicated to science.”
As part of a fellowship with the Federation of American Scientists in 2020, and working with Kalil, Marblestone interviewed scientists in dozens of fields outside of neuroscience and learned that the funding gap existed across disciplines.
When Rodriques and Marblestone published an essay about their findings, it helped attract philanthropic funding, which Marblestone, Kalil, and co-founder Anastasia Gamick used to launch Convergent Research, a nonprofit science studio for launching FROs.
“I see Ed’s lab as a melting pot where myself, Ed, Sam, and others worked on articulating a need and identifying specific projects that might make sense as FROs,” Marblestone says. “All those ideas later got crystallized when we created Convergent Research.”
In 2021, Convergent helped launch the first FROs: E11 Bio, which is led by Payne and committed to developing tools to understand how the brain is wired, and Cultivarium, a FRO making microorganisms more accessible for work in synthetic biology.
“From our brain mapping work we started asking the question, ‘Are there other projects that look like this that aren’t getting funded?’” Payne says. “We realized there was a gap in the research ecosystem, where some of these interdisciplinary, team science projects were being systematically overlooked. We knew a lot of amazing things would come out of getting those projects funded.”
Tools to advance science
Early progress from the first focused research organizations has strengthened Marblestone’s conviction that they’re filling a gap.
[C]Worthy is the FRO building tools to ensure safe, ocean-based carbon dioxide removal. It recently released an interactive map of alkaline activity to improve our understanding of one method for sequestering carbon known as ocean alkalinity enhancement. Last year, a math FRO, Lean, released a programming language and proof assistant that was used by Google’s DeepMind AI lab to solve problems in the International Mathematical Olympiad, achieving the same level as a silver medalist in the competition for the first time. The synthetic biology FRO Cultivarium, in turn, has already released software that can predict growth conditions for microbes based on their genome.
Last year, E11 Bio previewed a new method for mapping the brain called PRISM, which it has used to map out a portion of the mouse hippocampus. It will be making the data and mapping tool available to all researchers in coming months.
“A lot of this early work has proven you can put a really talented team together and move fast to go from zero to one,” Payne says. “The next phase is proving FROs can continue to build on that momentum and develop even more datasets and tools, establish even bigger collaborations, and scale their impact.”
Payne credits Boyden for fostering an ecosystem where researchers could think about problems beyond their narrow area of study.
“Ed’s lab was a really intellectually stimulating, collaborative environment,” Payne says. “He trains his students to think about impact first and work backward. It was a bunch of people thinking about how they were going to change the world, and that made it a particularly good place to develop the FRO idea.”
Marblestone says supporting FROs has been the highest-impact thing he’s been able to do in his career. Still, he believes the success of FROs should be judged over a horizon closer to 10 years, and will depend not just on the tools they produce but also on whether they spin out companies, partner with other institutes, and create larger, long-lasting initiatives to deploy what they built.
“We were initially worried people wouldn’t be willing to join these organizations because it doesn’t offer tenure and it doesn’t offer equity in a startup,” Marblestone says. “But we’ve been able to recruit excellent leaders, scientists, engineers, and others to create highly motivated teams. That’s good evidence this is working. As we get strong projects and good results, I hope it will create this flywheel where it becomes easier to fund these ideas, more scientists will come up with them, and I think we’re starting to get there.”
Physicists observe a new form of magnetism for the first time
The magnetic state offers a new route to “spintronic” memory devices that would be faster and more efficient than their electronic counterparts.
MIT physicists have demonstrated a new form of magnetism that could one day be harnessed to build faster, denser, and less power-hungry “spintronic” memory chips.
The new magnetic state is a mash-up of two main forms of magnetism: the ferromagnetism of everyday fridge magnets and compass needles, and antiferromagnetism, in which materials have magnetic properties at the microscale yet are not macroscopically magnetized.
The team has termed the new state “p-wave magnetism.”
Physicists have long observed that electrons of atoms in regular ferromagnets share the same orientation of “spin,” like so many tiny compasses pointing in the same direction. This spin alignment generates a magnetic field, which gives a ferromagnet its inherent magnetism. Electrons belonging to magnetic atoms in an antiferromagnet also have spin, although these spins alternate, with electrons orbiting neighboring atoms aligning their spins antiparallel to each other. Taken together, the equal and opposite spins cancel out, and the antiferromagnet does not exhibit macroscopic magnetization.
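As textbook background (not a formula from the new paper), this distinction is often captured by a Heisenberg exchange energy, in which the sign of the coupling J sets whether neighboring spins prefer to align or anti-align:

```latex
E \;=\; -\,J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j ,
\qquad
\begin{cases}
J > 0, & \text{ferromagnet: neighboring spins align;}\\
J < 0, & \text{antiferromagnet: neighboring spins anti-align.}
\end{cases}
```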
The team discovered the new p-wave magnetism in nickel iodide (NiI2), a two-dimensional crystalline material that they synthesized in the lab. Like a ferromagnet, the electrons exhibit a preferred spin orientation, and, like an antiferromagnet, equal populations of opposite spins result in a net cancellation. However, the spins on the nickel atoms exhibit a unique pattern, forming spiral-like configurations within the material that are mirror images of each other, much like the left hand is the right hand’s mirror image.
What’s more, the researchers found this spiral spin configuration enabled them to carry out “spin switching”: Depending on the direction of spiraling spins in the material, they could apply a small electric field in a related direction to easily flip a left-handed spiral of spins into a right-handed spiral of spins, and vice-versa.
The ability to switch electron spins is at the heart of “spintronics,” which is a proposed alternative to conventional electronics. With this approach, data can be written in the form of an electron’s spin, rather than its electronic charge, potentially allowing orders of magnitude more data to be packed onto a device while using far less power to write and read that data.
“We showed that this new form of magnetism can be manipulated electrically,” says Qian Song, a research scientist in MIT’s Materials Research Laboratory. “This breakthrough paves the way for a new class of ultrafast, compact, energy-efficient, and nonvolatile magnetic memory devices.”
Song and his colleagues published their results May 28 in the journal Nature. MIT co-authors include Connor Occhialini, Batyr Ilyas, Emre Ergeçen, Nuh Gedik, and Riccardo Comin, along with Rafael Fernandes at the University of Illinois Urbana-Champaign, and collaborators from multiple other institutions.
Connecting the dots
The discovery expands on work by Comin’s group in 2022. At that time, the team probed the magnetic properties of the same material, nickel iodide. At the microscopic level, nickel iodide resembles a triangular lattice of nickel and iodine atoms. Nickel is the material’s main magnetic ingredient, as the electrons on the nickel atoms exhibit spin, while those on iodine atoms do not.
In those experiments, the team observed that the spins of those nickel atoms were arranged in a spiral pattern throughout the material’s lattice, and that this pattern could spiral in two different orientations.
At the time, Comin had no idea that this unique pattern of atomic spins could enable precise switching of spins in surrounding electrons. This possibility was later raised by collaborator Rafael Fernandes, who along with other theorists was intrigued by a recently proposed idea for a new, unconventional, “p-wave” magnet, in which electrons moving along opposite directions in the material would have their spins aligned in opposite directions.
Fernandes and his colleagues recognized that if the spins of atoms in a material form the geometric spiral arrangement that Comin observed in nickel iodide, that would be a realization of a “p-wave” magnet. Then, when an electric field is applied to switch the “handedness” of the spiral, it should also switch the spin alignment of the electrons traveling along the same direction.
In other words, such a p-wave magnet could enable simple and controllable switching of electron spins, in a way that could be harnessed for spintronic applications.
“It was a completely new idea at the time, and we decided to test it experimentally because we realized nickel iodide was a good candidate to show this kind of p-wave magnet effect,” Comin says.
Spin current
For their new study, the team synthesized single-crystal flakes of nickel iodide by first depositing powders of the respective elements on a crystalline substrate, which they placed in a high-temperature furnace. The process causes the elements to settle into layers, each arranged microscopically in a triangular lattice of nickel and iodine atoms.
“What comes out of the oven are samples that are several millimeters wide and thin, like cracker bread,” Comin says. “We then exfoliate the material, peeling off even smaller flakes, each several microns wide, and a few tens of nanometers thin.”
The researchers wanted to know whether the spiral geometry of the nickel atoms’ spins would indeed force electrons traveling in opposite directions to have opposite spins, as Fernandes predicted a p-wave magnet should. To observe this, the group applied to each flake a beam of circularly polarized light — light that produces an electric field that rotates in a particular direction, for instance, either clockwise or counterclockwise.
They reasoned that if traveling electrons interacting with the spin spirals have spins aligned in the same direction, then incoming light polarized in that same direction should resonate and produce a characteristic signal. Such a signal would confirm that the traveling electrons’ spins align because of the spiral configuration and, furthermore, that the material does in fact exhibit p-wave magnetism.
And indeed, that’s what the group found. In experiments with multiple nickel iodide flakes, the researchers directly observed that the direction of the electron’s spin was correlated to the handedness of the light used to excite those electrons. Such is a telltale signature of p-wave magnetism, here observed for the first time.
Going a step further, they looked to see whether they could switch the spins of the electrons by applying an electric field, or a small amount of voltage, along different directions through the material. They found that when the direction of the electric field was in line with the direction of the spin spiral, the effect switched electrons along the route to spin in the same direction, producing a current of like-spinning electrons.
“With such a current of spin, you can do interesting things at the device level, for instance, you could flip magnetic domains that can be used for control of a magnetic bit,” Comin explains. “These spintronic effects are more efficient than conventional electronics because you’re just moving spins around, rather than moving charges. That means you’re not subject to any dissipation effects that generate heat, which is essentially the reason computers heat up.”
“We just need a small electric field to control this magnetic switching,” Song adds. “P-wave magnets could save five orders of magnitude of energy. Which is huge.”
“We are excited to see these cutting-edge experiments confirm our prediction of p-wave spin polarized states,” says Libor Šmejkal, head of the Max Planck Research Group in Dresden, Germany, who is one of the authors of the theoretical work that proposed the concept of p-wave magnetism but was not involved in the new paper. “The demonstration of electrically switchable p-wave spin polarization also highlights the promising applications of unconventional magnetic states.”
The team observed p-wave magnetism in nickel iodide flakes only at ultracold temperatures of about 60 kelvins.
“That’s below liquid nitrogen, which is not necessarily practical for applications,” Comin says. “But now that we’ve realized this new state of magnetism, the next frontier is finding a material with these properties, at room temperature. Then we can apply this to a spintronic device.”
This research was supported, in part, by the National Science Foundation, the Department of Energy, and the Air Force Office of Scientific Research.
MIT students and postdoc explore the inner workings of Capitol Hill
In an annual tradition, MIT affiliates embarked on a trip to Washington to explore federal lawmaking and advocate for science policy.
This spring, 25 MIT students and a postdoc traveled to Washington, where they met with congressional offices to advocate for federal science funding and for specific, science-based policies informed by their research on pressing issues — including artificial intelligence, health, climate and ocean science, energy, and industrial decarbonization. Organized annually by the Science Policy Initiative (SPI), this year’s trip came at a particularly critical moment, as science agencies face unprecedented funding cuts.
Over the course of two days, the group met with 66 congressional offices across 35 states and select committees, advocating for stable funding for science agencies such as the Department of Energy, the National Oceanic and Atmospheric Administration, the National Science Foundation, NASA, and the Department of Defense.
Congressional Visit Days (CVD), organized by SPI, offer students and researchers a hands-on introduction to federal policymaking. In addition to meetings on Capitol Hill, participants connected with MIT alumni in government and explored potential career paths in science policy.
This year’s trip was co-organized by Mallory Kastner, a PhD student in biological oceanography at MIT and Woods Hole Oceanographic Institution (WHOI), and Julian Ufert, a PhD student in chemical engineering at MIT. Ahead of the trip, participants attended training sessions hosted by SPI, the MIT Washington Office, and the MIT Policy Lab. These sessions covered effective ways to translate scientific findings into policy, strategies for a successful advocacy meeting, and hands-on demos of a congressional meeting.
Participants then contacted their representatives’ offices in advance and tailored their talking points to each office’s committees and priorities. This structure gave participants direct experience initiating policy conversations with those actively working on issues they cared about.
Audrey Parker, a PhD student in civil and environmental engineering studying methane abatement, emphasizes the value of connecting scientific research with priorities in Congress: “Through CVD, I had the opportunity to contribute to conversations on science-backed solutions and advocate for the role of research in shaping policies that address national priorities — including energy, sustainability, and climate change.”
To many of the participants, stepping into the shoes of a policy advisor was a welcome diversion from their academic duties and scientific routine. For Alex Fan, an undergraduate majoring in electrical engineering and computer science, the trip was enlightening: “It showed me that student voices really do matter in shaping science policy. Meeting with lawmakers, especially my own representative, Congresswoman Bonamici, made the experience personal and inspiring. It has made me seriously consider a future at the intersection of research and policy.”
“I was truly impressed by the curiosity and dedication of our participants, as well as the preparation they brought to each meeting,” says Ufert. “It was inspiring to watch them grow into confident advocates, leveraging their experience as students and their expertise as researchers to advise on policy needs.”
Kastner adds: “It was eye-opening to see the disconnect between scientists and policymakers. A lot of knowledge we generate as scientists rarely makes it onto the desk of congressional staff, and even more rarely onto the congressperson’s. CVD was an incredibly empowering experience for me as a scientist — not only am I more motivated to broaden my scientific outreach to legislators, but I now also have the skills to do so.”
Funding is the bedrock that allows scientists to carry out research and make discoveries. In the United States, federal funding for science has enabled major technological breakthroughs, driven advancements in manufacturing and other industrial sectors, and led to important environmental protection standards. While support for science funding varied among offices from across the political spectrum, participants were reassured that many offices on both sides of the aisle still recognized the significance of science.
Eight with MIT ties win 2025 Hertz Foundation Fellowships
The fellowships recognize doctoral students who have “the extraordinary creativity and principled leadership necessary to tackle problems others can’t solve.”
The Hertz Foundation announced that it has awarded fellowships to eight MIT affiliates. The prestigious award provides each recipient with five years of doctoral-level research funding (up to a total of $250,000), which gives them an unusual measure of independence in their graduate work to pursue groundbreaking research.
The MIT-affiliated awardees are Matthew Caren ’25; April Qiu Cheng ’24; Arav Karighattam, who begins his PhD at the Institute this fall; Benjamin Lou ’25; Isabelle A. Quaye ’22, MNG ’24; Albert Qin ’24; Ananthan Sadagopan ’24; and Gianfranco (Franco) Yee ’24.
“Hertz Fellows embody the promise of future scientific breakthroughs, major engineering achievements and thought leadership that is vital to our future,” said Stephen Fantone, chair of the Hertz Foundation board of directors and president and CEO of Optikos Corp., in the announcement. “The newest recipients will direct research teams, serve in leadership positions in our government and take the helm of major corporations and startups that impact our communities and the world.”
In addition to funding, fellows receive access to Hertz Foundation programs throughout their lives, including events, mentoring, and networking. They join the ranks of more than 1,300 Hertz Fellows named since the fellowship was established in 1963, who are leaders and scholars in a range of technology, science, and engineering fields. Former fellows have contributed to breakthroughs in such areas as advanced medical therapies, computational systems used by billions of people daily, global defense networks, and the recent launch of the James Webb Space Telescope.
This year’s MIT recipients are among a total of 19 Hertz Foundation Fellows selected from across the United States.
Matthew Caren ’25 studied electrical engineering and computer science, mathematics, and music at MIT. His research focuses on computational models of how people use their voices to communicate sound at the Computer Science and Artificial Intelligence Lab (CSAIL) and interpretable real-time machine listening systems at the MIT Music Technology Lab. He spent several summers developing large language model systems and bioinformatics algorithms at Apple and a year researching expressive digital instruments at Stanford University’s Center for Computer Research in Music and Acoustics. He chaired the MIT Schwarzman College of Computing Undergraduate Advisory Group, where he led undergraduate committees on interdisciplinary computing and AI, and was a founding member of the MIT Voxel Lab for music and arts technology. In addition, Caren has invented novel instruments used by Grammy-winning musicians on international stages. He plans to pursue a doctorate at Stanford.
April Qiu Cheng ’24 majored in physics at MIT, graduating in just three years. Their research focused on black hole phenomenology, gravitational-wave inference, and the use of fast radio bursts as a statistical probe of large-scale structure. They received numerous awards, including an MIT Outstanding Undergraduate Research Award, the MIT Barrett Prize, the Astronaut Scholarship, and the Princeton President’s Fellowship. Cheng contributed to the physics department community by serving as vice president of advocacy for Undergraduate Women in Physics and as the undergraduate representative on the Physics Values Committee. In addition, they have participated in various science outreach programs for middle and high school students. Since graduating, they have been a Fulbright Fellow at the Max Planck Institute for Gravitational Physics, where they have been studying gravitational-wave cosmology. Cheng will begin a doctorate in astrophysics at Princeton in the fall.
Arav Karighattam was home schooled, and by age 14 had completed most of the undergraduate and graduate courses in physics and mathematics at the University of California at Davis. He graduated from Harvard University in 2024 with a bachelor’s degree in mathematics and will attend MIT to pursue a PhD, also in mathematics. Karighattam is fascinated by algebraic number theory and arithmetic geometry and seeks to understand the mysteries underlying the structure of solutions to Diophantine equations. He also wants to apply his mathematical skills to mitigating climate change and biodiversity loss. At a recent conference at MIT titled “Mordell’s Conjecture 100 Years Later,” Karighattam distinguished himself as the youngest speaker to present a paper among graduate students, postdocs, and faculty members.
Benjamin Lou ’25 graduated from MIT in May with a BS in physics and is interested in finding connections between fundamental truths of the universe. One of his research projects applies symplectic techniques to understand the nature of precision measurements using quantum states of light. Another is about geometrically unifying several theorems in quantum mechanics using the Prüfer transformation. For his work, Lou was honored with the Barry Goldwater Scholarship. Lou will pursue his doctorate at MIT, where he plans to work on unifying quantum mechanics and gravity, with an eye toward uncovering experimentally testable predictions. Living with the debilitating disease spinal muscular atrophy, which causes severe, full-body weakness and makes scratchwork unfeasible, Lou has developed a unique learning style emphasizing mental visualization. He also co-founded and helped lead the MIT Assistive Technology Club, dedicated to empowering those with disabilities using creative technologies. He is working on a robotic self-feeding device for those who cannot eat independently.
Isabelle A. Quaye ’22, MNG ’24 studied electrical engineering and computer science as an undergraduate at MIT, with a minor in economics. She was awarded competitive fellowships and scholarships from Hyundai, Intel, D. E. Shaw, and Palantir, and received the Albert G. Hill Prize, given to juniors and seniors who have maintained high academic standards and have made continued contributions to improving the quality of life for underrepresented students at MIT. While obtaining her master’s degree at MIT, she focused on theoretical computer science and systems. She is currently a software engineer at Apple, where she continues to develop frameworks that harness intelligence from data to improve systems and processes. Quaye also believes in contributing to the advancement of science and technology through teaching and has volunteered in summer programs to teach programming and informatics to high school students in the United States and Ghana.
Albert Qin ’24 majored in physics and mathematics at MIT. He also pursued an interest in biology, researching single-molecule approaches to study transcription factor diffusion in living cells and studying the cell circuits that control animal development. His dual interests have motivated him to find common ground between physics and biological fields. Inspired by his MIT undergraduate advisors, he hopes to become a teacher and mentor for aspiring young scientists. Qin is currently pursuing a PhD at Princeton University, addressing questions about the behavior of neural networks — both artificial and biological — using a variety of approaches and ideas from physics and neuroscience.
Ananthan Sadagopan ’24 is currently pursuing a doctorate in biological and biomedical science at Harvard University, focusing on chemical biology and the development of new therapeutic strategies for intractable diseases. He earned his BS at MIT in chemistry and biology in three years, and led projects characterizing somatic perturbations of X chromosome inactivation in cancer, developing machine learning tools for cancer dependency prediction, using small molecules for targeted protein relocalization, and creating a generalizable strategy to drug the most mutated gene in cancer (TP53). He published as first author in top journals, such as Cell, during his undergraduate career. He also holds patents related to his work on cancer dependency prediction and drugging TP53. While at the Institute, he served as president of the Chemistry Undergraduate Association, winning both the First-Year and Senior Chemistry Achievement Awards, and was head of the events committee for the MIT Science Olympiad.
Gianfranco (Franco) Yee ’24 majored in biological engineering at MIT, conducting research in the Manalis Lab on chemical gradients in the gut microenvironment and helping to develop a novel gut-on-a-chip platform for culturing organoids under these gradients. His senior thesis extended this work to the microbiome, investigating host-microbe interactions linked to intestinal inflammation and metabolic disorders. Yee also earned a concentration in education at MIT, and is committed to increasing access to STEM resources in underserved communities. He co-founded Momentum AI, an educational outreach program that teaches computer science to high school students across Greater Boston. The inaugural program served nearly 100 students and included remote outreach efforts in Ukraine and China. Yee has also worked with MIT Amphibious Achievement and the MIT Office of Engineering Outreach Programs. He currently attends Gerstner Sloan Kettering Graduate School, where he plans to leverage the gut microbiome and immune system to develop innovative therapeutic treatments.
Former Hertz Fellows include two Nobel laureates; recipients of 11 Breakthrough Prizes and three MacArthur Foundation “genius awards;” and winners of the Turing Award, the Fields Medal, the National Medal of Technology, the National Medal of Science, and the Wall Street Journal Technology Innovation Award. In addition, 54 are members of the National Academies of Sciences, Engineering and Medicine, and 40 are fellows of the American Association for the Advancement of Science. Hertz Fellows hold over 3,000 patents, have founded more than 375 companies, and have created hundreds of thousands of science and technology jobs.
$20 million gift supports theoretical physics research and education at MIT
Gift from the Leinweber Foundation, in addition to a $5 million commitment from the School of Science, will drive discovery, collaboration, and the next generation of physics leaders.
A $20 million gift from the Leinweber Foundation, in addition to a $5 million commitment from the MIT School of Science, will support theoretical physics research and education at MIT.
Leinweber Foundation gifts to five institutions, totaling $90 million, will establish the newly renamed MIT Center for Theoretical Physics – A Leinweber Institute within the Department of Physics, affiliated with the Laboratory for Nuclear Science in the School of Science; Leinweber Institutes for Theoretical Physics at three other top research universities (the University of Michigan, the University of California at Berkeley, and the University of Chicago); and a Leinweber Forum for Theoretical and Quantum Physics at the Institute for Advanced Study.
“MIT has one of the strongest and broadest theory groups in the world,” says Professor Washington Taylor, the director of the newly funded center and a leading researcher in string theory and its connection to observable particle physics and cosmology.
“This landmark endowment from the Leinweber Foundation will enable us to support the best graduate students and postdoctoral researchers to develop their own independent research programs and to connect with other researchers in the Leinweber Institute network. By pledging to support this network and fundamental curiosity-driven science, Larry Leinweber and his family foundation have made a huge contribution to maintaining a thriving scientific enterprise in the United States in perpetuity.”
The Leinweber Foundation’s investment across five institutions — constituting the largest philanthropic commitment ever for theoretical physics research, according to the Science Philanthropy Alliance, a nonprofit organization that promotes philanthropic support for science — will strengthen existing programs at each institution and foster collaboration across the universities. Recipient institutions will work both independently and collaboratively to explore foundational questions in theoretical physics. Each institute will continue to shape its own research focus and programs, while also committing to big-picture, cross-institutional convenings around topics of shared interest. Moreover, each institute will have significantly more funding for graduate students and postdocs, including fellowship support for three to eight fully endowed Leinweber Physics Fellows at each institute.
“This gift is a commitment to America’s scientific future,” says Larry Leinweber, founder and president of the Leinweber Foundation. “Theoretical physics may seem abstract to many, but it is the tip of the spear for innovation. It fuels our understanding of how the world works and opens the door to new technologies that can shape society for generations. As someone who has had a lifelong fascination with theoretical physics, I hope this investment not only strengthens U.S. leadership in basic science, but also inspires curiosity, creativity, and groundbreaking discoveries for generations to come.”
The gift to MIT will create a postdoc program that, once fully funded, will initially provide support for up to six postdocs, with two selected per year for a three-year program. In addition, the gift will provide student financial support, including fellowship support, for up to six graduate students per year studying theoretical physics. The goal is to attract the top talent to the MIT Center for Theoretical Physics – A Leinweber Institute and support the ongoing research programs in a more robust way.
A portion of the funding will also provide support for visitors, seminars, and other scholarly activities of current postdocs, faculty, and students in theoretical physics, as well as help with administrative support.
“Graduate students are the heart of our country’s scientific research programs. Support for their education to become the future leaders of the field is essential for the advancement of the discipline,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics.
The Leinweber Foundation gift is the second significant gift for the center. “We are always grateful to Virgil Elings, whose generous gift helped make possible the space that houses the center,” says Deepto Chakrabarty, head of the Department of Physics. Elings PhD ’66, co-founder of Digital Instruments, which designed and sold scanning probe microscopes, made his gift more than 20 years ago to support a space for theoretical physicists to collaborate.
“Gifts like those from Larry Leinweber and Virgil Elings are critical, especially now in this time of uncertain funding from the federal government for support of fundamental scientific research carried out by our nation’s leading postdocs, research scientists, faculty and students,” adds Mavalvala.
Professor Tracy Slatyer, whose work is motivated by questions of fundamental particle physics — particularly the nature and interactions of dark matter — will become the next director of the MIT Center for Theoretical Physics – A Leinweber Institute this fall. Slatyer will join Mavalvala, Taylor, Chakrabarty, and the rest of the theoretical physics community at a dedication ceremony planned for the near future.
The Leinweber Foundation was founded in 2015 by software entrepreneur Larry Leinweber, and has worked with the Science Philanthropy Alliance since 2021 to shape its philanthropic strategy. “It’s been a true pleasure to work with Larry and the Leinweber family over the past four years and to see their vision take shape,” says France Córdova, president of the Science Philanthropy Alliance. “Throughout his life, Larry has exemplified curiosity, intellectual openness, and a deep commitment to learning. This gift reflects those values, ensuring that generations of scientists will have the freedom to explore, to question, and to pursue ideas that could change how we understand the universe.”
Overlooked cells might explain the human brain’s huge storage capacity
MIT researchers developed a new model of memory that includes critical contributions from astrocytes, a class of brain cells.
The human brain contains about 86 billion neurons. These cells fire electrical signals that help the brain store memories and send information and commands throughout the brain and the nervous system.
The brain also contains billions of astrocytes — star-shaped cells with many long extensions that allow them to interact with millions of neurons. Although they have long been thought to be mainly supportive cells, recent studies have suggested that astrocytes may play a role in memory storage and other cognitive functions.
MIT researchers have now put forth a new hypothesis for how astrocytes might contribute to memory storage. The architecture suggested by their model would help to explain the brain’s massive storage capacity, which is much greater than would be expected using neurons alone.
“Originally, astrocytes were believed to just clean up around neurons, but there’s no particular reason that evolution did not realize that, because each astrocyte can contact hundreds of thousands of synapses, they could also be used for computation,” says Jean-Jacques Slotine, an MIT professor of mechanical engineering and of brain and cognitive sciences, and an author of the new study.
Dmitry Krotov, a research staff member at the MIT-IBM Watson AI Lab and IBM Research, is the senior author of the open-access paper, which appeared May 23 in the Proceedings of the National Academy of Sciences. Leo Kozachkov PhD ’22 is the paper’s lead author.
Memory capacity
Astrocytes have a variety of support functions in the brain: They clean up debris, provide nutrients to neurons, and help to ensure an adequate blood supply.
Astrocytes also send out many thin tentacles, known as processes, which can each wrap around a single synapse — the junction where two neurons interact — to create a tripartite (three-part) synapse.
Within the past couple of years, neuroscientists have shown that if the connections between astrocytes and neurons in the hippocampus are disrupted, memory storage and retrieval are impaired.
Unlike neurons, astrocytes can’t fire action potentials, the electrical impulses that carry information throughout the brain. However, they can use calcium signaling to communicate with other astrocytes. Over the past few decades, as the resolution of calcium imaging has improved, researchers have found that calcium signaling also allows astrocytes to coordinate their activity with neurons in the synapses that they associate with.
These studies suggest that astrocytes can detect neural activity, which leads them to alter their own calcium levels. Those changes may trigger astrocytes to release gliotransmitters — signaling molecules similar to neurotransmitters — into the synapse.
“There’s a closed circle between neuron signaling and astrocyte-to-neuron signaling,” Kozachkov says. “The thing that is unknown is precisely what kind of computations the astrocytes can do with the information that they’re sensing from neurons.”
The MIT team set out to model what those connections might be doing and how they might contribute to memory storage. Their model is based on Hopfield networks — a type of neural network that can store and recall patterns.
Hopfield networks, originally developed by John Hopfield and Shun-Ichi Amari in the 1970s and 1980s, are often used to model the brain, but it has been shown that these networks can’t store enough information to account for the vast memory capacity of the human brain. A newer, modified version of a Hopfield network, known as dense associative memory, can store much more information through higher-order couplings among more than two neurons.
However, it is unclear how the brain could implement these many-neuron couplings at a hypothetical synapse, since a conventional synapse connects only two neurons: a presynaptic cell and a postsynaptic cell. This is where astrocytes come into play.
“If you have a network of neurons, which couple in pairs, there’s only a very small amount of information that you can encode in those networks,” Krotov says. “In order to build dense associative memories, you need to couple more than two neurons. Because a single astrocyte can connect to many neurons, and many synapses, it is tempting to hypothesize that there might exist an information transfer between synapses mediated by this biological cell. That was the biggest inspiration for us to look into astrocytes and led us to start thinking about how to build dense associative memories in biology.”
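To make that contrast concrete, here is a minimal NumPy sketch of the two ideas: a classical Hopfield network with pairwise couplings, and a dense associative memory whose higher-order energy function plays the role of the many-neuron couplings described above. It illustrates the general technique only, not the authors’ neuron-astrocyte model; the network size, patterns, and update rule are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 30                      # N binary neurons, K stored patterns
patterns = rng.choice([-1, 1], size=(K, N))

# Classical Hopfield network: pairwise (two-neuron) couplings built with
# the Hebbian rule W[i, j] = sum_k patterns[k, i] * patterns[k, j].
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

def hopfield_energy(state):
    return -0.5 * state @ W @ state

def dense_energy(state, n=4):
    # Dense associative memory: E = -sum_k F(<pattern_k, state>) with
    # F(x) = x^n. The degree-n nonlinearity implicitly couples n neurons
    # at a time, which is what lifts the storage capacity.
    return -np.sum((patterns @ state).astype(float) ** n)

def recall(state, energy_fn, sweeps=5):
    # Asynchronous dynamics: flip any unit whose flip lowers the energy.
    state = state.copy()
    for _ in range(sweeps):
        for i in range(N):
            trial = state.copy()
            trial[i] *= -1
            if energy_fn(trial) < energy_fn(state):
                state = trial
    return state

# Corrupt a stored pattern, then let each network try to repair it.
# The load K/N = 0.3 is far above the classical capacity (~0.14 * N),
# so the pairwise network typically fails while the dense memory succeeds.
noisy = patterns[0].copy()
noisy[:10] *= -1
print("classical recall ok:", np.array_equal(recall(noisy, hopfield_energy), patterns[0]))
print("dense recall ok:   ", np.array_equal(recall(noisy, dense_energy), patterns[0]))
```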
The neuron-astrocyte associative memory model that the researchers developed in their new paper can store significantly more information than a traditional Hopfield network — more than enough to account for the brain’s memory capacity.
Intricate connections
The extensive biological connections between neurons and astrocytes offer support for the idea that this type of model might explain how the brain’s memory storage systems work, the researchers say. They hypothesize that within astrocytes, memories are encoded by gradual changes in the patterns of calcium flow. This information is conveyed to neurons by gliotransmitters released at synapses that astrocyte processes connect to.
“By careful coordination of these two things — the spatiotemporal pattern of calcium in the cell and then the signaling back to the neurons — you can get exactly the dynamics you need for this massively increased memory capacity,” Kozachkov says.
One of the key features of the new model is that it treats astrocytes as collections of processes, rather than a single entity. Each of those processes can be considered one computational unit. Because of the high information storage capabilities of dense associative memories, the ratio of the amount of information stored to the number of computational units is very high and grows with the size of the network. This makes the system not only high capacity, but also energy efficient.
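For a sense of the scaling involved, the standard capacity estimates from the associative-memory literature (background results, not numbers reported in this paper) can be written as follows:

```latex
% Maximum number of patterns K storable by N binary units:
%   classical Hopfield network (pairwise couplings):  K_max ~ 0.14 N
%   dense associative memory (interaction order n):   K_max ~ N^(n-1)
K_{\max}^{\text{classical}} \approx 0.14\,N,
\qquad
K_{\max}^{\text{dense}} \propto N^{\,n-1}
```

Raising the interaction order n thus makes the ratio of stored information to computational units grow with network size, which is the property the astrocyte processes are proposed to supply.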
“By conceptualizing tripartite synaptic domains — where astrocytes interact dynamically with pre- and postsynaptic neurons — as the brain’s fundamental computational units, the authors argue that each unit can store as many memory patterns as there are neurons in the network. This leads to the striking implication that, in principle, a neuron-astrocyte network could store an arbitrarily large number of patterns, limited only by its size,” says Maurizio De Pitta, an assistant professor of physiology at the Krembil Research Institute at the University of Toronto, who was not involved in the study.
To test whether this model might accurately represent how the brain stores memory, researchers could try to develop ways to precisely manipulate the connections between astrocytes’ processes, then observe how those manipulations affect memory function.
“We hope that one of the consequences of this work could be that experimentalists would consider this idea seriously and perform some experiments testing this hypothesis,” Krotov says.
In addition to offering insight into how the brain may store memory, this model could also provide guidance for researchers working on artificial intelligence. By varying the connectivity of the process-to-process network, researchers could generate a huge range of models that could be explored for different purposes, for instance, creating a continuum between dense associative memories and attention mechanisms in large language models.
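That continuum can be made concrete: the retrieval step of a modern continuous Hopfield network has the same mathematical form as the softmax attention used in transformers, a correspondence established by Ramsauer et al. (2020). A minimal sketch, with the inverse-temperature parameter beta and all sizes chosen arbitrarily for illustration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def modern_hopfield_update(query, memories, beta=8.0):
    # One retrieval step of a modern continuous Hopfield network:
    #     new_state = memories.T @ softmax(beta * memories @ query)
    # Read "memories" as both keys and values and "query" as the query,
    # and this is exactly the softmax attention used in transformer LLMs.
    return memories.T @ softmax(beta * (memories @ query))

rng = np.random.default_rng(1)
memories = rng.normal(size=(16, 64))             # 16 stored patterns
probe = memories[3] + 0.3 * rng.normal(size=64)  # a noisy retrieval cue
retrieved = modern_hopfield_update(probe, memories)
print(np.argmax(memories @ retrieved))           # expect 3: cue recovered
```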
“While neuroscience initially inspired key ideas in AI, the last 50 years of neuroscience research have had little influence on the field, and many modern AI algorithms have drifted away from neural analogies,” Slotine says. “In this sense, this work may be one of the first contributions to AI informed by recent neuroscience research.”
Why are some rocks on the moon highly magnetic? MIT scientists may have an answer
A large impact could have briefly amplified the moon’s weak magnetic field, creating a momentary spike that was recorded in some lunar rocks.
Where did the moon’s magnetism go? Scientists have puzzled over this question for decades, ever since orbiting spacecraft picked up signs of a high magnetic field in lunar surface rocks. The moon itself has no inherent magnetism today.
Now, MIT scientists may have solved the mystery. They propose that a combination of an ancient, weak magnetic field and a large, plasma-generating impact may have temporarily created a strong magnetic field, concentrated on the far side of the moon.
In a study appearing today in the journal Science Advances, the researchers show through detailed simulations that an impact, such as from a large asteroid, could have generated a cloud of ionized particles that briefly enveloped the moon. This plasma would have streamed around the moon and concentrated at the opposite location from the initial impact. There, the plasma would have interacted with and momentarily amplified the moon’s weak magnetic field. Any rocks in the region could have recorded signs of the heightened magnetism before the field quickly died away.
This combination of events could explain the presence of highly magnetic rocks detected in a region near the south pole, on the moon’s far side. As it happens, one of the largest impact basins — the Imbrium basin — is located in the exact opposite spot on the near side of the moon. The researchers suspect that whatever made that impact likely released the cloud of plasma that kicked off the scenario in their simulations.
“There are large parts of lunar magnetism that are still unexplained,” says lead author Isaac Narrett, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But the majority of the strong magnetic fields that are measured by orbiting spacecraft can be explained by this process — especially on the far side of the moon.”
Narrett’s co-authors include Rona Oran and Benjamin Weiss at MIT, along with Katarina Miljkovic at Curtin University, Yuxi Chen and Gábor Tóth at the University of Michigan at Ann Arbor, and Elias Mansbach PhD ’24 at Cambridge University. Nuno Loureiro, professor of nuclear science and engineering at MIT, also contributed insights and advice.
Beyond the sun
Scientists have known for decades that the moon holds remnants of a strong magnetic field. Samples from the surface of the moon, returned by astronauts on NASA’s Apollo missions of the 1960s and 70s, as well as global measurements of the moon taken remotely by orbiting spacecraft, show signs of remnant magnetism in surface rocks, especially on the far side of the moon.
The typical explanation for surface magnetism is a global magnetic field, generated by an internal “dynamo,” or a core of molten, churning material. The Earth today generates a magnetic field through such a dynamo process, and it’s thought that the moon may once have done the same. Its much smaller core, however, would have produced a much weaker field, perhaps too weak to explain the highly magnetized rocks observed, particularly on the moon’s far side.
An alternative hypothesis that scientists have tested from time to time involves a giant impact that generated plasma, which in turn amplified any weak magnetic field. In 2020, Oran and Weiss tested this hypothesis with simulations of a giant impact on the moon, in combination with the solar-generated magnetic field, which is weak as it stretches out to the Earth and moon.
In simulations, they tested whether an impact to the moon could amplify such a solar field enough to explain the highly magnetic measurements of surface rocks. It turned out that it could not, and their results seemed to rule out impact-generated plasmas as playing a role in the moon’s missing magnetism.
A spike and a jitter
But in their new study, the researchers took a different tack. Instead of accounting for the sun’s magnetic field, they assumed that the moon once hosted a dynamo that produced a magnetic field of its own, albeit a weak one. Given the size of its core, they estimated that such a field would have been about 1 microtesla, or 50 times weaker than the Earth’s field today.
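As a quick check on those numbers (a sketch only; Earth’s mean surface field of roughly 50 microtesla is the assumed reference point):

```python
earth_surface_field_uT = 50.0                        # approximate present-day value
lunar_dynamo_field_uT = earth_surface_field_uT / 50  # "50 times weaker"
print(lunar_dynamo_field_uT)                         # 1.0, i.e., about 1 microtesla
```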
From this starting point, the researchers simulated a large impact to the moon’s surface, similar to what would have created the Imbrium basin, on the moon’s near side. Using impact simulations from Katarina Miljkovic, the team then simulated the cloud of plasma that such an impact would have generated as the force of the impact vaporized the surface material. They adapted a second code, developed by collaborators at the University of Michigan, to simulate how the resulting plasma would flow and interact with the moon’s weak magnetic field.
These simulations showed that as a plasma cloud arose from the impact, some of it would have expanded into space, while the rest would have streamed around the moon and concentrated on the opposite side. There, the plasma would have compressed and briefly amplified the moon’s weak magnetic field. This entire process, from the moment the magnetic field was amplified to the time it decayed back to baseline, would have been incredibly fast — somewhere around 40 minutes, Narrett says.
Would this brief window have been enough for surrounding rocks to record the momentary magnetic spike? The researchers say, yes, with some help from another, impact-related effect.
They found that an Imbrium-scale impact would have sent a pressure wave through the moon, similar to a seismic shock. These waves would have converged on the other side, where the shock would have “jittered” the surrounding rocks, briefly unsettling the rocks’ electrons — the subatomic particles that naturally orient their spins to any external magnetic field. The researchers suspect the rocks were shocked just as the impact’s plasma amplified the moon’s magnetic field. As the rocks’ electrons settled back, they assumed a new orientation, in line with the momentary high magnetic field.
“It’s as if you throw a 52-card deck in the air, in a magnetic field, and each card has a compass needle,” Weiss says. “When the cards settle back to the ground, they do so in a new orientation. That’s essentially the magnetization process.”
The researchers say this combination of a dynamo plus a large impact, coupled with the impact’s shockwave, is enough to explain the moon’s highly magnetized surface rocks — particularly on the far side. One way to know for sure is to directly sample the rocks for signs of shock and high magnetism. This could be a possibility, as the rocks lie on the far side, near the lunar south pole, where missions such as NASA’s Artemis program plan to explore.
“For several decades, there’s been sort of a conundrum over the moon’s magnetism — is it from impacts or is it from a dynamo?” Oran says. “And here we’re saying, it’s a little bit of both. And it’s a testable hypothesis, which is nice.”
The team’s simulations were carried out using the MIT SuperCloud. This research was supported, in part, by NASA.
MIT physicists discover a new type of superconductor that’s also a magnet
The “one-of-a-kind” phenomenon was observed in ordinary graphite.
Magnets and superconductors go together like oil and water — or so scientists have thought. But a new finding by MIT physicists is challenging this century-old assumption.
In a paper appearing today in the journal Nature, the physicists report that they have discovered a “chiral superconductor” — a material that conducts electricity without resistance, and also, paradoxically, is intrinsically magnetic. What’s more, they observed this exotic superconductivity in a surprisingly ordinary material: graphite, the primary material in pencil lead.
Graphite is made from many layers of graphene — atomically thin, lattice-like sheets of carbon atoms — that are stacked together and can easily flake off when pressure is applied, as when pressing down to write on a piece of paper. A single flake of graphite can contain several million sheets of graphene, which are normally stacked such that every other layer aligns. But every so often, graphite contains tiny pockets where graphene is stacked in a different pattern, resembling a staircase of offset layers.
The MIT team has found that when four or five sheets of graphene are stacked in this “rhombohedral” configuration, the resulting structure can exhibit exceptional electronic properties that are not seen in graphite as a whole.
In their new study, the physicists isolated microscopic flakes of rhombohedral graphene from graphite and subjected the flakes to a battery of electrical tests. They found that when the flakes are cooled to 300 millikelvins (about -273 degrees Celsius), the material becomes a superconductor, meaning that electrical current can flow through it without resistance.
They also found that when they swept an external magnetic field up and down, the flakes could be switched between two different superconducting states, just like a magnet. This suggests that the superconductor has some internal, intrinsic magnetism. Such switching behavior is absent in other superconductors.
“The general lore is that superconductors do not like magnetic fields,” says Long Ju, assistant professor of physics at MIT. “But we believe this is the first observation of a superconductor that behaves as a magnet with such direct and simple evidence. And that’s quite a bizarre thing because it is against people’s general impression on superconductivity and magnetism.”
Ju is senior author of the study, which includes MIT co-authors Tonghang Han, Zhengguang Lu, Zach Hadjri, Lihan Shi, Zhenghan Wu, Wei Xu, Yuxuan Yao, Jixiang Yang, Junseok Seo, Shenyong Ye, Muyang Zhou, and Liang Fu, along with collaborators from Florida State University, the University of Basel in Switzerland, and the National Institute for Materials Science in Japan.
Graphene twist
In everyday conductive materials, electrons flow through in a chaotic scramble, whizzing by each other and pinging off the material’s atomic latticework. Each time an electron scatters off an atom, it has, in essence, met some resistance, and loses some energy as a result, normally in the form of heat. In contrast, when certain materials are cooled to ultracold temperatures, they can become superconducting, meaning that the material can allow electrons to pair up, in what physicists term “Cooper pairs.” Rather than scattering away, these electron pairs glide through a material without resistance. With a superconductor, then, no energy is lost in translation.
Since superconductivity was first observed in 1911, physicists have shown many times over that zero electrical resistance is a hallmark of a superconductor. Another defining property was first observed in 1933, when the physicist Walther Meissner discovered that a superconductor will expel an external magnetic field. This “Meissner effect” is due in part to a superconductor’s electron pairs, which collectively act to push away any magnetic field.
Physicists have assumed that all superconducting materials should exhibit both zero electrical resistance, and a natural magnetic repulsion. Indeed, these two properties are what could enable Maglev, or “magnetic levitation” trains, whereby a superconducting rail repels and therefore levitates a magnetized car.
Ju and his colleagues had no reason to question this assumption as they carried out their experiments at MIT. In the last few years, the team has been exploring the electrical properties of pentalayer rhombohedral graphene. The researchers have observed surprising properties in the five-layer, staircase-like graphene structure, most recently that it enables electrons to split into fractions of themselves. This phenomenon occurs when the pentalayer structure is placed atop a sheet of hexagonal boron nitride (a material similar to graphene), and slightly offset by a specific angle, or twist.
Curious as to how electron fractions might change with changing conditions, the researchers followed up their initial discovery with similar tests, this time by misaligning the graphene and hexagonal boron nitride structures. To their surprise, they found that when they misaligned the two materials and sent an electrical current through, at temperatures less than 300 millikelvins, they measured zero resistance. It seemed that the phenomenon of electron fractions disappeared, and what emerged instead was superconductivity.
The researchers went a step further to see how this new superconducting state would respond to an external magnetic field. They applied a magnet to the material, along with a voltage, and measured the electrical current coming out of the material. As they dialed the magnetic field from negative to positive (similar to a north and south polarity) and back again, they observed that the material maintained its superconducting, zero-resistance state, except in two instances, once at each magnetic polarity. In these instances, the resistance briefly spiked before switching back to zero, returning to a superconducting state.
“If this were a conventional superconductor, it would just remain at zero resistance, until the magnetic field reaches a critical point, where superconductivity would be killed,” Zach Hadjri, a first-year student in the group, says. “Instead, this material seems to switch between two superconducting states, like a magnet that starts out pointing upward, and can flip downwards when you apply a magnetic field. So it looks like this is a superconductor that also acts like a magnet. Which doesn’t make any sense!”
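That switching behavior is easy to caricature in code. The following toy model is purely illustrative: the threshold field, sweep range, and spike height are invented, and real data would show finite-width resistance peaks rather than single points.

```python
import numpy as np

def sweep_resistance(fields, B_switch=0.05):
    # Toy model of the reported measurement: resistance stays at zero
    # except at the two field values where the internal magnet-like
    # state flips, once per sweep direction (all numbers invented).
    state = -1                          # internal "magnet" orientation
    resistance = []
    for B in fields:
        flipped = False
        if state == -1 and B > B_switch:
            state, flipped = +1, True   # flip on the upward sweep
        elif state == +1 and B < -B_switch:
            state, flipped = -1, True   # flip on the downward sweep
        resistance.append(1.0 if flipped else 0.0)
    return np.array(resistance)

up = np.linspace(-0.2, 0.2, 401)        # sweep the field up...
down = up[::-1]                         # ...and back down
R = sweep_resistance(np.concatenate([up, down]))
print("resistance spikes:", int(R.sum()))   # expect 2, one per polarity
```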
“One of a kind”
As counterintuitive as the discovery may seem, the team observed the same phenomenon in six similar samples. They suspect that the unique configuration of rhombohedral graphene is the key. The material has a very simple arrangement of carbon atoms. When it is cooled to ultracold temperatures, thermal fluctuations are minimized, allowing any electrons flowing through the material to slow down, sense each other, and interact.
Such quantum interactions can lead electrons to pair up and superconduct. These interactions can also encourage electrons to coordinate. Namely, electrons can collectively occupy one of two opposite momentum states, or “valleys”: electrons in one valley effectively spin in one direction, while those in the other valley spin the opposite way. In conventional superconductors, electrons can occupy either valley, and a pair is typically made from electrons of opposite valleys that cancel each other out. The pair overall, then, has zero momentum and does not spin.
In the team’s material structure, however, they suspect that the electrons interact such that they all share the same valley, or momentum state. When these electrons pair up, the superconducting pair has a “non-zero” momentum and a spin that, together with many other pairs, can amount to an internal, superconducting magnetism.
“You can think of the two electrons in a pair spinning clockwise, or counterclockwise, which corresponds to a magnet pointing up, or down,” Tonghang Han, a fifth-year student in the group, explains. “So we think this is the first observation of a superconductor that behaves as a magnet due to the electrons’ orbital motion, which is known as a chiral superconductor. It’s one of a kind. It is also a candidate for a topological superconductor, which could enable robust quantum computation.”
“Everything we’ve discovered in this material has been completely out of the blue,” says Zhengguang Lu, a former postdoc in the group and now an assistant professor at Florida State University. “But because this is a simple system, we think we have a good chance of understanding what is going on, and could demonstrate some very profound and deep physics principles.”
“It is truly remarkable that such an exotic chiral superconductor emerges from such simple ingredients,” adds Liang Fu, professor of physics at MIT. “Superconductivity in rhombohedral graphene will surely have a lot to offer.”
The part of the research carried out at MIT was supported by the U.S. Department of Energy and a MathWorks Fellowship. This research was carried out, in part, using facilities at MIT.nano.